00:00:00.001 Started by upstream project "autotest-per-patch" build number 130584 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.043 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:01:18.555 The recommended git tool is: git 00:01:18.555 using credential 00000000-0000-0000-0000-000000000002 00:01:18.556 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:01:18.568 Fetching changes from the remote Git repository 00:01:18.571 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:01:18.581 Using shallow fetch with depth 1 00:01:18.581 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:01:18.581 > git --version # timeout=10 00:01:18.592 > git --version # 'git version 2.39.2' 00:01:18.592 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:01:18.604 Setting http proxy: proxy-dmz.intel.com:911 00:01:18.604 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:01:37.123 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:01:37.136 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:01:37.148 Checking out Revision d37d6e8a0abef39b377a5f0531b43b2efbbebf34 (FETCH_HEAD) 00:01:37.148 > git config core.sparsecheckout # timeout=10 00:01:37.161 > git read-tree -mu HEAD # timeout=10 00:01:37.178 > git checkout -f d37d6e8a0abef39b377a5f0531b43b2efbbebf34 # timeout=5 00:01:37.201 Commit message: "pool: serialize build page context to json" 00:01:37.201 > git rev-list --no-walk d37d6e8a0abef39b377a5f0531b43b2efbbebf34 # timeout=10 00:01:37.358 [Pipeline] Start of Pipeline 00:01:37.375 [Pipeline] library 00:01:37.377 Loading library shm_lib@master 00:01:37.377 Library shm_lib@master is cached. Copying from home. 00:01:37.397 [Pipeline] node 00:01:52.401 Still waiting to schedule task 00:01:52.401 Waiting for next available executor on ‘vagrant-vm-host’ 00:14:14.993 Running on VM-host-SM38 in /var/jenkins/workspace/nvme-vg-autotest_2 00:14:14.995 [Pipeline] { 00:14:15.010 [Pipeline] catchError 00:14:15.013 [Pipeline] { 00:14:15.027 [Pipeline] wrap 00:14:15.035 [Pipeline] { 00:14:15.044 [Pipeline] stage 00:14:15.046 [Pipeline] { (Prologue) 00:14:15.067 [Pipeline] echo 00:14:15.069 Node: VM-host-SM38 00:14:15.076 [Pipeline] cleanWs 00:14:15.086 [WS-CLEANUP] Deleting project workspace... 00:14:15.086 [WS-CLEANUP] Deferred wipeout is used... 
00:14:15.091 [WS-CLEANUP] done 00:14:15.287 [Pipeline] setCustomBuildProperty 00:14:15.377 [Pipeline] httpRequest 00:14:15.966 [Pipeline] echo 00:14:15.968 Sorcerer 10.211.164.101 is alive 00:14:15.976 [Pipeline] retry 00:14:15.978 [Pipeline] { 00:14:15.998 [Pipeline] httpRequest 00:14:16.002 HttpMethod: GET 00:14:16.003 URL: http://10.211.164.101/packages/jbp_d37d6e8a0abef39b377a5f0531b43b2efbbebf34.tar.gz 00:14:16.003 Sending request to url: http://10.211.164.101/packages/jbp_d37d6e8a0abef39b377a5f0531b43b2efbbebf34.tar.gz 00:14:16.004 Response Code: HTTP/1.1 200 OK 00:14:16.004 Success: Status code 200 is in the accepted range: 200,404 00:14:16.005 Saving response body to /var/jenkins/workspace/nvme-vg-autotest_2/jbp_d37d6e8a0abef39b377a5f0531b43b2efbbebf34.tar.gz 00:14:16.151 [Pipeline] } 00:14:16.172 [Pipeline] // retry 00:14:16.179 [Pipeline] sh 00:14:16.459 + tar --no-same-owner -xf jbp_d37d6e8a0abef39b377a5f0531b43b2efbbebf34.tar.gz 00:14:16.472 [Pipeline] httpRequest 00:14:16.851 [Pipeline] echo 00:14:16.853 Sorcerer 10.211.164.101 is alive 00:14:16.862 [Pipeline] retry 00:14:16.864 [Pipeline] { 00:14:16.877 [Pipeline] httpRequest 00:14:16.881 HttpMethod: GET 00:14:16.882 URL: http://10.211.164.101/packages/spdk_0c2005fb5b168f1451c5df0ec9b1753f607ca3e9.tar.gz 00:14:16.882 Sending request to url: http://10.211.164.101/packages/spdk_0c2005fb5b168f1451c5df0ec9b1753f607ca3e9.tar.gz 00:14:16.883 Response Code: HTTP/1.1 200 OK 00:14:16.884 Success: Status code 200 is in the accepted range: 200,404 00:14:16.885 Saving response body to /var/jenkins/workspace/nvme-vg-autotest_2/spdk_0c2005fb5b168f1451c5df0ec9b1753f607ca3e9.tar.gz 00:14:21.223 [Pipeline] } 00:14:21.240 [Pipeline] // retry 00:14:21.248 [Pipeline] sh 00:14:21.527 + tar --no-same-owner -xf spdk_0c2005fb5b168f1451c5df0ec9b1753f607ca3e9.tar.gz 00:14:24.814 [Pipeline] sh 00:14:25.094 + git -C spdk log --oneline -n5 00:14:25.094 0c2005fb5 bdev: Add spdk_bdev_io_submit API 00:14:25.094 c1ceb4a6c bdev: Add spdk_bdev_io_to_ctx 00:14:25.094 79efc318f bdev: explicitly inline bdev_channel_get_io() 00:14:25.094 e9b861378 lib/iscsi: Fix: Unregister logout timer 00:14:25.094 081f43f2b lib/nvmf: Fix memory leak in nvmf_bdev_ctrlr_unmap 00:14:25.112 [Pipeline] writeFile 00:14:25.127 [Pipeline] sh 00:14:25.406 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:14:25.417 [Pipeline] sh 00:14:25.695 + cat autorun-spdk.conf 00:14:25.695 SPDK_RUN_FUNCTIONAL_TEST=1 00:14:25.695 SPDK_TEST_NVME=1 00:14:25.695 SPDK_TEST_FTL=1 00:14:25.695 SPDK_TEST_ISAL=1 00:14:25.695 SPDK_RUN_ASAN=1 00:14:25.695 SPDK_RUN_UBSAN=1 00:14:25.695 SPDK_TEST_XNVME=1 00:14:25.695 SPDK_TEST_NVME_FDP=1 00:14:25.695 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:14:25.701 RUN_NIGHTLY=0 00:14:25.702 [Pipeline] } 00:14:25.715 [Pipeline] // stage 00:14:25.728 [Pipeline] stage 00:14:25.729 [Pipeline] { (Run VM) 00:14:25.739 [Pipeline] sh 00:14:26.018 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:14:26.018 + echo 'Start stage prepare_nvme.sh' 00:14:26.018 Start stage prepare_nvme.sh 00:14:26.018 + [[ -n 5 ]] 00:14:26.018 + disk_prefix=ex5 00:14:26.018 + [[ -n /var/jenkins/workspace/nvme-vg-autotest_2 ]] 00:14:26.019 + [[ -e /var/jenkins/workspace/nvme-vg-autotest_2/autorun-spdk.conf ]] 00:14:26.019 + source /var/jenkins/workspace/nvme-vg-autotest_2/autorun-spdk.conf 00:14:26.019 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:14:26.019 ++ SPDK_TEST_NVME=1 00:14:26.019 ++ SPDK_TEST_FTL=1 00:14:26.019 ++ SPDK_TEST_ISAL=1 00:14:26.019 ++ SPDK_RUN_ASAN=1 00:14:26.019 ++ 
SPDK_RUN_UBSAN=1 00:14:26.019 ++ SPDK_TEST_XNVME=1 00:14:26.019 ++ SPDK_TEST_NVME_FDP=1 00:14:26.019 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:14:26.019 ++ RUN_NIGHTLY=0 00:14:26.019 + cd /var/jenkins/workspace/nvme-vg-autotest_2 00:14:26.019 + nvme_files=() 00:14:26.019 + declare -A nvme_files 00:14:26.019 + backend_dir=/var/lib/libvirt/images/backends 00:14:26.019 + nvme_files['nvme.img']=5G 00:14:26.019 + nvme_files['nvme-cmb.img']=5G 00:14:26.019 + nvme_files['nvme-multi0.img']=4G 00:14:26.019 + nvme_files['nvme-multi1.img']=4G 00:14:26.019 + nvme_files['nvme-multi2.img']=4G 00:14:26.019 + nvme_files['nvme-openstack.img']=8G 00:14:26.019 + nvme_files['nvme-zns.img']=5G 00:14:26.019 + (( SPDK_TEST_NVME_PMR == 1 )) 00:14:26.019 + (( SPDK_TEST_FTL == 1 )) 00:14:26.019 + nvme_files["nvme-ftl.img"]=6G 00:14:26.019 + (( SPDK_TEST_NVME_FDP == 1 )) 00:14:26.019 + nvme_files["nvme-fdp.img"]=1G 00:14:26.019 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:14:26.019 + for nvme in "${!nvme_files[@]}" 00:14:26.019 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi2.img -s 4G 00:14:26.019 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:14:26.019 + for nvme in "${!nvme_files[@]}" 00:14:26.019 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-ftl.img -s 6G 00:14:26.019 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc 00:14:26.019 + for nvme in "${!nvme_files[@]}" 00:14:26.019 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-cmb.img -s 5G 00:14:26.019 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:14:26.019 + for nvme in "${!nvme_files[@]}" 00:14:26.019 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-openstack.img -s 8G 00:14:26.276 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:14:26.276 + for nvme in "${!nvme_files[@]}" 00:14:26.276 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-zns.img -s 5G 00:14:26.854 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:14:26.854 + for nvme in "${!nvme_files[@]}" 00:14:26.854 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi1.img -s 4G 00:14:26.854 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:14:26.854 + for nvme in "${!nvme_files[@]}" 00:14:26.854 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi0.img -s 4G 00:14:26.854 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:14:26.854 + for nvme in "${!nvme_files[@]}" 00:14:26.854 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-fdp.img -s 1G 00:14:26.854 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc 00:14:26.854 + for nvme in "${!nvme_files[@]}" 00:14:26.854 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme.img -s 5G 00:14:27.420 Formatting 
'/var/lib/libvirt/images/backends/ex5-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:14:27.420 ++ sudo grep -rl ex5-nvme.img /etc/libvirt/qemu 00:14:27.420 + echo 'End stage prepare_nvme.sh' 00:14:27.420 End stage prepare_nvme.sh 00:14:27.430 [Pipeline] sh 00:14:27.707 + DISTRO=fedora39 00:14:27.707 + CPUS=10 00:14:27.707 + RAM=12288 00:14:27.707 + jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:14:27.707 Setup: -n 10 -s 12288 -x -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex5-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex5-nvme.img -b /var/lib/libvirt/images/backends/ex5-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex5-nvme-multi1.img:/var/lib/libvirt/images/backends/ex5-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex5-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39 00:14:27.707 00:14:27.707 DIR=/var/jenkins/workspace/nvme-vg-autotest_2/spdk/scripts/vagrant 00:14:27.707 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest_2/spdk 00:14:27.707 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest_2 00:14:27.707 HELP=0 00:14:27.707 DRY_RUN=0 00:14:27.707 NVME_FILE=/var/lib/libvirt/images/backends/ex5-nvme-ftl.img,/var/lib/libvirt/images/backends/ex5-nvme.img,/var/lib/libvirt/images/backends/ex5-nvme-multi0.img,/var/lib/libvirt/images/backends/ex5-nvme-fdp.img, 00:14:27.707 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme, 00:14:27.707 NVME_AUTO_CREATE=0 00:14:27.707 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex5-nvme-multi1.img:/var/lib/libvirt/images/backends/ex5-nvme-multi2.img,, 00:14:27.707 NVME_CMB=,,,, 00:14:27.707 NVME_PMR=,,,, 00:14:27.707 NVME_ZNS=,,,, 00:14:27.707 NVME_MS=true,,,, 00:14:27.707 NVME_FDP=,,,on, 00:14:27.707 SPDK_VAGRANT_DISTRO=fedora39 00:14:27.707 SPDK_VAGRANT_VMCPU=10 00:14:27.707 SPDK_VAGRANT_VMRAM=12288 00:14:27.707 SPDK_VAGRANT_PROVIDER=libvirt 00:14:27.707 SPDK_VAGRANT_HTTP_PROXY= 00:14:27.707 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:14:27.707 SPDK_OPENSTACK_NETWORK=0 00:14:27.707 VAGRANT_PACKAGE_BOX=0 00:14:27.707 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest_2/spdk/scripts/vagrant/Vagrantfile 00:14:27.707 FORCE_DISTRO=true 00:14:27.707 VAGRANT_BOX_VERSION= 00:14:27.707 EXTRA_VAGRANTFILES= 00:14:27.707 NIC_MODEL=e1000 00:14:27.707 00:14:27.707 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest_2/fedora39-libvirt' 00:14:27.707 /var/jenkins/workspace/nvme-vg-autotest_2/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest_2 00:14:30.235 Bringing machine 'default' up with 'libvirt' provider... 00:14:30.876 ==> default: Creating image (snapshot of base box volume). 00:14:30.876 ==> default: Creating domain with the following settings... 
00:14:30.876 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1727813545_e9278f7c87d4d1dfee82 00:14:30.876 ==> default: -- Domain type: kvm 00:14:30.876 ==> default: -- Cpus: 10 00:14:30.876 ==> default: -- Feature: acpi 00:14:30.876 ==> default: -- Feature: apic 00:14:30.876 ==> default: -- Feature: pae 00:14:30.876 ==> default: -- Memory: 12288M 00:14:30.876 ==> default: -- Memory Backing: hugepages: 00:14:30.876 ==> default: -- Management MAC: 00:14:30.876 ==> default: -- Loader: 00:14:30.876 ==> default: -- Nvram: 00:14:30.876 ==> default: -- Base box: spdk/fedora39 00:14:30.876 ==> default: -- Storage pool: default 00:14:30.876 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1727813545_e9278f7c87d4d1dfee82.img (20G) 00:14:30.876 ==> default: -- Volume Cache: default 00:14:30.876 ==> default: -- Kernel: 00:14:30.876 ==> default: -- Initrd: 00:14:30.876 ==> default: -- Graphics Type: vnc 00:14:30.876 ==> default: -- Graphics Port: -1 00:14:30.876 ==> default: -- Graphics IP: 127.0.0.1 00:14:30.876 ==> default: -- Graphics Password: Not defined 00:14:30.876 ==> default: -- Video Type: cirrus 00:14:30.876 ==> default: -- Video VRAM: 9216 00:14:30.876 ==> default: -- Sound Type: 00:14:30.876 ==> default: -- Keymap: en-us 00:14:30.876 ==> default: -- TPM Path: 00:14:30.876 ==> default: -- INPUT: type=mouse, bus=ps2 00:14:30.876 ==> default: -- Command line args: 00:14:30.876 ==> default: -> value=-device, 00:14:30.876 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:14:30.876 ==> default: -> value=-drive, 00:14:30.876 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-ftl.img,if=none,id=nvme-0-drive0, 00:14:30.876 ==> default: -> value=-device, 00:14:30.876 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64, 00:14:30.876 ==> default: -> value=-device, 00:14:30.876 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:14:30.876 ==> default: -> value=-drive, 00:14:30.876 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme.img,if=none,id=nvme-1-drive0, 00:14:30.876 ==> default: -> value=-device, 00:14:30.876 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:14:30.876 ==> default: -> value=-device, 00:14:30.876 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12, 00:14:30.876 ==> default: -> value=-drive, 00:14:30.876 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi0.img,if=none,id=nvme-2-drive0, 00:14:30.876 ==> default: -> value=-device, 00:14:30.876 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:14:30.876 ==> default: -> value=-drive, 00:14:30.876 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi1.img,if=none,id=nvme-2-drive1, 00:14:30.876 ==> default: -> value=-device, 00:14:30.876 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:14:30.876 ==> default: -> value=-drive, 00:14:30.876 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi2.img,if=none,id=nvme-2-drive2, 00:14:30.876 ==> default: -> value=-device, 00:14:30.876 ==> default: -> 
value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:14:30.876 ==> default: -> value=-device, 00:14:30.876 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8, 00:14:30.876 ==> default: -> value=-device, 00:14:30.876 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3, 00:14:30.876 ==> default: -> value=-drive, 00:14:30.876 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-fdp.img,if=none,id=nvme-3-drive0, 00:14:30.876 ==> default: -> value=-device, 00:14:30.876 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:14:30.876 ==> default: Creating shared folders metadata... 00:14:30.876 ==> default: Starting domain. 00:14:32.252 ==> default: Waiting for domain to get an IP address... 00:14:47.131 ==> default: Waiting for SSH to become available... 00:14:47.131 ==> default: Configuring and enabling network interfaces... 00:14:49.656 default: SSH address: 192.168.121.97:22 00:14:49.656 default: SSH username: vagrant 00:14:49.656 default: SSH auth method: private key 00:14:51.551 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest_2/spdk/ => /home/vagrant/spdk_repo/spdk 00:14:58.198 ==> default: Mounting SSHFS shared folder... 00:14:59.133 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest_2/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:14:59.133 ==> default: Checking Mount.. 00:15:00.069 ==> default: Folder Successfully Mounted! 00:15:00.069 00:15:00.069 SUCCESS! 00:15:00.069 00:15:00.069 cd to /var/jenkins/workspace/nvme-vg-autotest_2/fedora39-libvirt and type "vagrant ssh" to use. 00:15:00.069 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:15:00.069 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest_2/fedora39-libvirt" to destroy all trace of vm. 00:15:00.069 00:15:00.077 [Pipeline] } 00:15:00.094 [Pipeline] // stage 00:15:00.104 [Pipeline] dir 00:15:00.104 Running in /var/jenkins/workspace/nvme-vg-autotest_2/fedora39-libvirt 00:15:00.106 [Pipeline] { 00:15:00.119 [Pipeline] catchError 00:15:00.121 [Pipeline] { 00:15:00.135 [Pipeline] sh 00:15:00.413 + vagrant ssh-config --host vagrant 00:15:00.413 + tee ssh_conf 00:15:00.413 + sed -ne '/^Host/,$p' 00:15:03.001 Host vagrant 00:15:03.001 HostName 192.168.121.97 00:15:03.002 User vagrant 00:15:03.002 Port 22 00:15:03.002 UserKnownHostsFile /dev/null 00:15:03.002 StrictHostKeyChecking no 00:15:03.002 PasswordAuthentication no 00:15:03.002 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:15:03.002 IdentitiesOnly yes 00:15:03.002 LogLevel FATAL 00:15:03.002 ForwardAgent yes 00:15:03.002 ForwardX11 yes 00:15:03.002 00:15:03.014 [Pipeline] withEnv 00:15:03.016 [Pipeline] { 00:15:03.030 [Pipeline] sh 00:15:03.307 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant '#!/bin/bash 00:15:03.307 source /etc/os-release 00:15:03.307 [[ -e /image.version ]] && img=$(< /image.version) 00:15:03.307 # Minimal, systemd-like check. 
00:15:03.307 if [[ -e /.dockerenv ]]; then 00:15:03.307 # Clear garbage from the node'\''s name: 00:15:03.307 # agt-er_autotest_547-896 -> autotest_547-896 00:15:03.307 # $HOSTNAME is the actual container id 00:15:03.307 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:15:03.307 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:15:03.307 # We can assume this is a mount from a host where container is running, 00:15:03.308 # so fetch its hostname to easily identify the target swarm worker. 00:15:03.308 container="$(< /etc/hostname) ($agent)" 00:15:03.308 else 00:15:03.308 # Fallback 00:15:03.308 container=$agent 00:15:03.308 fi 00:15:03.308 fi 00:15:03.308 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:15:03.308 ' 00:15:03.317 [Pipeline] } 00:15:03.334 [Pipeline] // withEnv 00:15:03.342 [Pipeline] setCustomBuildProperty 00:15:03.358 [Pipeline] stage 00:15:03.360 [Pipeline] { (Tests) 00:15:03.377 [Pipeline] sh 00:15:03.655 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:15:03.667 [Pipeline] sh 00:15:03.943 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:15:03.955 [Pipeline] timeout 00:15:03.955 Timeout set to expire in 50 min 00:15:03.957 [Pipeline] { 00:15:03.970 [Pipeline] sh 00:15:04.259 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'git -C spdk_repo/spdk reset --hard' 00:15:04.516 HEAD is now at 0c2005fb5 bdev: Add spdk_bdev_io_submit API 00:15:04.785 [Pipeline] sh 00:15:05.063 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'sudo chown vagrant:vagrant spdk_repo' 00:15:05.078 [Pipeline] sh 00:15:05.363 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest_2/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:15:05.437 [Pipeline] sh 00:15:05.715 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo' 00:15:05.715 ++ readlink -f spdk_repo 00:15:05.715 + DIR_ROOT=/home/vagrant/spdk_repo 00:15:05.715 + [[ -n /home/vagrant/spdk_repo ]] 00:15:05.715 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:15:05.715 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:15:05.715 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:15:05.715 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:15:05.715 + [[ -d /home/vagrant/spdk_repo/output ]] 00:15:05.715 + [[ nvme-vg-autotest == pkgdep-* ]] 00:15:05.715 + cd /home/vagrant/spdk_repo 00:15:05.715 + source /etc/os-release 00:15:05.715 ++ NAME='Fedora Linux' 00:15:05.715 ++ VERSION='39 (Cloud Edition)' 00:15:05.715 ++ ID=fedora 00:15:05.715 ++ VERSION_ID=39 00:15:05.715 ++ VERSION_CODENAME= 00:15:05.715 ++ PLATFORM_ID=platform:f39 00:15:05.715 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:15:05.715 ++ ANSI_COLOR='0;38;2;60;110;180' 00:15:05.715 ++ LOGO=fedora-logo-icon 00:15:05.715 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:15:05.715 ++ HOME_URL=https://fedoraproject.org/ 00:15:05.715 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:15:05.715 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:15:05.715 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:15:05.715 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:15:05.715 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:15:05.715 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:15:05.715 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:15:05.715 ++ SUPPORT_END=2024-11-12 00:15:05.715 ++ VARIANT='Cloud Edition' 00:15:05.715 ++ VARIANT_ID=cloud 00:15:05.715 + uname -a 00:15:05.715 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:15:05.715 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:15:06.280 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:06.280 Hugepages 00:15:06.280 node hugesize free / total 00:15:06.280 node0 1048576kB 0 / 0 00:15:06.280 node0 2048kB 0 / 0 00:15:06.280 00:15:06.280 Type BDF Vendor Device NUMA Driver Device Block devices 00:15:06.280 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:15:06.280 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:15:06.539 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:15:06.539 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:15:06.539 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:15:06.539 + rm -f /tmp/spdk-ld-path 00:15:06.539 + source autorun-spdk.conf 00:15:06.539 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:15:06.539 ++ SPDK_TEST_NVME=1 00:15:06.539 ++ SPDK_TEST_FTL=1 00:15:06.539 ++ SPDK_TEST_ISAL=1 00:15:06.539 ++ SPDK_RUN_ASAN=1 00:15:06.539 ++ SPDK_RUN_UBSAN=1 00:15:06.539 ++ SPDK_TEST_XNVME=1 00:15:06.539 ++ SPDK_TEST_NVME_FDP=1 00:15:06.539 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:15:06.539 ++ RUN_NIGHTLY=0 00:15:06.539 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:15:06.539 + [[ -n '' ]] 00:15:06.539 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:15:06.539 + for M in /var/spdk/build-*-manifest.txt 00:15:06.539 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:15:06.539 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:15:06.539 + for M in /var/spdk/build-*-manifest.txt 00:15:06.539 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:15:06.539 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:15:06.539 + for M in /var/spdk/build-*-manifest.txt 00:15:06.539 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:15:06.539 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:15:06.539 ++ uname 00:15:06.539 + [[ Linux == \L\i\n\u\x ]] 00:15:06.539 + sudo dmesg -T 00:15:06.539 + sudo dmesg --clear 00:15:06.539 + dmesg_pid=5027 00:15:06.539 
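A note on the pattern in the xtrace above: autorun-spdk.conf is a plain KEY=VALUE file that the runner simply source's (once on the host in prepare_nvme.sh, and again here inside the VM), and each SPDK_TEST_* flag then gates an optional step behind an arithmetic test such as (( SPDK_TEST_FTL == 1 )). A minimal sketch of that pattern follows — not the CI scripts themselves; prepare_extra_image is a hypothetical stand-in for the gated work (in the real job: creating the optional nvme-ftl.img / nvme-fdp.img backends).

#!/usr/bin/env bash
# Sketch of the flag-gating pattern seen in the trace above.

conf=${1:-autorun-spdk.conf}
[[ -e $conf ]] && source "$conf"        # e.g. sets SPDK_TEST_FTL=1

prepare_extra_image() { echo "would create $1"; }   # hypothetical helper

# ${VAR:-0} treats a missing key as "disabled" instead of failing under set -u.
if (( ${SPDK_TEST_FTL:-0} == 1 )); then
    prepare_extra_image nvme-ftl.img
fi
if (( ${SPDK_TEST_NVME_FDP:-0} == 1 )); then
    prepare_extra_image nvme-fdp.img
fi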
+ [[ Fedora Linux == FreeBSD ]] 00:15:06.539 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:15:06.539 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:15:06.539 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:15:06.539 + [[ -x /usr/src/fio-static/fio ]] 00:15:06.539 + sudo dmesg -Tw 00:15:06.539 + export FIO_BIN=/usr/src/fio-static/fio 00:15:06.539 + FIO_BIN=/usr/src/fio-static/fio 00:15:06.539 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:15:06.539 + [[ ! -v VFIO_QEMU_BIN ]] 00:15:06.539 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:15:06.539 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:15:06.539 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:15:06.539 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:15:06.539 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:15:06.539 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:15:06.539 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:15:06.539 Test configuration: 00:15:06.539 SPDK_RUN_FUNCTIONAL_TEST=1 00:15:06.539 SPDK_TEST_NVME=1 00:15:06.539 SPDK_TEST_FTL=1 00:15:06.539 SPDK_TEST_ISAL=1 00:15:06.539 SPDK_RUN_ASAN=1 00:15:06.539 SPDK_RUN_UBSAN=1 00:15:06.539 SPDK_TEST_XNVME=1 00:15:06.539 SPDK_TEST_NVME_FDP=1 00:15:06.539 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:15:06.539 RUN_NIGHTLY=0 20:13:01 -- common/autotest_common.sh@1680 -- $ [[ n == y ]] 00:15:06.539 20:13:01 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:06.539 20:13:01 -- scripts/common.sh@15 -- $ shopt -s extglob 00:15:06.539 20:13:01 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:15:06.539 20:13:01 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:06.539 20:13:01 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:06.540 20:13:01 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:06.540 20:13:01 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:06.540 20:13:01 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:06.540 20:13:01 -- paths/export.sh@5 -- $ export PATH 00:15:06.540 20:13:01 -- paths/export.sh@6 -- $ echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:06.540 20:13:01 -- common/autobuild_common.sh@478 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:15:06.540 20:13:01 -- common/autobuild_common.sh@479 -- $ date +%s 00:15:06.540 20:13:01 -- common/autobuild_common.sh@479 -- $ mktemp -dt spdk_1727813581.XXXXXX 00:15:06.540 20:13:01 -- common/autobuild_common.sh@479 -- $ SPDK_WORKSPACE=/tmp/spdk_1727813581.4xNdQP 00:15:06.540 20:13:01 -- common/autobuild_common.sh@481 -- $ [[ -n '' ]] 00:15:06.540 20:13:01 -- common/autobuild_common.sh@485 -- $ '[' -n '' ']' 00:15:06.540 20:13:01 -- common/autobuild_common.sh@488 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:15:06.540 20:13:01 -- common/autobuild_common.sh@492 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:15:06.540 20:13:01 -- common/autobuild_common.sh@494 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:15:06.540 20:13:01 -- common/autobuild_common.sh@495 -- $ get_config_params 00:15:06.540 20:13:01 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:15:06.540 20:13:01 -- common/autotest_common.sh@10 -- $ set +x 00:15:06.540 20:13:01 -- common/autobuild_common.sh@495 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme' 00:15:06.540 20:13:01 -- common/autobuild_common.sh@497 -- $ start_monitor_resources 00:15:06.540 20:13:01 -- pm/common@17 -- $ local monitor 00:15:06.540 20:13:01 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:15:06.540 20:13:01 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:15:06.540 20:13:01 -- pm/common@25 -- $ sleep 1 00:15:06.540 20:13:01 -- pm/common@21 -- $ date +%s 00:15:06.540 20:13:01 -- pm/common@21 -- $ date +%s 00:15:06.540 20:13:01 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1727813581 00:15:06.540 20:13:01 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1727813581 00:15:06.798 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1727813581_collect-cpu-load.pm.log 00:15:06.798 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1727813581_collect-vmstat.pm.log 00:15:07.730 20:13:02 -- common/autobuild_common.sh@498 -- $ trap stop_monitor_resources EXIT 00:15:07.730 20:13:02 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:15:07.730 20:13:02 -- spdk/autobuild.sh@12 -- $ umask 022 00:15:07.730 20:13:02 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:15:07.730 20:13:02 -- spdk/autobuild.sh@16 -- $ date -u 00:15:07.730 Tue Oct 1 08:13:02 PM UTC 2024 00:15:07.730 20:13:02 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:15:07.730 v25.01-pre-26-g0c2005fb5 
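The sanitizer checks that follow are each wrapped in run_test, which produces the START TEST / END TEST banners and per-test timing visible below. A rough sketch of such a wrapper, reconstructed from the banners in this log rather than copied from SPDK's autotest_common.sh (the real helper also handles xtrace state and result bookkeeping):

# run_test-style wrapper: banner, time the wrapped command, banner again.
run_test() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"                 # run the wrapped command
    local rc=$?
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return $rc
}

run_test asan echo 'using asan'     # as invoked below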
00:15:07.730 20:13:02 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:15:07.731 20:13:02 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:15:07.731 20:13:02 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:15:07.731 20:13:02 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:15:07.731 20:13:02 -- common/autotest_common.sh@10 -- $ set +x 00:15:07.731 ************************************ 00:15:07.731 START TEST asan 00:15:07.731 ************************************ 00:15:07.731 using asan 00:15:07.731 20:13:02 asan -- common/autotest_common.sh@1125 -- $ echo 'using asan' 00:15:07.731 00:15:07.731 real 0m0.000s 00:15:07.731 user 0m0.000s 00:15:07.731 sys 0m0.000s 00:15:07.731 20:13:02 asan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:15:07.731 20:13:02 asan -- common/autotest_common.sh@10 -- $ set +x 00:15:07.731 ************************************ 00:15:07.731 END TEST asan 00:15:07.731 ************************************ 00:15:07.731 20:13:02 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:15:07.731 20:13:02 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:15:07.731 20:13:02 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:15:07.731 20:13:02 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:15:07.731 20:13:02 -- common/autotest_common.sh@10 -- $ set +x 00:15:07.731 ************************************ 00:15:07.731 START TEST ubsan 00:15:07.731 ************************************ 00:15:07.731 using ubsan 00:15:07.731 20:13:02 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan' 00:15:07.731 00:15:07.731 real 0m0.000s 00:15:07.731 user 0m0.000s 00:15:07.731 sys 0m0.000s 00:15:07.731 20:13:02 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:15:07.731 ************************************ 00:15:07.731 END TEST ubsan 00:15:07.731 ************************************ 00:15:07.731 20:13:02 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:15:07.731 20:13:02 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:15:07.731 20:13:02 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:15:07.731 20:13:02 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:15:07.731 20:13:02 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:15:07.731 20:13:02 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:15:07.731 20:13:02 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:15:07.731 20:13:02 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:15:07.731 20:13:02 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:15:07.731 20:13:02 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme --with-shared 00:15:07.731 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:15:07.731 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:15:07.989 Using 'verbs' RDMA provider 00:15:19.091 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:15:29.060 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:15:29.060 Creating mk/config.mk...done. 00:15:29.060 Creating mk/cc.flags.mk...done. 00:15:29.060 Type 'make' to build. 
00:15:29.060 20:13:23 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:15:29.060 20:13:23 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:15:29.060 20:13:23 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:15:29.060 20:13:23 -- common/autotest_common.sh@10 -- $ set +x 00:15:29.060 ************************************ 00:15:29.060 START TEST make 00:15:29.060 ************************************ 00:15:29.060 20:13:23 make -- common/autotest_common.sh@1125 -- $ make -j10 00:15:29.060 (cd /home/vagrant/spdk_repo/spdk/xnvme && \ 00:15:29.060 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \ 00:15:29.060 meson setup builddir \ 00:15:29.060 -Dwith-libaio=enabled \ 00:15:29.060 -Dwith-liburing=enabled \ 00:15:29.060 -Dwith-libvfn=disabled \ 00:15:29.060 -Dwith-spdk=false && \ 00:15:29.060 meson compile -C builddir && \ 00:15:29.060 cd -) 00:15:29.060 make[1]: Nothing to be done for 'all'. 00:15:30.967 The Meson build system 00:15:30.967 Version: 1.5.0 00:15:30.967 Source dir: /home/vagrant/spdk_repo/spdk/xnvme 00:15:30.967 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:15:30.967 Build type: native build 00:15:30.967 Project name: xnvme 00:15:30.967 Project version: 0.7.3 00:15:30.967 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:15:30.967 C linker for the host machine: cc ld.bfd 2.40-14 00:15:30.967 Host machine cpu family: x86_64 00:15:30.967 Host machine cpu: x86_64 00:15:30.967 Message: host_machine.system: linux 00:15:30.967 Compiler for C supports arguments -Wno-missing-braces: YES 00:15:30.967 Compiler for C supports arguments -Wno-cast-function-type: YES 00:15:30.967 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:15:30.967 Run-time dependency threads found: YES 00:15:30.967 Has header "setupapi.h" : NO 00:15:30.967 Has header "linux/blkzoned.h" : YES 00:15:30.967 Has header "linux/blkzoned.h" : YES (cached) 00:15:30.967 Has header "libaio.h" : YES 00:15:30.967 Library aio found: YES 00:15:30.967 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:15:30.967 Run-time dependency liburing found: YES 2.2 00:15:30.967 Dependency libvfn skipped: feature with-libvfn disabled 00:15:30.967 Run-time dependency appleframeworks found: NO (tried framework) 00:15:30.967 Run-time dependency appleframeworks found: NO (tried framework) 00:15:30.967 Configuring xnvme_config.h using configuration 00:15:30.967 Configuring xnvme.spec using configuration 00:15:30.967 Run-time dependency bash-completion found: YES 2.11 00:15:30.967 Message: Bash-completions: /usr/share/bash-completion/completions 00:15:30.967 Program cp found: YES (/usr/bin/cp) 00:15:30.967 Has header "winsock2.h" : NO 00:15:30.967 Has header "dbghelp.h" : NO 00:15:30.967 Library rpcrt4 found: NO 00:15:30.967 Library rt found: YES 00:15:30.967 Checking for function "clock_gettime" with dependency -lrt: YES 00:15:30.967 Found CMake: /usr/bin/cmake (3.27.7) 00:15:30.967 Run-time dependency _spdk found: NO (tried pkgconfig and cmake) 00:15:30.967 Run-time dependency wpdk found: NO (tried pkgconfig and cmake) 00:15:30.967 Run-time dependency spdk-win found: NO (tried pkgconfig and cmake) 00:15:30.967 Build targets in project: 32 00:15:30.967 00:15:30.967 xnvme 0.7.3 00:15:30.967 00:15:30.967 User defined options 00:15:30.967 with-libaio : enabled 00:15:30.967 with-liburing: enabled 00:15:30.967 with-libvfn : disabled 00:15:30.967 with-spdk : false 00:15:30.967 00:15:30.967 Found 
ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:15:31.223 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir' 00:15:31.223 [1/203] Generating toolbox/xnvme-driver-script with a custom command 00:15:31.223 [2/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_nil.c.o 00:15:31.223 [3/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd.c.o 00:15:31.223 [4/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_mem_posix.c.o 00:15:31.223 [5/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_admin_shim.c.o 00:15:31.223 [6/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_dev.c.o 00:15:31.223 [7/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_async.c.o 00:15:31.223 [8/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_nvme.c.o 00:15:31.223 [9/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_posix.c.o 00:15:31.223 [10/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_sync_psync.c.o 00:15:31.223 [11/203] Compiling C object lib/libxnvme.so.p/xnvme_adm.c.o 00:15:31.223 [12/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_emu.c.o 00:15:31.223 [13/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux.c.o 00:15:31.223 [14/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos.c.o 00:15:31.223 [15/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_thrpool.c.o 00:15:31.223 [16/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_admin.c.o 00:15:31.480 [17/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_dev.c.o 00:15:31.480 [18/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_hugepage.c.o 00:15:31.480 [19/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_libaio.c.o 00:15:31.480 [20/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_ucmd.c.o 00:15:31.480 [21/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_sync.c.o 00:15:31.480 [22/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk.c.o 00:15:31.480 [23/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_nvme.c.o 00:15:31.480 [24/203] Compiling C object lib/libxnvme.so.p/xnvme_be.c.o 00:15:31.480 [25/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_dev.c.o 00:15:31.480 [26/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_admin.c.o 00:15:31.480 [27/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_liburing.c.o 00:15:31.480 [28/203] Compiling C object lib/libxnvme.so.p/xnvme_be_nosys.c.o 00:15:31.480 [29/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk.c.o 00:15:31.480 [30/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_async.c.o 00:15:31.480 [31/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_admin.c.o 00:15:31.480 [32/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_block.c.o 00:15:31.480 [33/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_sync.c.o 00:15:31.480 [34/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_mem.c.o 00:15:31.480 [35/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_dev.c.o 00:15:31.480 [36/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_sync.c.o 00:15:31.480 [37/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_dev.c.o 00:15:31.480 [38/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_admin.c.o 00:15:31.480 [39/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio.c.o 00:15:31.480 [40/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_mem.c.o 00:15:31.480 [41/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_async.c.o 00:15:31.480 
[42/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_dev.c.o 00:15:31.480 [43/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_iocp.c.o 00:15:31.480 [44/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_ioring.c.o 00:15:31.480 [45/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_block.c.o 00:15:31.480 [46/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_iocp_th.c.o 00:15:31.480 [47/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_sync.c.o 00:15:31.480 [48/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_mem.c.o 00:15:31.480 [49/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_dev.c.o 00:15:31.480 [50/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_fs.c.o 00:15:31.480 [51/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_nvme.c.o 00:15:31.480 [52/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows.c.o 00:15:31.480 [53/203] Compiling C object lib/libxnvme.so.p/xnvme_libconf_entries.c.o 00:15:31.736 [54/203] Compiling C object lib/libxnvme.so.p/xnvme_libconf.c.o 00:15:31.736 [55/203] Compiling C object lib/libxnvme.so.p/xnvme_cmd.c.o 00:15:31.736 [56/203] Compiling C object lib/libxnvme.so.p/xnvme_file.c.o 00:15:31.736 [57/203] Compiling C object lib/libxnvme.so.p/xnvme_kvs.c.o 00:15:31.736 [58/203] Compiling C object lib/libxnvme.so.p/xnvme_req.c.o 00:15:31.736 [59/203] Compiling C object lib/libxnvme.so.p/xnvme_geo.c.o 00:15:31.736 [60/203] Compiling C object lib/libxnvme.so.p/xnvme_ident.c.o 00:15:31.736 [61/203] Compiling C object lib/libxnvme.so.p/xnvme_lba.c.o 00:15:31.736 [62/203] Compiling C object lib/libxnvme.so.p/xnvme_opts.c.o 00:15:31.736 [63/203] Compiling C object lib/libxnvme.so.p/xnvme_nvm.c.o 00:15:31.736 [64/203] Compiling C object lib/libxnvme.so.p/xnvme_dev.c.o 00:15:31.736 [65/203] Compiling C object lib/libxnvme.so.p/xnvme_queue.c.o 00:15:31.736 [66/203] Compiling C object lib/libxnvme.so.p/xnvme_topology.c.o 00:15:31.736 [67/203] Compiling C object lib/libxnvme.so.p/xnvme_ver.c.o 00:15:31.736 [68/203] Compiling C object lib/libxnvme.so.p/xnvme_buf.c.o 00:15:31.736 [69/203] Compiling C object lib/libxnvme.a.p/xnvme_adm.c.o 00:15:31.736 [70/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_admin_shim.c.o 00:15:31.736 [71/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_mem_posix.c.o 00:15:31.736 [72/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_nil.c.o 00:15:31.736 [73/203] Compiling C object lib/libxnvme.so.p/xnvme_spec_pp.c.o 00:15:31.736 [74/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd.c.o 00:15:31.736 [75/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_thrpool.c.o 00:15:31.736 [76/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_emu.c.o 00:15:31.736 [77/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_dev.c.o 00:15:31.993 [78/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_nvme.c.o 00:15:31.993 [79/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_sync_psync.c.o 00:15:31.993 [80/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_async.c.o 00:15:31.993 [81/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_posix.c.o 00:15:31.993 [82/203] Compiling C object lib/libxnvme.so.p/xnvme_znd.c.o 00:15:31.993 [83/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux.c.o 00:15:31.993 [84/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_dev.c.o 00:15:31.993 [85/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos.c.o 00:15:31.993 [86/203] Compiling C object 
lib/libxnvme.a.p/xnvme_be_macos_admin.c.o 00:15:31.993 [87/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_libaio.c.o 00:15:31.993 [88/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_hugepage.c.o 00:15:31.993 [89/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_dev.c.o 00:15:31.993 [90/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_sync.c.o 00:15:31.993 [91/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_ucmd.c.o 00:15:31.993 [92/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_nvme.c.o 00:15:31.993 [93/203] Compiling C object lib/libxnvme.so.p/xnvme_cli.c.o 00:15:31.993 [94/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_liburing.c.o 00:15:31.993 [95/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk.c.o 00:15:31.993 [96/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_async.c.o 00:15:31.993 [97/203] Compiling C object lib/libxnvme.a.p/xnvme_be_nosys.c.o 00:15:31.993 [98/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_admin.c.o 00:15:32.249 [99/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk.c.o 00:15:32.249 [100/203] Compiling C object lib/libxnvme.a.p/xnvme_be.c.o 00:15:32.249 [101/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_admin.c.o 00:15:32.249 [102/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_dev.c.o 00:15:32.249 [103/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_block.c.o 00:15:32.249 [104/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_sync.c.o 00:15:32.249 [105/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_sync.c.o 00:15:32.249 [106/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_mem.c.o 00:15:32.249 [107/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_dev.c.o 00:15:32.249 [108/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_async.c.o 00:15:32.249 [109/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio.c.o 00:15:32.249 [110/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_admin.c.o 00:15:32.249 [111/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows.c.o 00:15:32.249 [112/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_dev.c.o 00:15:32.249 [113/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_mem.c.o 00:15:32.249 [114/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_sync.c.o 00:15:32.249 [115/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_iocp.c.o 00:15:32.249 [116/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_iocp_th.c.o 00:15:32.249 [117/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_ioring.c.o 00:15:32.249 [118/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_block.c.o 00:15:32.249 [119/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_mem.c.o 00:15:32.249 [120/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_dev.c.o 00:15:32.249 [121/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_fs.c.o 00:15:32.249 [122/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_nvme.c.o 00:15:32.249 [123/203] Compiling C object lib/libxnvme.a.p/xnvme_libconf_entries.c.o 00:15:32.249 [124/203] Compiling C object lib/libxnvme.a.p/xnvme_file.c.o 00:15:32.249 [125/203] Compiling C object lib/libxnvme.a.p/xnvme_cmd.c.o 00:15:32.249 [126/203] Compiling C object lib/libxnvme.a.p/xnvme_libconf.c.o 00:15:32.249 [127/203] Compiling C object lib/libxnvme.a.p/xnvme_dev.c.o 00:15:32.249 [128/203] Compiling C object lib/libxnvme.a.p/xnvme_geo.c.o 00:15:32.249 [129/203] Compiling C object 
lib/libxnvme.a.p/xnvme_ident.c.o 00:15:32.249 [130/203] Compiling C object lib/libxnvme.a.p/xnvme_req.c.o 00:15:32.249 [131/203] Compiling C object lib/libxnvme.a.p/xnvme_lba.c.o 00:15:32.249 [132/203] Compiling C object lib/libxnvme.a.p/xnvme_buf.c.o 00:15:32.506 [133/203] Compiling C object lib/libxnvme.a.p/xnvme_opts.c.o 00:15:32.506 [134/203] Compiling C object lib/libxnvme.a.p/xnvme_kvs.c.o 00:15:32.506 [135/203] Compiling C object lib/libxnvme.a.p/xnvme_nvm.c.o 00:15:32.506 [136/203] Compiling C object lib/libxnvme.a.p/xnvme_queue.c.o 00:15:32.506 [137/203] Compiling C object lib/libxnvme.a.p/xnvme_ver.c.o 00:15:32.506 [138/203] Compiling C object lib/libxnvme.a.p/xnvme_spec_pp.c.o 00:15:32.506 [139/203] Compiling C object lib/libxnvme.a.p/xnvme_topology.c.o 00:15:32.506 [140/203] Compiling C object tests/xnvme_tests_cli.p/cli.c.o 00:15:32.506 [141/203] Compiling C object tests/xnvme_tests_buf.p/buf.c.o 00:15:32.506 [142/203] Compiling C object tests/xnvme_tests_async_intf.p/async_intf.c.o 00:15:32.506 [143/203] Compiling C object tests/xnvme_tests_xnvme_file.p/xnvme_file.c.o 00:15:32.506 [144/203] Compiling C object lib/libxnvme.so.p/xnvme_spec.c.o 00:15:32.506 [145/203] Compiling C object tests/xnvme_tests_xnvme_cli.p/xnvme_cli.c.o 00:15:32.506 [146/203] Compiling C object tests/xnvme_tests_enum.p/enum.c.o 00:15:32.506 [147/203] Compiling C object tests/xnvme_tests_scc.p/scc.c.o 00:15:32.506 [148/203] Compiling C object tests/xnvme_tests_znd_append.p/znd_append.c.o 00:15:32.506 [149/203] Compiling C object tests/xnvme_tests_znd_explicit_open.p/znd_explicit_open.c.o 00:15:32.506 [150/203] Linking target lib/libxnvme.so 00:15:32.506 [151/203] Compiling C object tests/xnvme_tests_znd_state.p/znd_state.c.o 00:15:32.763 [152/203] Compiling C object tests/xnvme_tests_ioworker.p/ioworker.c.o 00:15:32.763 [153/203] Compiling C object tests/xnvme_tests_map.p/map.c.o 00:15:32.763 [154/203] Compiling C object tests/xnvme_tests_kvs.p/kvs.c.o 00:15:32.763 [155/203] Compiling C object tests/xnvme_tests_lblk.p/lblk.c.o 00:15:32.763 [156/203] Compiling C object examples/xnvme_dev.p/xnvme_dev.c.o 00:15:32.763 [157/203] Compiling C object tests/xnvme_tests_znd_zrwa.p/znd_zrwa.c.o 00:15:32.763 [158/203] Compiling C object tools/xdd.p/xdd.c.o 00:15:32.763 [159/203] Compiling C object lib/libxnvme.a.p/xnvme_znd.c.o 00:15:32.763 [160/203] Compiling C object lib/libxnvme.a.p/xnvme_cli.c.o 00:15:32.763 [161/203] Compiling C object tools/lblk.p/lblk.c.o 00:15:32.763 [162/203] Compiling C object examples/xnvme_enum.p/xnvme_enum.c.o 00:15:32.763 [163/203] Compiling C object examples/xnvme_hello.p/xnvme_hello.c.o 00:15:32.763 [164/203] Compiling C object examples/xnvme_single_async.p/xnvme_single_async.c.o 00:15:32.763 [165/203] Compiling C object examples/xnvme_single_sync.p/xnvme_single_sync.c.o 00:15:32.763 [166/203] Compiling C object tools/zoned.p/zoned.c.o 00:15:32.763 [167/203] Compiling C object tools/kvs.p/kvs.c.o 00:15:33.049 [168/203] Compiling C object tools/xnvme_file.p/xnvme_file.c.o 00:15:33.049 [169/203] Compiling C object examples/zoned_io_sync.p/zoned_io_sync.c.o 00:15:33.049 [170/203] Compiling C object examples/xnvme_io_async.p/xnvme_io_async.c.o 00:15:33.049 [171/203] Compiling C object examples/zoned_io_async.p/zoned_io_async.c.o 00:15:33.049 [172/203] Compiling C object tools/xnvme.p/xnvme.c.o 00:15:33.049 [173/203] Compiling C object lib/libxnvme.a.p/xnvme_spec.c.o 00:15:33.049 [174/203] Linking static target lib/libxnvme.a 00:15:33.049 [175/203] Linking target tests/xnvme_tests_cli 
00:15:33.049 [176/203] Linking target tests/xnvme_tests_async_intf 00:15:33.049 [177/203] Linking target tests/xnvme_tests_enum 00:15:33.049 [178/203] Linking target tests/xnvme_tests_buf 00:15:33.049 [179/203] Linking target tests/xnvme_tests_lblk 00:15:33.049 [180/203] Linking target tests/xnvme_tests_ioworker 00:15:33.049 [181/203] Linking target tests/xnvme_tests_scc 00:15:33.049 [182/203] Linking target tests/xnvme_tests_xnvme_cli 00:15:33.307 [183/203] Linking target tests/xnvme_tests_xnvme_file 00:15:33.307 [184/203] Linking target tests/xnvme_tests_znd_append 00:15:33.307 [185/203] Linking target tests/xnvme_tests_znd_state 00:15:33.307 [186/203] Linking target tests/xnvme_tests_znd_explicit_open 00:15:33.307 [187/203] Linking target tests/xnvme_tests_znd_zrwa 00:15:33.307 [188/203] Linking target tests/xnvme_tests_map 00:15:33.307 [189/203] Linking target tools/lblk 00:15:33.307 [190/203] Linking target tests/xnvme_tests_kvs 00:15:33.307 [191/203] Linking target tools/xnvme 00:15:33.307 [192/203] Linking target tools/xdd 00:15:33.307 [193/203] Linking target tools/xnvme_file 00:15:33.307 [194/203] Linking target tools/zoned 00:15:33.307 [195/203] Linking target examples/xnvme_hello 00:15:33.307 [196/203] Linking target examples/xnvme_dev 00:15:33.307 [197/203] Linking target tools/kvs 00:15:33.307 [198/203] Linking target examples/zoned_io_async 00:15:33.307 [199/203] Linking target examples/xnvme_single_sync 00:15:33.307 [200/203] Linking target examples/xnvme_io_async 00:15:33.307 [201/203] Linking target examples/xnvme_enum 00:15:33.307 [202/203] Linking target examples/xnvme_single_async 00:15:33.307 [203/203] Linking target examples/zoned_io_sync 00:15:33.307 INFO: autodetecting backend as ninja 00:15:33.307 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:15:33.307 /home/vagrant/spdk_repo/spdk/xnvmebuild 00:15:39.856 The Meson build system 00:15:39.856 Version: 1.5.0 00:15:39.856 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:15:39.856 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:15:39.856 Build type: native build 00:15:39.856 Program cat found: YES (/usr/bin/cat) 00:15:39.856 Project name: DPDK 00:15:39.856 Project version: 24.03.0 00:15:39.856 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:15:39.856 C linker for the host machine: cc ld.bfd 2.40-14 00:15:39.856 Host machine cpu family: x86_64 00:15:39.856 Host machine cpu: x86_64 00:15:39.856 Message: ## Building in Developer Mode ## 00:15:39.856 Program pkg-config found: YES (/usr/bin/pkg-config) 00:15:39.856 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:15:39.856 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:15:39.856 Program python3 found: YES (/usr/bin/python3) 00:15:39.856 Program cat found: YES (/usr/bin/cat) 00:15:39.856 Compiler for C supports arguments -march=native: YES 00:15:39.856 Checking for size of "void *" : 8 00:15:39.857 Checking for size of "void *" : 8 (cached) 00:15:39.857 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:15:39.857 Library m found: YES 00:15:39.857 Library numa found: YES 00:15:39.857 Has header "numaif.h" : YES 00:15:39.857 Library fdt found: NO 00:15:39.857 Library execinfo found: NO 00:15:39.857 Has header "execinfo.h" : YES 00:15:39.857 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:15:39.857 
Run-time dependency libarchive found: NO (tried pkgconfig) 00:15:39.857 Run-time dependency libbsd found: NO (tried pkgconfig) 00:15:39.857 Run-time dependency jansson found: NO (tried pkgconfig) 00:15:39.857 Run-time dependency openssl found: YES 3.1.1 00:15:39.857 Run-time dependency libpcap found: YES 1.10.4 00:15:39.857 Has header "pcap.h" with dependency libpcap: YES 00:15:39.857 Compiler for C supports arguments -Wcast-qual: YES 00:15:39.857 Compiler for C supports arguments -Wdeprecated: YES 00:15:39.857 Compiler for C supports arguments -Wformat: YES 00:15:39.857 Compiler for C supports arguments -Wformat-nonliteral: NO 00:15:39.857 Compiler for C supports arguments -Wformat-security: NO 00:15:39.857 Compiler for C supports arguments -Wmissing-declarations: YES 00:15:39.857 Compiler for C supports arguments -Wmissing-prototypes: YES 00:15:39.857 Compiler for C supports arguments -Wnested-externs: YES 00:15:39.857 Compiler for C supports arguments -Wold-style-definition: YES 00:15:39.857 Compiler for C supports arguments -Wpointer-arith: YES 00:15:39.857 Compiler for C supports arguments -Wsign-compare: YES 00:15:39.857 Compiler for C supports arguments -Wstrict-prototypes: YES 00:15:39.857 Compiler for C supports arguments -Wundef: YES 00:15:39.857 Compiler for C supports arguments -Wwrite-strings: YES 00:15:39.857 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:15:39.857 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:15:39.857 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:15:39.857 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:15:39.857 Program objdump found: YES (/usr/bin/objdump) 00:15:39.857 Compiler for C supports arguments -mavx512f: YES 00:15:39.857 Checking if "AVX512 checking" compiles: YES 00:15:39.857 Fetching value of define "__SSE4_2__" : 1 00:15:39.857 Fetching value of define "__AES__" : 1 00:15:39.857 Fetching value of define "__AVX__" : 1 00:15:39.857 Fetching value of define "__AVX2__" : 1 00:15:39.857 Fetching value of define "__AVX512BW__" : 1 00:15:39.857 Fetching value of define "__AVX512CD__" : 1 00:15:39.857 Fetching value of define "__AVX512DQ__" : 1 00:15:39.857 Fetching value of define "__AVX512F__" : 1 00:15:39.857 Fetching value of define "__AVX512VL__" : 1 00:15:39.857 Fetching value of define "__PCLMUL__" : 1 00:15:39.857 Fetching value of define "__RDRND__" : 1 00:15:39.857 Fetching value of define "__RDSEED__" : 1 00:15:39.857 Fetching value of define "__VPCLMULQDQ__" : 1 00:15:39.857 Fetching value of define "__znver1__" : (undefined) 00:15:39.857 Fetching value of define "__znver2__" : (undefined) 00:15:39.857 Fetching value of define "__znver3__" : (undefined) 00:15:39.857 Fetching value of define "__znver4__" : (undefined) 00:15:39.857 Library asan found: YES 00:15:39.857 Compiler for C supports arguments -Wno-format-truncation: YES 00:15:39.857 Message: lib/log: Defining dependency "log" 00:15:39.857 Message: lib/kvargs: Defining dependency "kvargs" 00:15:39.857 Message: lib/telemetry: Defining dependency "telemetry" 00:15:39.857 Library rt found: YES 00:15:39.857 Checking for function "getentropy" : NO 00:15:39.857 Message: lib/eal: Defining dependency "eal" 00:15:39.857 Message: lib/ring: Defining dependency "ring" 00:15:39.857 Message: lib/rcu: Defining dependency "rcu" 00:15:39.857 Message: lib/mempool: Defining dependency "mempool" 00:15:39.857 Message: lib/mbuf: Defining dependency "mbuf" 00:15:39.857 Fetching value of define "__PCLMUL__" : 1 
(cached) 00:15:39.857 Fetching value of define "__AVX512F__" : 1 (cached) 00:15:39.857 Fetching value of define "__AVX512BW__" : 1 (cached) 00:15:39.857 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:15:39.857 Fetching value of define "__AVX512VL__" : 1 (cached) 00:15:39.857 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:15:39.857 Compiler for C supports arguments -mpclmul: YES 00:15:39.857 Compiler for C supports arguments -maes: YES 00:15:39.857 Compiler for C supports arguments -mavx512f: YES (cached) 00:15:39.857 Compiler for C supports arguments -mavx512bw: YES 00:15:39.857 Compiler for C supports arguments -mavx512dq: YES 00:15:39.857 Compiler for C supports arguments -mavx512vl: YES 00:15:39.857 Compiler for C supports arguments -mvpclmulqdq: YES 00:15:39.857 Compiler for C supports arguments -mavx2: YES 00:15:39.857 Compiler for C supports arguments -mavx: YES 00:15:39.857 Message: lib/net: Defining dependency "net" 00:15:39.857 Message: lib/meter: Defining dependency "meter" 00:15:39.857 Message: lib/ethdev: Defining dependency "ethdev" 00:15:39.857 Message: lib/pci: Defining dependency "pci" 00:15:39.857 Message: lib/cmdline: Defining dependency "cmdline" 00:15:39.857 Message: lib/hash: Defining dependency "hash" 00:15:39.857 Message: lib/timer: Defining dependency "timer" 00:15:39.857 Message: lib/compressdev: Defining dependency "compressdev" 00:15:39.857 Message: lib/cryptodev: Defining dependency "cryptodev" 00:15:39.857 Message: lib/dmadev: Defining dependency "dmadev" 00:15:39.857 Compiler for C supports arguments -Wno-cast-qual: YES 00:15:39.857 Message: lib/power: Defining dependency "power" 00:15:39.857 Message: lib/reorder: Defining dependency "reorder" 00:15:39.857 Message: lib/security: Defining dependency "security" 00:15:39.857 Has header "linux/userfaultfd.h" : YES 00:15:39.857 Has header "linux/vduse.h" : YES 00:15:39.857 Message: lib/vhost: Defining dependency "vhost" 00:15:39.857 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:15:39.857 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:15:39.857 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:15:39.857 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:15:39.857 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:15:39.857 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:15:39.857 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:15:39.857 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:15:39.857 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:15:39.857 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:15:39.857 Program doxygen found: YES (/usr/local/bin/doxygen) 00:15:39.857 Configuring doxy-api-html.conf using configuration 00:15:39.857 Configuring doxy-api-man.conf using configuration 00:15:39.857 Program mandb found: YES (/usr/bin/mandb) 00:15:39.857 Program sphinx-build found: NO 00:15:39.857 Configuring rte_build_config.h using configuration 00:15:39.857 Message: 00:15:39.857 ================= 00:15:39.857 Applications Enabled 00:15:39.857 ================= 00:15:39.857 00:15:39.857 apps: 00:15:39.857 00:15:39.857 00:15:39.857 Message: 00:15:39.857 ================= 00:15:39.857 Libraries Enabled 00:15:39.857 ================= 00:15:39.857 00:15:39.857 libs: 00:15:39.857 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:15:39.857 net, meter, ethdev, pci, 
cmdline, hash, timer, compressdev, 00:15:39.857 cryptodev, dmadev, power, reorder, security, vhost, 00:15:39.857 00:15:39.857 Message: 00:15:39.857 =============== 00:15:39.857 Drivers Enabled 00:15:39.857 =============== 00:15:39.857 00:15:39.857 common: 00:15:39.857 00:15:39.857 bus: 00:15:39.857 pci, vdev, 00:15:39.857 mempool: 00:15:39.857 ring, 00:15:39.857 dma: 00:15:39.857 00:15:39.857 net: 00:15:39.857 00:15:39.857 crypto: 00:15:39.857 00:15:39.857 compress: 00:15:39.857 00:15:39.857 vdpa: 00:15:39.857 00:15:39.857 00:15:39.857 Message: 00:15:39.857 ================= 00:15:39.857 Content Skipped 00:15:39.857 ================= 00:15:39.857 00:15:39.857 apps: 00:15:39.857 dumpcap: explicitly disabled via build config 00:15:39.857 graph: explicitly disabled via build config 00:15:39.857 pdump: explicitly disabled via build config 00:15:39.857 proc-info: explicitly disabled via build config 00:15:39.857 test-acl: explicitly disabled via build config 00:15:39.857 test-bbdev: explicitly disabled via build config 00:15:39.857 test-cmdline: explicitly disabled via build config 00:15:39.857 test-compress-perf: explicitly disabled via build config 00:15:39.857 test-crypto-perf: explicitly disabled via build config 00:15:39.857 test-dma-perf: explicitly disabled via build config 00:15:39.857 test-eventdev: explicitly disabled via build config 00:15:39.857 test-fib: explicitly disabled via build config 00:15:39.857 test-flow-perf: explicitly disabled via build config 00:15:39.857 test-gpudev: explicitly disabled via build config 00:15:39.857 test-mldev: explicitly disabled via build config 00:15:39.857 test-pipeline: explicitly disabled via build config 00:15:39.857 test-pmd: explicitly disabled via build config 00:15:39.857 test-regex: explicitly disabled via build config 00:15:39.857 test-sad: explicitly disabled via build config 00:15:39.857 test-security-perf: explicitly disabled via build config 00:15:39.857 00:15:39.857 libs: 00:15:39.857 argparse: explicitly disabled via build config 00:15:39.857 metrics: explicitly disabled via build config 00:15:39.857 acl: explicitly disabled via build config 00:15:39.857 bbdev: explicitly disabled via build config 00:15:39.857 bitratestats: explicitly disabled via build config 00:15:39.857 bpf: explicitly disabled via build config 00:15:39.857 cfgfile: explicitly disabled via build config 00:15:39.857 distributor: explicitly disabled via build config 00:15:39.857 efd: explicitly disabled via build config 00:15:39.857 eventdev: explicitly disabled via build config 00:15:39.857 dispatcher: explicitly disabled via build config 00:15:39.857 gpudev: explicitly disabled via build config 00:15:39.857 gro: explicitly disabled via build config 00:15:39.857 gso: explicitly disabled via build config 00:15:39.857 ip_frag: explicitly disabled via build config 00:15:39.857 jobstats: explicitly disabled via build config 00:15:39.857 latencystats: explicitly disabled via build config 00:15:39.857 lpm: explicitly disabled via build config 00:15:39.857 member: explicitly disabled via build config 00:15:39.858 pcapng: explicitly disabled via build config 00:15:39.858 rawdev: explicitly disabled via build config 00:15:39.858 regexdev: explicitly disabled via build config 00:15:39.858 mldev: explicitly disabled via build config 00:15:39.858 rib: explicitly disabled via build config 00:15:39.858 sched: explicitly disabled via build config 00:15:39.858 stack: explicitly disabled via build config 00:15:39.858 ipsec: explicitly disabled via build config 00:15:39.858 pdcp: 
explicitly disabled via build config 00:15:39.858 fib: explicitly disabled via build config 00:15:39.858 port: explicitly disabled via build config 00:15:39.858 pdump: explicitly disabled via build config 00:15:39.858 table: explicitly disabled via build config 00:15:39.858 pipeline: explicitly disabled via build config 00:15:39.858 graph: explicitly disabled via build config 00:15:39.858 node: explicitly disabled via build config 00:15:39.858 00:15:39.858 drivers: 00:15:39.858 common/cpt: not in enabled drivers build config 00:15:39.858 common/dpaax: not in enabled drivers build config 00:15:39.858 common/iavf: not in enabled drivers build config 00:15:39.858 common/idpf: not in enabled drivers build config 00:15:39.858 common/ionic: not in enabled drivers build config 00:15:39.858 common/mvep: not in enabled drivers build config 00:15:39.858 common/octeontx: not in enabled drivers build config 00:15:39.858 bus/auxiliary: not in enabled drivers build config 00:15:39.858 bus/cdx: not in enabled drivers build config 00:15:39.858 bus/dpaa: not in enabled drivers build config 00:15:39.858 bus/fslmc: not in enabled drivers build config 00:15:39.858 bus/ifpga: not in enabled drivers build config 00:15:39.858 bus/platform: not in enabled drivers build config 00:15:39.858 bus/uacce: not in enabled drivers build config 00:15:39.858 bus/vmbus: not in enabled drivers build config 00:15:39.858 common/cnxk: not in enabled drivers build config 00:15:39.858 common/mlx5: not in enabled drivers build config 00:15:39.858 common/nfp: not in enabled drivers build config 00:15:39.858 common/nitrox: not in enabled drivers build config 00:15:39.858 common/qat: not in enabled drivers build config 00:15:39.858 common/sfc_efx: not in enabled drivers build config 00:15:39.858 mempool/bucket: not in enabled drivers build config 00:15:39.858 mempool/cnxk: not in enabled drivers build config 00:15:39.858 mempool/dpaa: not in enabled drivers build config 00:15:39.858 mempool/dpaa2: not in enabled drivers build config 00:15:39.858 mempool/octeontx: not in enabled drivers build config 00:15:39.858 mempool/stack: not in enabled drivers build config 00:15:39.858 dma/cnxk: not in enabled drivers build config 00:15:39.858 dma/dpaa: not in enabled drivers build config 00:15:39.858 dma/dpaa2: not in enabled drivers build config 00:15:39.858 dma/hisilicon: not in enabled drivers build config 00:15:39.858 dma/idxd: not in enabled drivers build config 00:15:39.858 dma/ioat: not in enabled drivers build config 00:15:39.858 dma/skeleton: not in enabled drivers build config 00:15:39.858 net/af_packet: not in enabled drivers build config 00:15:39.858 net/af_xdp: not in enabled drivers build config 00:15:39.858 net/ark: not in enabled drivers build config 00:15:39.858 net/atlantic: not in enabled drivers build config 00:15:39.858 net/avp: not in enabled drivers build config 00:15:39.858 net/axgbe: not in enabled drivers build config 00:15:39.858 net/bnx2x: not in enabled drivers build config 00:15:39.858 net/bnxt: not in enabled drivers build config 00:15:39.858 net/bonding: not in enabled drivers build config 00:15:39.858 net/cnxk: not in enabled drivers build config 00:15:39.858 net/cpfl: not in enabled drivers build config 00:15:39.858 net/cxgbe: not in enabled drivers build config 00:15:39.858 net/dpaa: not in enabled drivers build config 00:15:39.858 net/dpaa2: not in enabled drivers build config 00:15:39.858 net/e1000: not in enabled drivers build config 00:15:39.858 net/ena: not in enabled drivers build config 00:15:39.858 
net/enetc: not in enabled drivers build config 00:15:39.858 net/enetfec: not in enabled drivers build config 00:15:39.858 net/enic: not in enabled drivers build config 00:15:39.858 net/failsafe: not in enabled drivers build config 00:15:39.858 net/fm10k: not in enabled drivers build config 00:15:39.858 net/gve: not in enabled drivers build config 00:15:39.858 net/hinic: not in enabled drivers build config 00:15:39.858 net/hns3: not in enabled drivers build config 00:15:39.858 net/i40e: not in enabled drivers build config 00:15:39.858 net/iavf: not in enabled drivers build config 00:15:39.858 net/ice: not in enabled drivers build config 00:15:39.858 net/idpf: not in enabled drivers build config 00:15:39.858 net/igc: not in enabled drivers build config 00:15:39.858 net/ionic: not in enabled drivers build config 00:15:39.858 net/ipn3ke: not in enabled drivers build config 00:15:39.858 net/ixgbe: not in enabled drivers build config 00:15:39.858 net/mana: not in enabled drivers build config 00:15:39.858 net/memif: not in enabled drivers build config 00:15:39.858 net/mlx4: not in enabled drivers build config 00:15:39.858 net/mlx5: not in enabled drivers build config 00:15:39.858 net/mvneta: not in enabled drivers build config 00:15:39.858 net/mvpp2: not in enabled drivers build config 00:15:39.858 net/netvsc: not in enabled drivers build config 00:15:39.858 net/nfb: not in enabled drivers build config 00:15:39.858 net/nfp: not in enabled drivers build config 00:15:39.858 net/ngbe: not in enabled drivers build config 00:15:39.858 net/null: not in enabled drivers build config 00:15:39.858 net/octeontx: not in enabled drivers build config 00:15:39.858 net/octeon_ep: not in enabled drivers build config 00:15:39.858 net/pcap: not in enabled drivers build config 00:15:39.858 net/pfe: not in enabled drivers build config 00:15:39.858 net/qede: not in enabled drivers build config 00:15:39.858 net/ring: not in enabled drivers build config 00:15:39.858 net/sfc: not in enabled drivers build config 00:15:39.858 net/softnic: not in enabled drivers build config 00:15:39.858 net/tap: not in enabled drivers build config 00:15:39.858 net/thunderx: not in enabled drivers build config 00:15:39.858 net/txgbe: not in enabled drivers build config 00:15:39.858 net/vdev_netvsc: not in enabled drivers build config 00:15:39.858 net/vhost: not in enabled drivers build config 00:15:39.858 net/virtio: not in enabled drivers build config 00:15:39.858 net/vmxnet3: not in enabled drivers build config 00:15:39.858 raw/*: missing internal dependency, "rawdev" 00:15:39.858 crypto/armv8: not in enabled drivers build config 00:15:39.858 crypto/bcmfs: not in enabled drivers build config 00:15:39.858 crypto/caam_jr: not in enabled drivers build config 00:15:39.858 crypto/ccp: not in enabled drivers build config 00:15:39.858 crypto/cnxk: not in enabled drivers build config 00:15:39.858 crypto/dpaa_sec: not in enabled drivers build config 00:15:39.858 crypto/dpaa2_sec: not in enabled drivers build config 00:15:39.858 crypto/ipsec_mb: not in enabled drivers build config 00:15:39.858 crypto/mlx5: not in enabled drivers build config 00:15:39.858 crypto/mvsam: not in enabled drivers build config 00:15:39.858 crypto/nitrox: not in enabled drivers build config 00:15:39.858 crypto/null: not in enabled drivers build config 00:15:39.858 crypto/octeontx: not in enabled drivers build config 00:15:39.858 crypto/openssl: not in enabled drivers build config 00:15:39.858 crypto/scheduler: not in enabled drivers build config 00:15:39.858 crypto/uadk: 
not in enabled drivers build config 00:15:39.858 crypto/virtio: not in enabled drivers build config 00:15:39.858 compress/isal: not in enabled drivers build config 00:15:39.858 compress/mlx5: not in enabled drivers build config 00:15:39.858 compress/nitrox: not in enabled drivers build config 00:15:39.858 compress/octeontx: not in enabled drivers build config 00:15:39.858 compress/zlib: not in enabled drivers build config 00:15:39.858 regex/*: missing internal dependency, "regexdev" 00:15:39.858 ml/*: missing internal dependency, "mldev" 00:15:39.858 vdpa/ifc: not in enabled drivers build config 00:15:39.858 vdpa/mlx5: not in enabled drivers build config 00:15:39.858 vdpa/nfp: not in enabled drivers build config 00:15:39.858 vdpa/sfc: not in enabled drivers build config 00:15:39.858 event/*: missing internal dependency, "eventdev" 00:15:39.858 baseband/*: missing internal dependency, "bbdev" 00:15:39.858 gpu/*: missing internal dependency, "gpudev" 00:15:39.858 00:15:39.858 00:15:39.858 Build targets in project: 84 00:15:39.858 00:15:39.858 DPDK 24.03.0 00:15:39.858 00:15:39.858 User defined options 00:15:39.858 buildtype : debug 00:15:39.858 default_library : shared 00:15:39.858 libdir : lib 00:15:39.858 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:15:39.858 b_sanitize : address 00:15:39.858 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:15:39.858 c_link_args : 00:15:39.858 cpu_instruction_set: native 00:15:39.858 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:15:39.858 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:15:39.858 enable_docs : false 00:15:39.858 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:15:39.858 enable_kmods : false 00:15:39.858 max_lcores : 128 00:15:39.858 tests : false 00:15:39.858 00:15:39.858 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:15:39.858 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:15:39.858 [1/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:15:39.858 [2/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:15:39.858 [3/267] Linking static target lib/librte_kvargs.a 00:15:39.858 [4/267] Compiling C object lib/librte_log.a.p/log_log.c.o 00:15:39.858 [5/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:15:39.858 [6/267] Linking static target lib/librte_log.a 00:15:40.180 [7/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:15:40.180 [8/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:15:40.180 [9/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:15:40.180 [10/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:15:40.180 [11/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:15:40.180 [12/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:15:40.180 [13/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:15:40.436 [14/267] Generating lib/kvargs.sym_chk with a custom 
command (wrapped by meson to capture output) 00:15:40.436 [15/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:15:40.436 [16/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:15:40.436 [17/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:15:40.436 [18/267] Linking static target lib/librte_telemetry.a 00:15:40.693 [19/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:15:40.693 [20/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:15:40.693 [21/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:15:40.693 [22/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:15:40.693 [23/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:15:40.693 [24/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:15:40.693 [25/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:15:40.693 [26/267] Linking target lib/librte_log.so.24.1 00:15:40.693 [27/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:15:40.693 [28/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:15:40.693 [29/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:15:40.950 [30/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:15:40.950 [31/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:15:40.950 [32/267] Linking target lib/librte_kvargs.so.24.1 00:15:40.950 [33/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:15:40.950 [34/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:15:40.950 [35/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:15:41.206 [36/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:15:41.206 [37/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:15:41.206 [38/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:15:41.206 [39/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:15:41.206 [40/267] Linking target lib/librte_telemetry.so.24.1 00:15:41.206 [41/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:15:41.206 [42/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:15:41.206 [43/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:15:41.206 [44/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:15:41.464 [45/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:15:41.464 [46/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:15:41.464 [47/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:15:41.464 [48/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:15:41.464 [49/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:15:41.721 [50/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:15:41.721 [51/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:15:41.721 [52/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:15:41.721 [53/267] Compiling C object 
lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:15:41.721 [54/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:15:41.721 [55/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:15:41.721 [56/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:15:41.721 [57/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:15:41.979 [58/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:15:41.979 [59/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:15:41.979 [60/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:15:41.979 [61/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:15:41.979 [62/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:15:41.979 [63/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:15:41.979 [64/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:15:42.236 [65/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:15:42.236 [66/267] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:15:42.236 [67/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:15:42.236 [68/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:15:42.236 [69/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:15:42.493 [70/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:15:42.493 [71/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:15:42.493 [72/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:15:42.493 [73/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:15:42.493 [74/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:15:42.493 [75/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:15:42.493 [76/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:15:42.493 [77/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:15:42.493 [78/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:15:42.750 [79/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:15:42.750 [80/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:15:42.750 [81/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:15:42.750 [82/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:15:42.750 [83/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:15:43.008 [84/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:15:43.008 [85/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:15:43.008 [86/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:15:43.008 [87/267] Linking static target lib/librte_ring.a 00:15:43.008 [88/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:15:43.008 [89/267] Linking static target lib/librte_eal.a 00:15:43.008 [90/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:15:43.008 [91/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:15:43.265 [92/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:15:43.265 [93/267] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:15:43.265 [94/267] Linking static target lib/librte_mempool.a 
00:15:43.265 [95/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:15:43.265 [96/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:15:43.265 [97/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:15:43.521 [98/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:15:43.521 [99/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:15:43.521 [100/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:15:43.521 [101/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:15:43.521 [102/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:15:43.521 [103/267] Linking static target lib/librte_rcu.a 00:15:43.521 [104/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:15:43.521 [105/267] Linking static target lib/librte_meter.a 00:15:43.521 [106/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:15:43.521 [107/267] Linking static target lib/librte_net.a 00:15:43.850 [108/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:15:43.850 [109/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:15:43.850 [110/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:15:43.850 [111/267] Linking static target lib/librte_mbuf.a 00:15:43.850 [112/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:15:43.850 [113/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:15:43.850 [114/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:15:43.850 [115/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:15:44.106 [116/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:15:44.106 [117/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:15:44.106 [118/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:15:44.363 [119/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:15:44.363 [120/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:15:44.619 [121/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:15:44.619 [122/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:15:44.619 [123/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:15:44.619 [124/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:15:44.619 [125/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:15:44.619 [126/267] Linking static target lib/librte_pci.a 00:15:44.619 [127/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:15:44.619 [128/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:15:44.619 [129/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:15:44.876 [130/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:15:44.877 [131/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:15:44.877 [132/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:15:44.877 [133/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:15:44.877 [134/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:15:44.877 [135/267] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:15:44.877 [136/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:15:44.877 [137/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:15:44.877 [138/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:15:44.877 [139/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:15:45.133 [140/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:15:45.133 [141/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:15:45.133 [142/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:15:45.133 [143/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:15:45.134 [144/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:15:45.134 [145/267] Linking static target lib/librte_cmdline.a 00:15:45.134 [146/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:15:45.391 [147/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:15:45.391 [148/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:15:45.391 [149/267] Linking static target lib/librte_timer.a 00:15:45.391 [150/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:15:45.391 [151/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:15:45.648 [152/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:15:45.648 [153/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:15:45.648 [154/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:15:45.906 [155/267] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:15:45.906 [156/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:15:45.906 [157/267] Linking static target lib/librte_hash.a 00:15:45.906 [158/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:15:45.906 [159/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:15:45.906 [160/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:15:45.906 [161/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:15:45.906 [162/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:15:45.906 [163/267] Linking static target lib/librte_ethdev.a 00:15:46.165 [164/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:15:46.165 [165/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:15:46.165 [166/267] Linking static target lib/librte_compressdev.a 00:15:46.165 [167/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:15:46.165 [168/267] Linking static target lib/librte_dmadev.a 00:15:46.423 [169/267] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:15:46.423 [170/267] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:15:46.423 [171/267] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:15:46.423 [172/267] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:15:46.423 [173/267] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:15:46.682 [174/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:15:46.682 [175/267] Compiling C 
object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:15:46.682 [176/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:15:46.682 [177/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:15:46.682 [178/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:15:46.682 [179/267] Linking static target lib/librte_cryptodev.a 00:15:46.682 [180/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:15:46.940 [181/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:15:46.940 [182/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:15:46.940 [183/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:15:46.940 [184/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:15:46.940 [185/267] Linking static target lib/librte_power.a 00:15:47.198 [186/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:15:47.198 [187/267] Linking static target lib/librte_reorder.a 00:15:47.198 [188/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:15:47.198 [189/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:15:47.198 [190/267] Linking static target lib/librte_security.a 00:15:47.198 [191/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:15:47.456 [192/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:15:47.456 [193/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:15:47.721 [194/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:15:47.721 [195/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:15:47.721 [196/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:15:48.032 [197/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:15:48.032 [198/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:15:48.032 [199/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:15:48.032 [200/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:15:48.289 [201/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:15:48.289 [202/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:15:48.289 [203/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:15:48.289 [204/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:15:48.289 [205/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:15:48.547 [206/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:15:48.547 [207/267] Linking static target drivers/libtmp_rte_bus_pci.a 00:15:48.547 [208/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:15:48.547 [209/267] Linking static target drivers/libtmp_rte_bus_vdev.a 00:15:48.547 [210/267] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:15:48.806 [211/267] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:15:48.806 [212/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:15:48.806 [213/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 
00:15:48.806 [214/267] Linking static target drivers/librte_bus_pci.a 00:15:48.806 [215/267] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:15:48.806 [216/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:15:48.806 [217/267] Linking static target drivers/libtmp_rte_mempool_ring.a 00:15:48.806 [218/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:15:48.806 [219/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:15:48.806 [220/267] Linking static target drivers/librte_bus_vdev.a 00:15:48.806 [221/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:15:49.064 [222/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:15:49.064 [223/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:15:49.064 [224/267] Linking static target drivers/librte_mempool_ring.a 00:15:49.064 [225/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:15:49.064 [226/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:15:49.321 [227/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:15:50.692 [228/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:15:50.692 [229/267] Linking target lib/librte_eal.so.24.1 00:15:50.692 [230/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:15:50.692 [231/267] Linking target lib/librte_meter.so.24.1 00:15:50.692 [232/267] Linking target lib/librte_timer.so.24.1 00:15:50.692 [233/267] Linking target lib/librte_ring.so.24.1 00:15:50.692 [234/267] Linking target drivers/librte_bus_vdev.so.24.1 00:15:50.692 [235/267] Linking target lib/librte_pci.so.24.1 00:15:50.692 [236/267] Linking target lib/librte_dmadev.so.24.1 00:15:50.692 [237/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:15:50.692 [238/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:15:50.950 [239/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:15:50.950 [240/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:15:50.950 [241/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:15:50.950 [242/267] Linking target drivers/librte_bus_pci.so.24.1 00:15:50.950 [243/267] Linking target lib/librte_mempool.so.24.1 00:15:50.950 [244/267] Linking target lib/librte_rcu.so.24.1 00:15:50.950 [245/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:15:50.950 [246/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:15:50.950 [247/267] Linking target drivers/librte_mempool_ring.so.24.1 00:15:50.950 [248/267] Linking target lib/librte_mbuf.so.24.1 00:15:51.208 [249/267] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:15:51.208 [250/267] Linking target lib/librte_reorder.so.24.1 00:15:51.208 [251/267] Linking target lib/librte_cryptodev.so.24.1 00:15:51.208 [252/267] Linking target lib/librte_compressdev.so.24.1 00:15:51.208 [253/267] Linking target lib/librte_net.so.24.1 00:15:51.208 [254/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:15:51.208 
[255/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:15:51.208 [256/267] Linking target lib/librte_hash.so.24.1 00:15:51.208 [257/267] Linking target lib/librte_security.so.24.1 00:15:51.208 [258/267] Linking target lib/librte_cmdline.so.24.1 00:15:51.468 [259/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:15:51.468 [260/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:15:51.468 [261/267] Linking target lib/librte_ethdev.so.24.1 00:15:51.725 [262/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:15:51.725 [263/267] Linking target lib/librte_power.so.24.1 00:15:52.291 [264/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:15:52.291 [265/267] Linking static target lib/librte_vhost.a 00:15:53.223 [266/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:15:53.481 [267/267] Linking target lib/librte_vhost.so.24.1 00:15:53.481 INFO: autodetecting backend as ninja 00:15:53.481 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:16:08.391 CC lib/log/log_flags.o 00:16:08.391 CC lib/ut/ut.o 00:16:08.391 CC lib/log/log.o 00:16:08.391 CC lib/log/log_deprecated.o 00:16:08.391 CC lib/ut_mock/mock.o 00:16:08.391 LIB libspdk_ut_mock.a 00:16:08.391 LIB libspdk_ut.a 00:16:08.391 LIB libspdk_log.a 00:16:08.391 SO libspdk_ut.so.2.0 00:16:08.391 SO libspdk_ut_mock.so.6.0 00:16:08.391 SO libspdk_log.so.7.0 00:16:08.391 SYMLINK libspdk_ut_mock.so 00:16:08.391 SYMLINK libspdk_ut.so 00:16:08.391 SYMLINK libspdk_log.so 00:16:08.391 CC lib/dma/dma.o 00:16:08.391 CC lib/ioat/ioat.o 00:16:08.391 CC lib/util/base64.o 00:16:08.391 CC lib/util/bit_array.o 00:16:08.391 CC lib/util/cpuset.o 00:16:08.391 CC lib/util/crc32.o 00:16:08.391 CC lib/util/crc16.o 00:16:08.391 CC lib/util/crc32c.o 00:16:08.391 CXX lib/trace_parser/trace.o 00:16:08.391 CC lib/vfio_user/host/vfio_user_pci.o 00:16:08.391 CC lib/util/crc32_ieee.o 00:16:08.391 CC lib/util/crc64.o 00:16:08.391 CC lib/util/dif.o 00:16:08.391 CC lib/util/fd.o 00:16:08.391 LIB libspdk_dma.a 00:16:08.391 CC lib/util/fd_group.o 00:16:08.391 SO libspdk_dma.so.5.0 00:16:08.391 CC lib/util/file.o 00:16:08.391 CC lib/util/hexlify.o 00:16:08.391 CC lib/vfio_user/host/vfio_user.o 00:16:08.391 SYMLINK libspdk_dma.so 00:16:08.391 CC lib/util/iov.o 00:16:08.391 CC lib/util/math.o 00:16:08.391 LIB libspdk_ioat.a 00:16:08.391 SO libspdk_ioat.so.7.0 00:16:08.391 CC lib/util/net.o 00:16:08.391 CC lib/util/pipe.o 00:16:08.391 SYMLINK libspdk_ioat.so 00:16:08.391 CC lib/util/strerror_tls.o 00:16:08.391 CC lib/util/string.o 00:16:08.391 CC lib/util/uuid.o 00:16:08.391 CC lib/util/xor.o 00:16:08.391 LIB libspdk_vfio_user.a 00:16:08.391 CC lib/util/zipf.o 00:16:08.391 SO libspdk_vfio_user.so.5.0 00:16:08.391 CC lib/util/md5.o 00:16:08.391 SYMLINK libspdk_vfio_user.so 00:16:08.391 LIB libspdk_util.a 00:16:08.391 SO libspdk_util.so.10.0 00:16:08.391 LIB libspdk_trace_parser.a 00:16:08.391 SYMLINK libspdk_util.so 00:16:08.391 SO libspdk_trace_parser.so.6.0 00:16:08.391 SYMLINK libspdk_trace_parser.so 00:16:08.391 CC lib/env_dpdk/env.o 00:16:08.391 CC lib/rdma_provider/common.o 00:16:08.391 CC lib/rdma_provider/rdma_provider_verbs.o 00:16:08.391 CC lib/env_dpdk/memory.o 00:16:08.391 CC lib/env_dpdk/pci.o 00:16:08.391 CC lib/conf/conf.o 00:16:08.391 CC lib/json/json_parse.o 00:16:08.391 CC lib/vmd/vmd.o 
00:16:08.391 CC lib/idxd/idxd.o 00:16:08.391 CC lib/rdma_utils/rdma_utils.o 00:16:08.391 CC lib/env_dpdk/init.o 00:16:08.391 LIB libspdk_rdma_provider.a 00:16:08.391 SO libspdk_rdma_provider.so.6.0 00:16:08.391 LIB libspdk_conf.a 00:16:08.391 CC lib/json/json_util.o 00:16:08.391 SO libspdk_conf.so.6.0 00:16:08.391 SYMLINK libspdk_rdma_provider.so 00:16:08.391 CC lib/idxd/idxd_user.o 00:16:08.391 LIB libspdk_rdma_utils.a 00:16:08.391 SYMLINK libspdk_conf.so 00:16:08.391 SO libspdk_rdma_utils.so.1.0 00:16:08.391 CC lib/idxd/idxd_kernel.o 00:16:08.391 SYMLINK libspdk_rdma_utils.so 00:16:08.391 CC lib/json/json_write.o 00:16:08.391 CC lib/env_dpdk/threads.o 00:16:08.391 CC lib/env_dpdk/pci_ioat.o 00:16:08.391 CC lib/env_dpdk/pci_virtio.o 00:16:08.391 CC lib/env_dpdk/pci_vmd.o 00:16:08.391 CC lib/env_dpdk/pci_idxd.o 00:16:08.391 CC lib/env_dpdk/pci_event.o 00:16:08.391 CC lib/env_dpdk/sigbus_handler.o 00:16:08.391 CC lib/env_dpdk/pci_dpdk.o 00:16:08.391 CC lib/env_dpdk/pci_dpdk_2207.o 00:16:08.391 CC lib/vmd/led.o 00:16:08.391 LIB libspdk_json.a 00:16:08.391 CC lib/env_dpdk/pci_dpdk_2211.o 00:16:08.391 LIB libspdk_idxd.a 00:16:08.391 SO libspdk_json.so.6.0 00:16:08.391 SO libspdk_idxd.so.12.1 00:16:08.391 SYMLINK libspdk_json.so 00:16:08.391 LIB libspdk_vmd.a 00:16:08.391 SYMLINK libspdk_idxd.so 00:16:08.391 SO libspdk_vmd.so.6.0 00:16:08.391 SYMLINK libspdk_vmd.so 00:16:08.391 CC lib/jsonrpc/jsonrpc_server.o 00:16:08.391 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:16:08.391 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:16:08.391 CC lib/jsonrpc/jsonrpc_client.o 00:16:08.391 LIB libspdk_jsonrpc.a 00:16:08.391 SO libspdk_jsonrpc.so.6.0 00:16:08.391 SYMLINK libspdk_jsonrpc.so 00:16:08.648 LIB libspdk_env_dpdk.a 00:16:08.648 SO libspdk_env_dpdk.so.15.0 00:16:08.648 CC lib/rpc/rpc.o 00:16:08.648 SYMLINK libspdk_env_dpdk.so 00:16:08.904 LIB libspdk_rpc.a 00:16:08.904 SO libspdk_rpc.so.6.0 00:16:08.904 SYMLINK libspdk_rpc.so 00:16:09.162 CC lib/keyring/keyring.o 00:16:09.162 CC lib/trace/trace_flags.o 00:16:09.162 CC lib/keyring/keyring_rpc.o 00:16:09.162 CC lib/trace/trace_rpc.o 00:16:09.162 CC lib/trace/trace.o 00:16:09.162 CC lib/notify/notify_rpc.o 00:16:09.162 CC lib/notify/notify.o 00:16:09.419 LIB libspdk_notify.a 00:16:09.419 LIB libspdk_keyring.a 00:16:09.419 SO libspdk_notify.so.6.0 00:16:09.419 SO libspdk_keyring.so.2.0 00:16:09.419 LIB libspdk_trace.a 00:16:09.419 SO libspdk_trace.so.11.0 00:16:09.419 SYMLINK libspdk_keyring.so 00:16:09.419 SYMLINK libspdk_notify.so 00:16:09.419 SYMLINK libspdk_trace.so 00:16:09.678 CC lib/thread/thread.o 00:16:09.678 CC lib/thread/iobuf.o 00:16:09.678 CC lib/sock/sock_rpc.o 00:16:09.678 CC lib/sock/sock.o 00:16:09.936 LIB libspdk_sock.a 00:16:09.936 SO libspdk_sock.so.10.0 00:16:10.194 SYMLINK libspdk_sock.so 00:16:10.194 CC lib/nvme/nvme_fabric.o 00:16:10.194 CC lib/nvme/nvme_ctrlr_cmd.o 00:16:10.194 CC lib/nvme/nvme_ns_cmd.o 00:16:10.194 CC lib/nvme/nvme_ctrlr.o 00:16:10.194 CC lib/nvme/nvme_pcie_common.o 00:16:10.194 CC lib/nvme/nvme_pcie.o 00:16:10.194 CC lib/nvme/nvme_ns.o 00:16:10.194 CC lib/nvme/nvme.o 00:16:10.194 CC lib/nvme/nvme_qpair.o 00:16:11.140 CC lib/nvme/nvme_quirks.o 00:16:11.140 CC lib/nvme/nvme_transport.o 00:16:11.140 CC lib/nvme/nvme_discovery.o 00:16:11.140 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:16:11.140 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:16:11.140 CC lib/nvme/nvme_tcp.o 00:16:11.140 CC lib/nvme/nvme_opal.o 00:16:11.140 LIB libspdk_thread.a 00:16:11.428 SO libspdk_thread.so.10.1 00:16:11.428 CC lib/nvme/nvme_io_msg.o 00:16:11.428 CC 
lib/nvme/nvme_poll_group.o 00:16:11.428 SYMLINK libspdk_thread.so 00:16:11.428 CC lib/nvme/nvme_zns.o 00:16:11.428 CC lib/nvme/nvme_stubs.o 00:16:11.428 CC lib/nvme/nvme_auth.o 00:16:11.428 CC lib/nvme/nvme_cuse.o 00:16:11.699 CC lib/nvme/nvme_rdma.o 00:16:11.699 CC lib/accel/accel.o 00:16:11.699 CC lib/blob/blobstore.o 00:16:11.956 CC lib/init/json_config.o 00:16:11.956 CC lib/init/subsystem.o 00:16:11.956 CC lib/virtio/virtio.o 00:16:11.956 CC lib/virtio/virtio_vhost_user.o 00:16:11.956 CC lib/init/subsystem_rpc.o 00:16:12.214 CC lib/init/rpc.o 00:16:12.214 CC lib/virtio/virtio_vfio_user.o 00:16:12.214 CC lib/virtio/virtio_pci.o 00:16:12.472 LIB libspdk_init.a 00:16:12.472 SO libspdk_init.so.6.0 00:16:12.472 CC lib/fsdev/fsdev.o 00:16:12.472 CC lib/fsdev/fsdev_io.o 00:16:12.472 SYMLINK libspdk_init.so 00:16:12.472 CC lib/blob/request.o 00:16:12.472 CC lib/accel/accel_rpc.o 00:16:12.729 CC lib/accel/accel_sw.o 00:16:12.729 LIB libspdk_virtio.a 00:16:12.729 SO libspdk_virtio.so.7.0 00:16:12.729 SYMLINK libspdk_virtio.so 00:16:12.729 CC lib/fsdev/fsdev_rpc.o 00:16:12.729 CC lib/blob/zeroes.o 00:16:12.729 CC lib/blob/blob_bs_dev.o 00:16:12.729 CC lib/event/app.o 00:16:12.729 CC lib/event/reactor.o 00:16:12.729 CC lib/event/log_rpc.o 00:16:12.729 CC lib/event/app_rpc.o 00:16:13.012 LIB libspdk_nvme.a 00:16:13.012 CC lib/event/scheduler_static.o 00:16:13.012 LIB libspdk_fsdev.a 00:16:13.012 LIB libspdk_accel.a 00:16:13.012 SO libspdk_fsdev.so.1.0 00:16:13.012 SO libspdk_accel.so.16.0 00:16:13.012 SYMLINK libspdk_fsdev.so 00:16:13.012 SYMLINK libspdk_accel.so 00:16:13.012 SO libspdk_nvme.so.14.0 00:16:13.270 CC lib/bdev/bdev.o 00:16:13.270 CC lib/bdev/bdev_rpc.o 00:16:13.270 CC lib/bdev/part.o 00:16:13.270 CC lib/bdev/bdev_zone.o 00:16:13.270 CC lib/bdev/scsi_nvme.o 00:16:13.270 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:16:13.270 LIB libspdk_event.a 00:16:13.270 SYMLINK libspdk_nvme.so 00:16:13.270 SO libspdk_event.so.14.0 00:16:13.270 SYMLINK libspdk_event.so 00:16:13.834 LIB libspdk_fuse_dispatcher.a 00:16:13.834 SO libspdk_fuse_dispatcher.so.1.0 00:16:13.834 SYMLINK libspdk_fuse_dispatcher.so 00:16:14.765 LIB libspdk_blob.a 00:16:15.022 SO libspdk_blob.so.11.0 00:16:15.022 SYMLINK libspdk_blob.so 00:16:15.280 CC lib/blobfs/blobfs.o 00:16:15.280 CC lib/lvol/lvol.o 00:16:15.280 CC lib/blobfs/tree.o 00:16:15.849 LIB libspdk_bdev.a 00:16:15.849 SO libspdk_bdev.so.17.0 00:16:16.107 LIB libspdk_blobfs.a 00:16:16.107 SYMLINK libspdk_bdev.so 00:16:16.107 SO libspdk_blobfs.so.10.0 00:16:16.107 SYMLINK libspdk_blobfs.so 00:16:16.107 CC lib/ftl/ftl_core.o 00:16:16.107 CC lib/nbd/nbd.o 00:16:16.107 CC lib/nbd/nbd_rpc.o 00:16:16.107 CC lib/ftl/ftl_layout.o 00:16:16.107 CC lib/ftl/ftl_debug.o 00:16:16.107 CC lib/ftl/ftl_init.o 00:16:16.107 CC lib/ublk/ublk.o 00:16:16.107 CC lib/nvmf/ctrlr.o 00:16:16.107 CC lib/scsi/dev.o 00:16:16.365 LIB libspdk_lvol.a 00:16:16.365 SO libspdk_lvol.so.10.0 00:16:16.365 CC lib/scsi/lun.o 00:16:16.365 SYMLINK libspdk_lvol.so 00:16:16.365 CC lib/scsi/port.o 00:16:16.366 CC lib/scsi/scsi.o 00:16:16.366 CC lib/scsi/scsi_bdev.o 00:16:16.366 CC lib/scsi/scsi_pr.o 00:16:16.366 CC lib/nvmf/ctrlr_discovery.o 00:16:16.366 CC lib/nvmf/ctrlr_bdev.o 00:16:16.366 CC lib/nvmf/subsystem.o 00:16:16.623 CC lib/nvmf/nvmf.o 00:16:16.623 LIB libspdk_nbd.a 00:16:16.623 SO libspdk_nbd.so.7.0 00:16:16.623 CC lib/ftl/ftl_io.o 00:16:16.623 SYMLINK libspdk_nbd.so 00:16:16.623 CC lib/ftl/ftl_sb.o 00:16:16.623 CC lib/ftl/ftl_l2p.o 00:16:16.881 CC lib/ublk/ublk_rpc.o 00:16:16.881 CC 
lib/nvmf/nvmf_rpc.o 00:16:16.881 CC lib/nvmf/transport.o 00:16:16.881 CC lib/scsi/scsi_rpc.o 00:16:16.881 CC lib/ftl/ftl_l2p_flat.o 00:16:16.881 CC lib/ftl/ftl_nv_cache.o 00:16:16.881 LIB libspdk_ublk.a 00:16:16.881 SO libspdk_ublk.so.3.0 00:16:16.881 CC lib/scsi/task.o 00:16:17.138 SYMLINK libspdk_ublk.so 00:16:17.138 CC lib/ftl/ftl_band.o 00:16:17.138 CC lib/ftl/ftl_band_ops.o 00:16:17.138 CC lib/nvmf/tcp.o 00:16:17.138 LIB libspdk_scsi.a 00:16:17.138 CC lib/nvmf/stubs.o 00:16:17.138 SO libspdk_scsi.so.9.0 00:16:17.395 SYMLINK libspdk_scsi.so 00:16:17.395 CC lib/nvmf/mdns_server.o 00:16:17.395 CC lib/nvmf/rdma.o 00:16:17.395 CC lib/ftl/ftl_writer.o 00:16:17.652 CC lib/iscsi/conn.o 00:16:17.652 CC lib/iscsi/init_grp.o 00:16:17.652 CC lib/vhost/vhost.o 00:16:17.652 CC lib/vhost/vhost_rpc.o 00:16:17.652 CC lib/vhost/vhost_scsi.o 00:16:17.652 CC lib/vhost/vhost_blk.o 00:16:17.652 CC lib/vhost/rte_vhost_user.o 00:16:17.909 CC lib/ftl/ftl_rq.o 00:16:17.909 CC lib/iscsi/iscsi.o 00:16:17.909 CC lib/nvmf/auth.o 00:16:18.167 CC lib/ftl/ftl_reloc.o 00:16:18.167 CC lib/ftl/ftl_l2p_cache.o 00:16:18.167 CC lib/ftl/ftl_p2l.o 00:16:18.425 CC lib/ftl/ftl_p2l_log.o 00:16:18.683 CC lib/iscsi/param.o 00:16:18.683 CC lib/ftl/mngt/ftl_mngt.o 00:16:18.683 CC lib/iscsi/portal_grp.o 00:16:18.683 CC lib/iscsi/tgt_node.o 00:16:18.683 CC lib/iscsi/iscsi_subsystem.o 00:16:18.683 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:16:18.683 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:16:18.683 LIB libspdk_vhost.a 00:16:18.941 SO libspdk_vhost.so.8.0 00:16:18.941 CC lib/ftl/mngt/ftl_mngt_startup.o 00:16:18.941 CC lib/iscsi/iscsi_rpc.o 00:16:18.941 CC lib/iscsi/task.o 00:16:18.941 CC lib/ftl/mngt/ftl_mngt_md.o 00:16:18.941 SYMLINK libspdk_vhost.so 00:16:18.941 CC lib/ftl/mngt/ftl_mngt_misc.o 00:16:18.941 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:16:18.941 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:16:18.941 CC lib/ftl/mngt/ftl_mngt_band.o 00:16:18.941 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:16:19.198 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:16:19.198 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:16:19.198 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:16:19.198 CC lib/ftl/utils/ftl_conf.o 00:16:19.199 CC lib/ftl/utils/ftl_md.o 00:16:19.199 CC lib/ftl/utils/ftl_mempool.o 00:16:19.199 CC lib/ftl/utils/ftl_bitmap.o 00:16:19.199 CC lib/ftl/utils/ftl_property.o 00:16:19.199 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:16:19.199 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:16:19.456 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:16:19.456 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:16:19.456 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:16:19.456 LIB libspdk_iscsi.a 00:16:19.456 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:16:19.456 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:16:19.456 LIB libspdk_nvmf.a 00:16:19.456 SO libspdk_iscsi.so.8.0 00:16:19.456 CC lib/ftl/upgrade/ftl_sb_v3.o 00:16:19.456 CC lib/ftl/upgrade/ftl_sb_v5.o 00:16:19.456 CC lib/ftl/nvc/ftl_nvc_dev.o 00:16:19.456 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:16:19.456 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:16:19.456 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:16:19.456 CC lib/ftl/base/ftl_base_dev.o 00:16:19.456 SO libspdk_nvmf.so.19.0 00:16:19.714 SYMLINK libspdk_iscsi.so 00:16:19.714 CC lib/ftl/base/ftl_base_bdev.o 00:16:19.714 CC lib/ftl/ftl_trace.o 00:16:19.714 SYMLINK libspdk_nvmf.so 00:16:19.971 LIB libspdk_ftl.a 00:16:19.971 SO libspdk_ftl.so.9.0 00:16:20.230 SYMLINK libspdk_ftl.so 00:16:20.489 CC module/env_dpdk/env_dpdk_rpc.o 00:16:20.489 CC module/sock/posix/posix.o 00:16:20.489 CC module/accel/error/accel_error.o 00:16:20.489 CC 
module/accel/dsa/accel_dsa.o 00:16:20.489 CC module/accel/ioat/accel_ioat.o 00:16:20.489 CC module/accel/iaa/accel_iaa.o 00:16:20.489 CC module/fsdev/aio/fsdev_aio.o 00:16:20.489 CC module/blob/bdev/blob_bdev.o 00:16:20.489 CC module/keyring/file/keyring.o 00:16:20.489 CC module/scheduler/dynamic/scheduler_dynamic.o 00:16:20.750 LIB libspdk_env_dpdk_rpc.a 00:16:20.750 SO libspdk_env_dpdk_rpc.so.6.0 00:16:20.750 SYMLINK libspdk_env_dpdk_rpc.so 00:16:20.750 CC module/accel/error/accel_error_rpc.o 00:16:20.750 CC module/accel/ioat/accel_ioat_rpc.o 00:16:20.750 CC module/keyring/file/keyring_rpc.o 00:16:20.750 CC module/accel/dsa/accel_dsa_rpc.o 00:16:20.750 LIB libspdk_scheduler_dynamic.a 00:16:20.750 CC module/accel/iaa/accel_iaa_rpc.o 00:16:20.750 SO libspdk_scheduler_dynamic.so.4.0 00:16:20.750 LIB libspdk_accel_error.a 00:16:20.750 LIB libspdk_accel_ioat.a 00:16:20.750 SO libspdk_accel_error.so.2.0 00:16:20.750 SO libspdk_accel_ioat.so.6.0 00:16:20.750 LIB libspdk_keyring_file.a 00:16:20.750 LIB libspdk_blob_bdev.a 00:16:20.750 SYMLINK libspdk_scheduler_dynamic.so 00:16:21.010 LIB libspdk_accel_dsa.a 00:16:21.010 SO libspdk_blob_bdev.so.11.0 00:16:21.010 SO libspdk_keyring_file.so.2.0 00:16:21.010 SO libspdk_accel_dsa.so.5.0 00:16:21.010 LIB libspdk_accel_iaa.a 00:16:21.010 SYMLINK libspdk_accel_error.so 00:16:21.010 SYMLINK libspdk_accel_ioat.so 00:16:21.010 CC module/fsdev/aio/fsdev_aio_rpc.o 00:16:21.010 SYMLINK libspdk_keyring_file.so 00:16:21.010 SO libspdk_accel_iaa.so.3.0 00:16:21.010 SYMLINK libspdk_blob_bdev.so 00:16:21.010 CC module/fsdev/aio/linux_aio_mgr.o 00:16:21.010 SYMLINK libspdk_accel_dsa.so 00:16:21.010 SYMLINK libspdk_accel_iaa.so 00:16:21.010 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:16:21.010 CC module/scheduler/gscheduler/gscheduler.o 00:16:21.010 CC module/keyring/linux/keyring.o 00:16:21.010 LIB libspdk_scheduler_gscheduler.a 00:16:21.010 CC module/keyring/linux/keyring_rpc.o 00:16:21.269 LIB libspdk_scheduler_dpdk_governor.a 00:16:21.269 SO libspdk_scheduler_gscheduler.so.4.0 00:16:21.269 CC module/bdev/error/vbdev_error.o 00:16:21.269 CC module/blobfs/bdev/blobfs_bdev.o 00:16:21.269 CC module/bdev/delay/vbdev_delay.o 00:16:21.269 SO libspdk_scheduler_dpdk_governor.so.4.0 00:16:21.269 CC module/bdev/gpt/gpt.o 00:16:21.269 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:16:21.269 SYMLINK libspdk_scheduler_gscheduler.so 00:16:21.269 CC module/bdev/gpt/vbdev_gpt.o 00:16:21.269 SYMLINK libspdk_scheduler_dpdk_governor.so 00:16:21.269 CC module/bdev/error/vbdev_error_rpc.o 00:16:21.269 LIB libspdk_keyring_linux.a 00:16:21.269 SO libspdk_keyring_linux.so.1.0 00:16:21.269 LIB libspdk_fsdev_aio.a 00:16:21.269 SYMLINK libspdk_keyring_linux.so 00:16:21.269 CC module/bdev/delay/vbdev_delay_rpc.o 00:16:21.269 SO libspdk_fsdev_aio.so.1.0 00:16:21.269 LIB libspdk_blobfs_bdev.a 00:16:21.269 LIB libspdk_sock_posix.a 00:16:21.269 SO libspdk_sock_posix.so.6.0 00:16:21.269 SO libspdk_blobfs_bdev.so.6.0 00:16:21.527 LIB libspdk_bdev_error.a 00:16:21.527 SYMLINK libspdk_fsdev_aio.so 00:16:21.527 LIB libspdk_bdev_gpt.a 00:16:21.527 SO libspdk_bdev_error.so.6.0 00:16:21.527 SYMLINK libspdk_blobfs_bdev.so 00:16:21.527 SO libspdk_bdev_gpt.so.6.0 00:16:21.527 SYMLINK libspdk_bdev_error.so 00:16:21.527 SYMLINK libspdk_sock_posix.so 00:16:21.527 CC module/bdev/lvol/vbdev_lvol.o 00:16:21.527 CC module/bdev/malloc/bdev_malloc.o 00:16:21.527 CC module/bdev/null/bdev_null.o 00:16:21.527 SYMLINK libspdk_bdev_gpt.so 00:16:21.527 CC module/bdev/malloc/bdev_malloc_rpc.o 00:16:21.527 LIB 
libspdk_bdev_delay.a 00:16:21.527 SO libspdk_bdev_delay.so.6.0 00:16:21.527 CC module/bdev/nvme/bdev_nvme.o 00:16:21.527 CC module/bdev/raid/bdev_raid.o 00:16:21.527 CC module/bdev/split/vbdev_split.o 00:16:21.527 CC module/bdev/passthru/vbdev_passthru.o 00:16:21.527 CC module/bdev/zone_block/vbdev_zone_block.o 00:16:21.527 SYMLINK libspdk_bdev_delay.so 00:16:21.527 CC module/bdev/raid/bdev_raid_rpc.o 00:16:21.788 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:16:21.788 CC module/bdev/null/bdev_null_rpc.o 00:16:21.788 CC module/bdev/raid/bdev_raid_sb.o 00:16:21.788 CC module/bdev/split/vbdev_split_rpc.o 00:16:21.788 LIB libspdk_bdev_passthru.a 00:16:22.046 SO libspdk_bdev_passthru.so.6.0 00:16:22.046 LIB libspdk_bdev_malloc.a 00:16:22.046 LIB libspdk_bdev_null.a 00:16:22.046 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:16:22.046 SO libspdk_bdev_null.so.6.0 00:16:22.046 CC module/bdev/xnvme/bdev_xnvme.o 00:16:22.046 SO libspdk_bdev_malloc.so.6.0 00:16:22.046 SYMLINK libspdk_bdev_passthru.so 00:16:22.046 CC module/bdev/raid/raid0.o 00:16:22.046 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:16:22.046 SYMLINK libspdk_bdev_null.so 00:16:22.046 SYMLINK libspdk_bdev_malloc.so 00:16:22.046 CC module/bdev/raid/raid1.o 00:16:22.046 CC module/bdev/nvme/bdev_nvme_rpc.o 00:16:22.046 LIB libspdk_bdev_split.a 00:16:22.046 SO libspdk_bdev_split.so.6.0 00:16:22.046 CC module/bdev/raid/concat.o 00:16:22.046 LIB libspdk_bdev_zone_block.a 00:16:22.046 SYMLINK libspdk_bdev_split.so 00:16:22.046 SO libspdk_bdev_zone_block.so.6.0 00:16:22.306 SYMLINK libspdk_bdev_zone_block.so 00:16:22.306 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:16:22.306 CC module/bdev/aio/bdev_aio.o 00:16:22.306 CC module/bdev/nvme/nvme_rpc.o 00:16:22.306 CC module/bdev/ftl/bdev_ftl.o 00:16:22.306 CC module/bdev/iscsi/bdev_iscsi.o 00:16:22.306 LIB libspdk_bdev_lvol.a 00:16:22.306 LIB libspdk_bdev_xnvme.a 00:16:22.306 SO libspdk_bdev_lvol.so.6.0 00:16:22.567 SO libspdk_bdev_xnvme.so.3.0 00:16:22.567 CC module/bdev/virtio/bdev_virtio_scsi.o 00:16:22.567 SYMLINK libspdk_bdev_xnvme.so 00:16:22.567 CC module/bdev/aio/bdev_aio_rpc.o 00:16:22.567 CC module/bdev/nvme/bdev_mdns_client.o 00:16:22.567 SYMLINK libspdk_bdev_lvol.so 00:16:22.567 CC module/bdev/nvme/vbdev_opal.o 00:16:22.567 CC module/bdev/nvme/vbdev_opal_rpc.o 00:16:22.567 CC module/bdev/ftl/bdev_ftl_rpc.o 00:16:22.567 LIB libspdk_bdev_raid.a 00:16:22.567 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:16:22.824 LIB libspdk_bdev_aio.a 00:16:22.824 CC module/bdev/virtio/bdev_virtio_blk.o 00:16:22.824 SO libspdk_bdev_raid.so.6.0 00:16:22.824 SO libspdk_bdev_aio.so.6.0 00:16:22.824 LIB libspdk_bdev_ftl.a 00:16:22.824 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:16:22.824 SYMLINK libspdk_bdev_aio.so 00:16:22.824 SYMLINK libspdk_bdev_raid.so 00:16:22.824 SO libspdk_bdev_ftl.so.6.0 00:16:22.824 CC module/bdev/virtio/bdev_virtio_rpc.o 00:16:22.824 SYMLINK libspdk_bdev_ftl.so 00:16:22.824 LIB libspdk_bdev_iscsi.a 00:16:22.824 SO libspdk_bdev_iscsi.so.6.0 00:16:23.085 SYMLINK libspdk_bdev_iscsi.so 00:16:23.085 LIB libspdk_bdev_virtio.a 00:16:23.085 SO libspdk_bdev_virtio.so.6.0 00:16:23.085 SYMLINK libspdk_bdev_virtio.so 00:16:24.053 LIB libspdk_bdev_nvme.a 00:16:24.053 SO libspdk_bdev_nvme.so.7.0 00:16:24.053 SYMLINK libspdk_bdev_nvme.so 00:16:24.619 CC module/event/subsystems/scheduler/scheduler.o 00:16:24.619 CC module/event/subsystems/iobuf/iobuf.o 00:16:24.619 CC module/event/subsystems/vmd/vmd.o 00:16:24.619 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:16:24.619 CC 
module/event/subsystems/vmd/vmd_rpc.o 00:16:24.619 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:16:24.619 CC module/event/subsystems/keyring/keyring.o 00:16:24.619 CC module/event/subsystems/sock/sock.o 00:16:24.619 CC module/event/subsystems/fsdev/fsdev.o 00:16:24.619 LIB libspdk_event_sock.a 00:16:24.619 LIB libspdk_event_vhost_blk.a 00:16:24.619 LIB libspdk_event_scheduler.a 00:16:24.619 LIB libspdk_event_keyring.a 00:16:24.619 LIB libspdk_event_vmd.a 00:16:24.619 LIB libspdk_event_fsdev.a 00:16:24.619 SO libspdk_event_sock.so.5.0 00:16:24.619 LIB libspdk_event_iobuf.a 00:16:24.619 SO libspdk_event_vhost_blk.so.3.0 00:16:24.619 SO libspdk_event_keyring.so.1.0 00:16:24.619 SO libspdk_event_scheduler.so.4.0 00:16:24.619 SO libspdk_event_fsdev.so.1.0 00:16:24.619 SO libspdk_event_vmd.so.6.0 00:16:24.619 SO libspdk_event_iobuf.so.3.0 00:16:24.619 SYMLINK libspdk_event_sock.so 00:16:24.619 SYMLINK libspdk_event_vhost_blk.so 00:16:24.619 SYMLINK libspdk_event_keyring.so 00:16:24.619 SYMLINK libspdk_event_scheduler.so 00:16:24.619 SYMLINK libspdk_event_fsdev.so 00:16:24.619 SYMLINK libspdk_event_vmd.so 00:16:24.619 SYMLINK libspdk_event_iobuf.so 00:16:24.879 CC module/event/subsystems/accel/accel.o 00:16:25.140 LIB libspdk_event_accel.a 00:16:25.140 SO libspdk_event_accel.so.6.0 00:16:25.140 SYMLINK libspdk_event_accel.so 00:16:25.400 CC module/event/subsystems/bdev/bdev.o 00:16:25.400 LIB libspdk_event_bdev.a 00:16:25.400 SO libspdk_event_bdev.so.6.0 00:16:25.661 SYMLINK libspdk_event_bdev.so 00:16:25.661 CC module/event/subsystems/ublk/ublk.o 00:16:25.661 CC module/event/subsystems/nbd/nbd.o 00:16:25.661 CC module/event/subsystems/scsi/scsi.o 00:16:25.661 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:16:25.661 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:16:25.922 LIB libspdk_event_ublk.a 00:16:25.922 LIB libspdk_event_nbd.a 00:16:25.922 SO libspdk_event_nbd.so.6.0 00:16:25.922 SO libspdk_event_ublk.so.3.0 00:16:25.922 LIB libspdk_event_scsi.a 00:16:25.922 SO libspdk_event_scsi.so.6.0 00:16:25.922 SYMLINK libspdk_event_nbd.so 00:16:25.922 SYMLINK libspdk_event_ublk.so 00:16:25.922 LIB libspdk_event_nvmf.a 00:16:25.922 SYMLINK libspdk_event_scsi.so 00:16:25.922 SO libspdk_event_nvmf.so.6.0 00:16:25.922 SYMLINK libspdk_event_nvmf.so 00:16:26.182 CC module/event/subsystems/iscsi/iscsi.o 00:16:26.182 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:16:26.182 LIB libspdk_event_vhost_scsi.a 00:16:26.182 LIB libspdk_event_iscsi.a 00:16:26.182 SO libspdk_event_vhost_scsi.so.3.0 00:16:26.182 SO libspdk_event_iscsi.so.6.0 00:16:26.182 SYMLINK libspdk_event_vhost_scsi.so 00:16:26.442 SYMLINK libspdk_event_iscsi.so 00:16:26.442 SO libspdk.so.6.0 00:16:26.442 SYMLINK libspdk.so 00:16:26.700 TEST_HEADER include/spdk/accel.h 00:16:26.700 CC app/trace_record/trace_record.o 00:16:26.700 TEST_HEADER include/spdk/accel_module.h 00:16:26.700 TEST_HEADER include/spdk/assert.h 00:16:26.700 TEST_HEADER include/spdk/barrier.h 00:16:26.700 CC test/rpc_client/rpc_client_test.o 00:16:26.700 CXX app/trace/trace.o 00:16:26.700 TEST_HEADER include/spdk/base64.h 00:16:26.700 TEST_HEADER include/spdk/bdev.h 00:16:26.700 TEST_HEADER include/spdk/bdev_module.h 00:16:26.700 TEST_HEADER include/spdk/bdev_zone.h 00:16:26.700 TEST_HEADER include/spdk/bit_array.h 00:16:26.700 TEST_HEADER include/spdk/bit_pool.h 00:16:26.700 TEST_HEADER include/spdk/blob_bdev.h 00:16:26.700 TEST_HEADER include/spdk/blobfs_bdev.h 00:16:26.700 TEST_HEADER include/spdk/blobfs.h 00:16:26.700 TEST_HEADER include/spdk/blob.h 00:16:26.700 
TEST_HEADER include/spdk/conf.h 00:16:26.700 TEST_HEADER include/spdk/config.h 00:16:26.700 TEST_HEADER include/spdk/cpuset.h 00:16:26.700 TEST_HEADER include/spdk/crc16.h 00:16:26.700 TEST_HEADER include/spdk/crc32.h 00:16:26.700 TEST_HEADER include/spdk/crc64.h 00:16:26.700 TEST_HEADER include/spdk/dif.h 00:16:26.700 TEST_HEADER include/spdk/dma.h 00:16:26.700 TEST_HEADER include/spdk/endian.h 00:16:26.700 TEST_HEADER include/spdk/env_dpdk.h 00:16:26.700 TEST_HEADER include/spdk/env.h 00:16:26.700 TEST_HEADER include/spdk/event.h 00:16:26.700 TEST_HEADER include/spdk/fd_group.h 00:16:26.700 TEST_HEADER include/spdk/fd.h 00:16:26.700 TEST_HEADER include/spdk/file.h 00:16:26.700 TEST_HEADER include/spdk/fsdev.h 00:16:26.700 TEST_HEADER include/spdk/fsdev_module.h 00:16:26.700 TEST_HEADER include/spdk/ftl.h 00:16:26.700 TEST_HEADER include/spdk/fuse_dispatcher.h 00:16:26.700 TEST_HEADER include/spdk/gpt_spec.h 00:16:26.700 TEST_HEADER include/spdk/hexlify.h 00:16:26.700 TEST_HEADER include/spdk/histogram_data.h 00:16:26.700 TEST_HEADER include/spdk/idxd.h 00:16:26.700 TEST_HEADER include/spdk/idxd_spec.h 00:16:26.700 TEST_HEADER include/spdk/init.h 00:16:26.700 CC examples/ioat/perf/perf.o 00:16:26.700 CC test/thread/poller_perf/poller_perf.o 00:16:26.700 TEST_HEADER include/spdk/ioat.h 00:16:26.700 TEST_HEADER include/spdk/ioat_spec.h 00:16:26.700 TEST_HEADER include/spdk/iscsi_spec.h 00:16:26.700 TEST_HEADER include/spdk/json.h 00:16:26.700 TEST_HEADER include/spdk/jsonrpc.h 00:16:26.700 TEST_HEADER include/spdk/keyring.h 00:16:26.700 TEST_HEADER include/spdk/keyring_module.h 00:16:26.700 CC examples/util/zipf/zipf.o 00:16:26.700 TEST_HEADER include/spdk/likely.h 00:16:26.700 TEST_HEADER include/spdk/log.h 00:16:26.700 TEST_HEADER include/spdk/lvol.h 00:16:26.700 TEST_HEADER include/spdk/md5.h 00:16:26.700 TEST_HEADER include/spdk/memory.h 00:16:26.700 TEST_HEADER include/spdk/mmio.h 00:16:26.700 TEST_HEADER include/spdk/nbd.h 00:16:26.700 TEST_HEADER include/spdk/net.h 00:16:26.700 TEST_HEADER include/spdk/notify.h 00:16:26.700 CC test/app/bdev_svc/bdev_svc.o 00:16:26.700 TEST_HEADER include/spdk/nvme.h 00:16:26.700 TEST_HEADER include/spdk/nvme_intel.h 00:16:26.700 TEST_HEADER include/spdk/nvme_ocssd.h 00:16:26.700 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:16:26.700 TEST_HEADER include/spdk/nvme_spec.h 00:16:26.700 CC test/dma/test_dma/test_dma.o 00:16:26.700 TEST_HEADER include/spdk/nvme_zns.h 00:16:26.701 TEST_HEADER include/spdk/nvmf_cmd.h 00:16:26.701 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:16:26.701 TEST_HEADER include/spdk/nvmf.h 00:16:26.701 TEST_HEADER include/spdk/nvmf_spec.h 00:16:26.701 TEST_HEADER include/spdk/nvmf_transport.h 00:16:26.701 TEST_HEADER include/spdk/opal.h 00:16:26.701 TEST_HEADER include/spdk/opal_spec.h 00:16:26.701 TEST_HEADER include/spdk/pci_ids.h 00:16:26.701 TEST_HEADER include/spdk/pipe.h 00:16:26.701 TEST_HEADER include/spdk/queue.h 00:16:26.701 TEST_HEADER include/spdk/reduce.h 00:16:26.701 TEST_HEADER include/spdk/rpc.h 00:16:26.701 TEST_HEADER include/spdk/scheduler.h 00:16:26.701 TEST_HEADER include/spdk/scsi.h 00:16:26.701 TEST_HEADER include/spdk/scsi_spec.h 00:16:26.701 TEST_HEADER include/spdk/sock.h 00:16:26.701 TEST_HEADER include/spdk/stdinc.h 00:16:26.701 TEST_HEADER include/spdk/string.h 00:16:26.701 TEST_HEADER include/spdk/thread.h 00:16:26.701 TEST_HEADER include/spdk/trace.h 00:16:26.701 CC test/env/mem_callbacks/mem_callbacks.o 00:16:26.701 TEST_HEADER include/spdk/trace_parser.h 00:16:26.701 TEST_HEADER include/spdk/tree.h 
00:16:26.701 TEST_HEADER include/spdk/ublk.h 00:16:26.701 TEST_HEADER include/spdk/util.h 00:16:26.701 LINK rpc_client_test 00:16:26.701 TEST_HEADER include/spdk/uuid.h 00:16:26.701 TEST_HEADER include/spdk/version.h 00:16:26.701 TEST_HEADER include/spdk/vfio_user_pci.h 00:16:26.701 TEST_HEADER include/spdk/vfio_user_spec.h 00:16:26.701 TEST_HEADER include/spdk/vhost.h 00:16:26.701 TEST_HEADER include/spdk/vmd.h 00:16:26.701 TEST_HEADER include/spdk/xor.h 00:16:26.701 TEST_HEADER include/spdk/zipf.h 00:16:26.701 CXX test/cpp_headers/accel.o 00:16:26.958 LINK poller_perf 00:16:26.958 LINK spdk_trace_record 00:16:26.958 LINK zipf 00:16:26.958 LINK bdev_svc 00:16:26.958 LINK ioat_perf 00:16:26.958 CXX test/cpp_headers/accel_module.o 00:16:26.958 LINK spdk_trace 00:16:26.958 CC examples/interrupt_tgt/interrupt_tgt.o 00:16:26.958 CC app/nvmf_tgt/nvmf_main.o 00:16:26.958 CXX test/cpp_headers/assert.o 00:16:27.216 CC app/iscsi_tgt/iscsi_tgt.o 00:16:27.216 CC examples/ioat/verify/verify.o 00:16:27.216 CC test/event/event_perf/event_perf.o 00:16:27.216 CXX test/cpp_headers/barrier.o 00:16:27.216 LINK test_dma 00:16:27.216 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:16:27.216 LINK nvmf_tgt 00:16:27.216 CXX test/cpp_headers/base64.o 00:16:27.216 LINK interrupt_tgt 00:16:27.216 LINK iscsi_tgt 00:16:27.216 LINK event_perf 00:16:27.216 LINK mem_callbacks 00:16:27.216 LINK verify 00:16:27.476 CC app/spdk_tgt/spdk_tgt.o 00:16:27.476 CXX test/cpp_headers/bdev.o 00:16:27.476 CC app/spdk_lspci/spdk_lspci.o 00:16:27.476 CC test/event/reactor/reactor.o 00:16:27.476 CC test/event/reactor_perf/reactor_perf.o 00:16:27.476 CC test/env/vtophys/vtophys.o 00:16:27.476 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:16:27.476 CC test/accel/dif/dif.o 00:16:27.476 CXX test/cpp_headers/bdev_module.o 00:16:27.476 LINK spdk_tgt 00:16:27.476 LINK spdk_lspci 00:16:27.476 LINK reactor 00:16:27.737 LINK reactor_perf 00:16:27.737 LINK nvme_fuzz 00:16:27.737 CC examples/thread/thread/thread_ex.o 00:16:27.737 LINK vtophys 00:16:27.737 CXX test/cpp_headers/bdev_zone.o 00:16:27.737 CXX test/cpp_headers/bit_array.o 00:16:27.737 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:16:27.737 CC app/spdk_nvme_perf/perf.o 00:16:27.737 CC test/event/app_repeat/app_repeat.o 00:16:27.737 CC app/spdk_nvme_identify/identify.o 00:16:27.737 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:16:27.737 LINK thread 00:16:27.998 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:16:27.998 CXX test/cpp_headers/bit_pool.o 00:16:27.998 LINK app_repeat 00:16:27.998 CC test/env/memory/memory_ut.o 00:16:27.998 LINK env_dpdk_post_init 00:16:27.998 CXX test/cpp_headers/blob_bdev.o 00:16:28.259 CC examples/sock/hello_world/hello_sock.o 00:16:28.259 CC test/event/scheduler/scheduler.o 00:16:28.259 CC test/env/pci/pci_ut.o 00:16:28.259 CXX test/cpp_headers/blobfs_bdev.o 00:16:28.259 LINK dif 00:16:28.259 LINK vhost_fuzz 00:16:28.521 CXX test/cpp_headers/blobfs.o 00:16:28.521 LINK scheduler 00:16:28.521 CXX test/cpp_headers/blob.o 00:16:28.521 LINK hello_sock 00:16:28.521 CXX test/cpp_headers/conf.o 00:16:28.521 CXX test/cpp_headers/config.o 00:16:28.521 CXX test/cpp_headers/cpuset.o 00:16:28.521 CXX test/cpp_headers/crc16.o 00:16:28.521 CXX test/cpp_headers/crc32.o 00:16:28.521 LINK pci_ut 00:16:28.521 LINK spdk_nvme_perf 00:16:28.780 LINK spdk_nvme_identify 00:16:28.780 CC examples/vmd/lsvmd/lsvmd.o 00:16:28.780 CXX test/cpp_headers/crc64.o 00:16:28.780 CC test/blobfs/mkfs/mkfs.o 00:16:28.780 CXX test/cpp_headers/dif.o 00:16:28.780 LINK lsvmd 00:16:28.780 CC 
app/spdk_nvme_discover/discovery_aer.o 00:16:28.780 CC app/spdk_top/spdk_top.o 00:16:29.039 CC test/nvme/aer/aer.o 00:16:29.039 CC test/lvol/esnap/esnap.o 00:16:29.039 CC test/app/histogram_perf/histogram_perf.o 00:16:29.039 LINK mkfs 00:16:29.039 CXX test/cpp_headers/dma.o 00:16:29.039 CC examples/vmd/led/led.o 00:16:29.039 LINK spdk_nvme_discover 00:16:29.039 LINK memory_ut 00:16:29.039 LINK histogram_perf 00:16:29.039 CXX test/cpp_headers/endian.o 00:16:29.039 CXX test/cpp_headers/env_dpdk.o 00:16:29.297 LINK led 00:16:29.297 LINK aer 00:16:29.297 CC test/app/jsoncat/jsoncat.o 00:16:29.297 LINK iscsi_fuzz 00:16:29.297 CXX test/cpp_headers/env.o 00:16:29.297 CC app/vhost/vhost.o 00:16:29.297 CC app/spdk_dd/spdk_dd.o 00:16:29.297 CC test/bdev/bdevio/bdevio.o 00:16:29.297 LINK jsoncat 00:16:29.297 CC test/nvme/reset/reset.o 00:16:29.557 CXX test/cpp_headers/event.o 00:16:29.557 CC examples/idxd/perf/perf.o 00:16:29.557 LINK vhost 00:16:29.557 CXX test/cpp_headers/fd_group.o 00:16:29.557 CC test/app/stub/stub.o 00:16:29.557 CC examples/fsdev/hello_world/hello_fsdev.o 00:16:29.557 LINK spdk_dd 00:16:29.557 LINK reset 00:16:29.817 CXX test/cpp_headers/fd.o 00:16:29.817 LINK stub 00:16:29.817 LINK idxd_perf 00:16:29.817 CC app/fio/nvme/fio_plugin.o 00:16:29.817 LINK bdevio 00:16:29.817 CC test/nvme/sgl/sgl.o 00:16:29.817 CXX test/cpp_headers/file.o 00:16:29.817 LINK hello_fsdev 00:16:29.817 LINK spdk_top 00:16:29.817 CC app/fio/bdev/fio_plugin.o 00:16:30.078 CXX test/cpp_headers/fsdev.o 00:16:30.078 CC test/nvme/e2edp/nvme_dp.o 00:16:30.078 CC examples/accel/perf/accel_perf.o 00:16:30.078 CC test/nvme/overhead/overhead.o 00:16:30.078 CC examples/nvme/hello_world/hello_world.o 00:16:30.078 CC examples/blob/hello_world/hello_blob.o 00:16:30.078 LINK sgl 00:16:30.078 CXX test/cpp_headers/fsdev_module.o 00:16:30.340 LINK nvme_dp 00:16:30.340 CXX test/cpp_headers/ftl.o 00:16:30.340 LINK hello_world 00:16:30.340 CC test/nvme/err_injection/err_injection.o 00:16:30.340 LINK spdk_nvme 00:16:30.340 LINK hello_blob 00:16:30.340 LINK overhead 00:16:30.340 LINK spdk_bdev 00:16:30.340 CXX test/cpp_headers/fuse_dispatcher.o 00:16:30.601 CXX test/cpp_headers/gpt_spec.o 00:16:30.601 CC test/nvme/startup/startup.o 00:16:30.601 LINK err_injection 00:16:30.601 CC examples/nvme/reconnect/reconnect.o 00:16:30.601 CC examples/nvme/nvme_manage/nvme_manage.o 00:16:30.601 CXX test/cpp_headers/hexlify.o 00:16:30.601 CC examples/blob/cli/blobcli.o 00:16:30.601 CC test/nvme/reserve/reserve.o 00:16:30.601 LINK accel_perf 00:16:30.601 CXX test/cpp_headers/histogram_data.o 00:16:30.601 CXX test/cpp_headers/idxd.o 00:16:30.601 LINK startup 00:16:30.601 CXX test/cpp_headers/idxd_spec.o 00:16:30.862 LINK reserve 00:16:30.862 CXX test/cpp_headers/init.o 00:16:30.862 CC test/nvme/simple_copy/simple_copy.o 00:16:30.862 CXX test/cpp_headers/ioat.o 00:16:30.862 CC test/nvme/connect_stress/connect_stress.o 00:16:30.862 LINK reconnect 00:16:31.123 CC examples/bdev/hello_world/hello_bdev.o 00:16:31.123 CXX test/cpp_headers/ioat_spec.o 00:16:31.123 CC examples/nvme/arbitration/arbitration.o 00:16:31.123 CXX test/cpp_headers/iscsi_spec.o 00:16:31.123 LINK connect_stress 00:16:31.123 LINK simple_copy 00:16:31.123 LINK nvme_manage 00:16:31.123 CC examples/bdev/bdevperf/bdevperf.o 00:16:31.123 LINK blobcli 00:16:31.123 CXX test/cpp_headers/json.o 00:16:31.123 LINK hello_bdev 00:16:31.123 CC examples/nvme/hotplug/hotplug.o 00:16:31.382 CXX test/cpp_headers/jsonrpc.o 00:16:31.382 CC test/nvme/boot_partition/boot_partition.o 00:16:31.382 CC 
examples/nvme/cmb_copy/cmb_copy.o 00:16:31.382 CC test/nvme/compliance/nvme_compliance.o 00:16:31.382 LINK arbitration 00:16:31.382 CC test/nvme/fused_ordering/fused_ordering.o 00:16:31.382 CXX test/cpp_headers/keyring.o 00:16:31.382 LINK boot_partition 00:16:31.382 LINK cmb_copy 00:16:31.382 CC test/nvme/doorbell_aers/doorbell_aers.o 00:16:31.382 LINK hotplug 00:16:31.728 CC examples/nvme/abort/abort.o 00:16:31.728 CXX test/cpp_headers/keyring_module.o 00:16:31.728 CXX test/cpp_headers/likely.o 00:16:31.728 LINK fused_ordering 00:16:31.728 LINK doorbell_aers 00:16:31.728 CXX test/cpp_headers/log.o 00:16:31.728 LINK nvme_compliance 00:16:31.728 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:16:31.728 CXX test/cpp_headers/lvol.o 00:16:31.728 CXX test/cpp_headers/md5.o 00:16:31.728 CXX test/cpp_headers/memory.o 00:16:31.728 CC test/nvme/fdp/fdp.o 00:16:31.728 CC test/nvme/cuse/cuse.o 00:16:31.990 CXX test/cpp_headers/mmio.o 00:16:31.990 CXX test/cpp_headers/nbd.o 00:16:31.990 CXX test/cpp_headers/net.o 00:16:31.990 LINK pmr_persistence 00:16:31.991 LINK abort 00:16:31.991 CXX test/cpp_headers/notify.o 00:16:31.991 CXX test/cpp_headers/nvme.o 00:16:31.991 CXX test/cpp_headers/nvme_intel.o 00:16:31.991 LINK bdevperf 00:16:31.991 CXX test/cpp_headers/nvme_ocssd.o 00:16:31.991 CXX test/cpp_headers/nvme_ocssd_spec.o 00:16:31.991 CXX test/cpp_headers/nvme_spec.o 00:16:31.991 CXX test/cpp_headers/nvme_zns.o 00:16:31.991 CXX test/cpp_headers/nvmf_cmd.o 00:16:32.253 CXX test/cpp_headers/nvmf_fc_spec.o 00:16:32.253 LINK fdp 00:16:32.253 CXX test/cpp_headers/nvmf.o 00:16:32.253 CXX test/cpp_headers/nvmf_spec.o 00:16:32.253 CXX test/cpp_headers/nvmf_transport.o 00:16:32.253 CXX test/cpp_headers/opal.o 00:16:32.253 CXX test/cpp_headers/opal_spec.o 00:16:32.253 CXX test/cpp_headers/pci_ids.o 00:16:32.253 CXX test/cpp_headers/pipe.o 00:16:32.253 CXX test/cpp_headers/queue.o 00:16:32.253 CXX test/cpp_headers/reduce.o 00:16:32.253 CXX test/cpp_headers/rpc.o 00:16:32.514 CXX test/cpp_headers/scheduler.o 00:16:32.514 CXX test/cpp_headers/scsi.o 00:16:32.514 CXX test/cpp_headers/scsi_spec.o 00:16:32.514 CC examples/nvmf/nvmf/nvmf.o 00:16:32.514 CXX test/cpp_headers/sock.o 00:16:32.514 CXX test/cpp_headers/stdinc.o 00:16:32.514 CXX test/cpp_headers/string.o 00:16:32.514 CXX test/cpp_headers/thread.o 00:16:32.514 CXX test/cpp_headers/trace.o 00:16:32.514 CXX test/cpp_headers/trace_parser.o 00:16:32.514 CXX test/cpp_headers/tree.o 00:16:32.514 CXX test/cpp_headers/ublk.o 00:16:32.514 CXX test/cpp_headers/util.o 00:16:32.514 CXX test/cpp_headers/uuid.o 00:16:32.776 CXX test/cpp_headers/version.o 00:16:32.776 CXX test/cpp_headers/vfio_user_pci.o 00:16:32.776 CXX test/cpp_headers/vfio_user_spec.o 00:16:32.776 CXX test/cpp_headers/vhost.o 00:16:32.776 CXX test/cpp_headers/vmd.o 00:16:32.776 LINK nvmf 00:16:32.776 CXX test/cpp_headers/xor.o 00:16:32.776 CXX test/cpp_headers/zipf.o 00:16:33.038 LINK cuse 00:16:34.424 LINK esnap 00:16:35.036 00:16:35.036 real 1m6.292s 00:16:35.036 user 6m21.924s 00:16:35.036 sys 1m6.622s 00:16:35.036 20:14:29 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:16:35.036 20:14:29 make -- common/autotest_common.sh@10 -- $ set +x 00:16:35.036 ************************************ 00:16:35.036 END TEST make 00:16:35.036 ************************************ 00:16:35.036 20:14:29 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:16:35.036 20:14:29 -- pm/common@29 -- $ signal_monitor_resources TERM 00:16:35.037 20:14:29 -- pm/common@40 -- $ local monitor pid pids signal=TERM 
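The pm/common calls traced above follow a simple pidfile protocol: each resource collector writes its pid to <output>/power/<monitor>.pid, and teardown signals whatever pid is still recorded there (the "kill -TERM 5058" / "kill -TERM 5060" lines). A minimal bash sketch of that pattern, assuming the same pidfile layout; the function body is a reconstruction from the trace, not the repository source:

#!/usr/bin/env bash
# Hedged sketch of pm/common's signal_monitor_resources, reconstructed
# from the xtrace above. PM_OUTPUTDIR and the MONITOR_RESOURCES contents
# are assumed names; the pidfile paths match the ones visible in the log
# (e.g. .../output/power/collect-cpu-load.pid).
MONITOR_RESOURCES=(collect-cpu-load collect-vmstat)
signal_monitor_resources() {
  local signal=$1 monitor pid pidfile
  for monitor in "${MONITOR_RESOURCES[@]}"; do
    pidfile="$PM_OUTPUTDIR/power/${monitor}.pid"
    [[ -e $pidfile ]] || continue                 # monitor never started
    pid=$(<"$pidfile")
    kill -"$signal" "$pid" 2>/dev/null || true    # e.g. kill -TERM 5058
  done
}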
00:16:35.037 20:14:29 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:16:35.037 20:14:29 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:16:35.037 20:14:29 -- pm/common@44 -- $ pid=5058 00:16:35.037 20:14:29 -- pm/common@50 -- $ kill -TERM 5058 00:16:35.037 20:14:29 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:16:35.037 20:14:29 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:16:35.037 20:14:29 -- pm/common@44 -- $ pid=5060 00:16:35.037 20:14:29 -- pm/common@50 -- $ kill -TERM 5060 00:16:35.037 20:14:30 -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:16:35.037 20:14:30 -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:16:35.037 20:14:30 -- common/autotest_common.sh@1681 -- # lcov --version 00:16:35.037 20:14:30 -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:16:35.037 20:14:30 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:35.037 20:14:30 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:35.037 20:14:30 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:35.037 20:14:30 -- scripts/common.sh@336 -- # IFS=.-: 00:16:35.037 20:14:30 -- scripts/common.sh@336 -- # read -ra ver1 00:16:35.037 20:14:30 -- scripts/common.sh@337 -- # IFS=.-: 00:16:35.037 20:14:30 -- scripts/common.sh@337 -- # read -ra ver2 00:16:35.037 20:14:30 -- scripts/common.sh@338 -- # local 'op=<' 00:16:35.037 20:14:30 -- scripts/common.sh@340 -- # ver1_l=2 00:16:35.037 20:14:30 -- scripts/common.sh@341 -- # ver2_l=1 00:16:35.037 20:14:30 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:35.037 20:14:30 -- scripts/common.sh@344 -- # case "$op" in 00:16:35.037 20:14:30 -- scripts/common.sh@345 -- # : 1 00:16:35.037 20:14:30 -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:35.037 20:14:30 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:35.037 20:14:30 -- scripts/common.sh@365 -- # decimal 1 00:16:35.037 20:14:30 -- scripts/common.sh@353 -- # local d=1 00:16:35.037 20:14:30 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:35.037 20:14:30 -- scripts/common.sh@355 -- # echo 1 00:16:35.037 20:14:30 -- scripts/common.sh@365 -- # ver1[v]=1 00:16:35.037 20:14:30 -- scripts/common.sh@366 -- # decimal 2 00:16:35.037 20:14:30 -- scripts/common.sh@353 -- # local d=2 00:16:35.037 20:14:30 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:35.037 20:14:30 -- scripts/common.sh@355 -- # echo 2 00:16:35.037 20:14:30 -- scripts/common.sh@366 -- # ver2[v]=2 00:16:35.037 20:14:30 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:35.037 20:14:30 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:35.037 20:14:30 -- scripts/common.sh@368 -- # return 0 00:16:35.037 20:14:30 -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:35.037 20:14:30 -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:16:35.037 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:35.037 --rc genhtml_branch_coverage=1 00:16:35.037 --rc genhtml_function_coverage=1 00:16:35.037 --rc genhtml_legend=1 00:16:35.037 --rc geninfo_all_blocks=1 00:16:35.037 --rc geninfo_unexecuted_blocks=1 00:16:35.037 00:16:35.037 ' 00:16:35.037 20:14:30 -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:16:35.037 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:35.037 --rc genhtml_branch_coverage=1 00:16:35.037 --rc genhtml_function_coverage=1 00:16:35.037 --rc genhtml_legend=1 00:16:35.037 --rc geninfo_all_blocks=1 00:16:35.037 --rc geninfo_unexecuted_blocks=1 00:16:35.037 00:16:35.037 ' 00:16:35.037 20:14:30 -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:16:35.037 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:35.037 --rc genhtml_branch_coverage=1 00:16:35.037 --rc genhtml_function_coverage=1 00:16:35.037 --rc genhtml_legend=1 00:16:35.037 --rc geninfo_all_blocks=1 00:16:35.037 --rc geninfo_unexecuted_blocks=1 00:16:35.037 00:16:35.037 ' 00:16:35.037 20:14:30 -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:16:35.037 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:35.037 --rc genhtml_branch_coverage=1 00:16:35.037 --rc genhtml_function_coverage=1 00:16:35.037 --rc genhtml_legend=1 00:16:35.037 --rc geninfo_all_blocks=1 00:16:35.037 --rc geninfo_unexecuted_blocks=1 00:16:35.037 00:16:35.037 ' 00:16:35.037 20:14:30 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:35.037 20:14:30 -- nvmf/common.sh@7 -- # uname -s 00:16:35.037 20:14:30 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:35.037 20:14:30 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:35.037 20:14:30 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:35.037 20:14:30 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:35.037 20:14:30 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:35.037 20:14:30 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:35.037 20:14:30 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:35.037 20:14:30 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:35.037 20:14:30 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:35.037 20:14:30 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:35.037 20:14:30 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:aebd319b-9926-43bc-9bfe-64775317188f 00:16:35.037 
20:14:30 -- nvmf/common.sh@18 -- # NVME_HOSTID=aebd319b-9926-43bc-9bfe-64775317188f 00:16:35.037 20:14:30 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:35.037 20:14:30 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:35.037 20:14:30 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:16:35.037 20:14:30 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:35.037 20:14:30 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:35.037 20:14:30 -- scripts/common.sh@15 -- # shopt -s extglob 00:16:35.037 20:14:30 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:35.037 20:14:30 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:35.037 20:14:30 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:35.037 20:14:30 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:35.037 20:14:30 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:35.037 20:14:30 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:35.037 20:14:30 -- paths/export.sh@5 -- # export PATH 00:16:35.037 20:14:30 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:35.037 20:14:30 -- nvmf/common.sh@51 -- # : 0 00:16:35.037 20:14:30 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:35.037 20:14:30 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:35.037 20:14:30 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:35.037 20:14:30 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:35.037 20:14:30 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:35.037 20:14:30 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:35.037 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:35.037 20:14:30 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:35.037 20:14:30 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:35.037 20:14:30 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:35.037 20:14:30 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:16:35.037 20:14:30 -- spdk/autotest.sh@32 -- # uname -s 00:16:35.037 20:14:30 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:16:35.037 20:14:30 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:16:35.037 20:14:30 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:16:35.037 20:14:30 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:16:35.037 20:14:30 -- 
spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:16:35.037 20:14:30 -- spdk/autotest.sh@44 -- # modprobe nbd 00:16:35.037 20:14:30 -- spdk/autotest.sh@46 -- # type -P udevadm 00:16:35.037 20:14:30 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:16:35.037 20:14:30 -- spdk/autotest.sh@48 -- # udevadm_pid=54582 00:16:35.037 20:14:30 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:16:35.037 20:14:30 -- pm/common@17 -- # local monitor 00:16:35.037 20:14:30 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:16:35.037 20:14:30 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:16:35.037 20:14:30 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:16:35.037 20:14:30 -- pm/common@25 -- # sleep 1 00:16:35.037 20:14:30 -- pm/common@21 -- # date +%s 00:16:35.037 20:14:30 -- pm/common@21 -- # date +%s 00:16:35.037 20:14:30 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1727813670 00:16:35.037 20:14:30 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1727813670 00:16:35.037 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1727813670_collect-vmstat.pm.log 00:16:35.037 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1727813670_collect-cpu-load.pm.log 00:16:35.986 20:14:31 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:16:35.986 20:14:31 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:16:35.986 20:14:31 -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:35.986 20:14:31 -- common/autotest_common.sh@10 -- # set +x 00:16:35.986 20:14:31 -- spdk/autotest.sh@59 -- # create_test_list 00:16:35.986 20:14:31 -- common/autotest_common.sh@748 -- # xtrace_disable 00:16:35.986 20:14:31 -- common/autotest_common.sh@10 -- # set +x 00:16:36.247 20:14:31 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:16:36.247 20:14:31 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:16:36.247 20:14:31 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:16:36.247 20:14:31 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:16:36.247 20:14:31 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:16:36.247 20:14:31 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:16:36.247 20:14:31 -- common/autotest_common.sh@1455 -- # uname 00:16:36.247 20:14:31 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:16:36.247 20:14:31 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:16:36.247 20:14:31 -- common/autotest_common.sh@1475 -- # uname 00:16:36.247 20:14:31 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:16:36.247 20:14:31 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:16:36.247 20:14:31 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:16:36.247 lcov: LCOV version 1.15 00:16:36.247 20:14:31 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc 
geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:16:51.217 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:16:51.217 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:17:03.450 20:14:58 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:17:03.450 20:14:58 -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:03.450 20:14:58 -- common/autotest_common.sh@10 -- # set +x 00:17:03.450 20:14:58 -- spdk/autotest.sh@78 -- # rm -f 00:17:03.450 20:14:58 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:17:03.711 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:03.970 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:17:03.970 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:17:03.970 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:17:04.232 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:17:04.232 20:14:59 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:17:04.232 20:14:59 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:17:04.232 20:14:59 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:17:04.232 20:14:59 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:17:04.232 20:14:59 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:17:04.232 20:14:59 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:17:04.232 20:14:59 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:17:04.232 20:14:59 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:17:04.232 20:14:59 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:17:04.232 20:14:59 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:17:04.232 20:14:59 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:17:04.232 20:14:59 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:17:04.232 20:14:59 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:17:04.232 20:14:59 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:17:04.232 20:14:59 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:17:04.232 20:14:59 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n1 00:17:04.232 20:14:59 -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:17:04.232 20:14:59 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:17:04.232 20:14:59 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:17:04.232 20:14:59 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:17:04.232 20:14:59 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n2 00:17:04.232 20:14:59 -- common/autotest_common.sh@1648 -- # local device=nvme2n2 00:17:04.232 20:14:59 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:17:04.232 20:14:59 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:17:04.232 20:14:59 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:17:04.232 20:14:59 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n3 00:17:04.232 20:14:59 -- common/autotest_common.sh@1648 -- # local device=nvme2n3 00:17:04.232 20:14:59 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:17:04.232 20:14:59 
-- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:17:04.232 20:14:59 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:17:04.232 20:14:59 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3c3n1 00:17:04.232 20:14:59 -- common/autotest_common.sh@1648 -- # local device=nvme3c3n1 00:17:04.232 20:14:59 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:17:04.232 20:14:59 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:17:04.232 20:14:59 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:17:04.232 20:14:59 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n1 00:17:04.232 20:14:59 -- common/autotest_common.sh@1648 -- # local device=nvme3n1 00:17:04.232 20:14:59 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:17:04.232 20:14:59 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:17:04.232 20:14:59 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:17:04.232 20:14:59 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:17:04.232 20:14:59 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:17:04.232 20:14:59 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:17:04.232 20:14:59 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:17:04.232 20:14:59 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:17:04.232 No valid GPT data, bailing 00:17:04.232 20:14:59 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:17:04.232 20:14:59 -- scripts/common.sh@394 -- # pt= 00:17:04.232 20:14:59 -- scripts/common.sh@395 -- # return 1 00:17:04.232 20:14:59 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:17:04.232 1+0 records in 00:17:04.232 1+0 records out 00:17:04.232 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0123723 s, 84.8 MB/s 00:17:04.232 20:14:59 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:17:04.232 20:14:59 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:17:04.232 20:14:59 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:17:04.232 20:14:59 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:17:04.232 20:14:59 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:17:04.232 No valid GPT data, bailing 00:17:04.232 20:14:59 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:17:04.232 20:14:59 -- scripts/common.sh@394 -- # pt= 00:17:04.232 20:14:59 -- scripts/common.sh@395 -- # return 1 00:17:04.232 20:14:59 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:17:04.232 1+0 records in 00:17:04.232 1+0 records out 00:17:04.232 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00355927 s, 295 MB/s 00:17:04.232 20:14:59 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:17:04.232 20:14:59 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:17:04.232 20:14:59 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n1 00:17:04.233 20:14:59 -- scripts/common.sh@381 -- # local block=/dev/nvme2n1 pt 00:17:04.233 20:14:59 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:17:04.233 No valid GPT data, bailing 00:17:04.233 20:14:59 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:17:04.233 20:14:59 -- scripts/common.sh@394 -- # pt= 00:17:04.233 20:14:59 -- scripts/common.sh@395 -- # return 1 00:17:04.233 20:14:59 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:17:04.233 1+0 
records in 00:17:04.233 1+0 records out 00:17:04.233 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00251064 s, 418 MB/s 00:17:04.233 20:14:59 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:17:04.233 20:14:59 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:17:04.233 20:14:59 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n2 00:17:04.233 20:14:59 -- scripts/common.sh@381 -- # local block=/dev/nvme2n2 pt 00:17:04.233 20:14:59 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n2 00:17:04.233 No valid GPT data, bailing 00:17:04.233 20:14:59 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:17:04.494 20:14:59 -- scripts/common.sh@394 -- # pt= 00:17:04.494 20:14:59 -- scripts/common.sh@395 -- # return 1 00:17:04.494 20:14:59 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n2 bs=1M count=1 00:17:04.494 1+0 records in 00:17:04.494 1+0 records out 00:17:04.494 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00391304 s, 268 MB/s 00:17:04.494 20:14:59 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:17:04.494 20:14:59 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:17:04.494 20:14:59 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n3 00:17:04.494 20:14:59 -- scripts/common.sh@381 -- # local block=/dev/nvme2n3 pt 00:17:04.494 20:14:59 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n3 00:17:04.494 No valid GPT data, bailing 00:17:04.494 20:14:59 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:17:04.494 20:14:59 -- scripts/common.sh@394 -- # pt= 00:17:04.494 20:14:59 -- scripts/common.sh@395 -- # return 1 00:17:04.494 20:14:59 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n3 bs=1M count=1 00:17:04.494 1+0 records in 00:17:04.494 1+0 records out 00:17:04.494 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00227509 s, 461 MB/s 00:17:04.494 20:14:59 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:17:04.494 20:14:59 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:17:04.494 20:14:59 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n1 00:17:04.494 20:14:59 -- scripts/common.sh@381 -- # local block=/dev/nvme3n1 pt 00:17:04.494 20:14:59 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:17:04.494 No valid GPT data, bailing 00:17:04.494 20:14:59 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:17:04.494 20:14:59 -- scripts/common.sh@394 -- # pt= 00:17:04.494 20:14:59 -- scripts/common.sh@395 -- # return 1 00:17:04.494 20:14:59 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:17:04.494 1+0 records in 00:17:04.494 1+0 records out 00:17:04.494 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00473622 s, 221 MB/s 00:17:04.494 20:14:59 -- spdk/autotest.sh@105 -- # sync 00:17:04.755 20:14:59 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:17:04.755 20:14:59 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:17:04.755 20:14:59 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:17:06.159 20:15:01 -- spdk/autotest.sh@111 -- # uname -s 00:17:06.159 20:15:01 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:17:06.159 20:15:01 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:17:06.159 20:15:01 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:17:06.419 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:07.040 
Hugepages 00:17:07.040 node hugesize free / total 00:17:07.040 node0 1048576kB 0 / 0 00:17:07.040 node0 2048kB 0 / 0 00:17:07.040 00:17:07.040 Type BDF Vendor Device NUMA Driver Device Block devices 00:17:07.040 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:17:07.041 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:17:07.041 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:17:07.041 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:17:07.301 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:17:07.301 20:15:02 -- spdk/autotest.sh@117 -- # uname -s 00:17:07.301 20:15:02 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:17:07.301 20:15:02 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:17:07.301 20:15:02 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:17:07.562 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:08.135 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:17:08.135 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:17:08.135 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:17:08.135 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:17:08.135 20:15:03 -- common/autotest_common.sh@1515 -- # sleep 1 00:17:09.518 20:15:04 -- common/autotest_common.sh@1516 -- # bdfs=() 00:17:09.518 20:15:04 -- common/autotest_common.sh@1516 -- # local bdfs 00:17:09.518 20:15:04 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:17:09.518 20:15:04 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:17:09.518 20:15:04 -- common/autotest_common.sh@1496 -- # bdfs=() 00:17:09.518 20:15:04 -- common/autotest_common.sh@1496 -- # local bdfs 00:17:09.518 20:15:04 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:17:09.518 20:15:04 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:17:09.518 20:15:04 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:17:09.518 20:15:04 -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:17:09.518 20:15:04 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:17:09.518 20:15:04 -- common/autotest_common.sh@1520 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:17:09.518 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:09.781 Waiting for block devices as requested 00:17:09.781 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:17:09.781 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:17:09.781 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:17:10.041 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:17:15.337 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:17:15.337 20:15:10 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:17:15.337 20:15:10 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:17:15.337 20:15:10 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:17:15.337 20:15:10 -- common/autotest_common.sh@1485 -- # grep 0000:00:10.0/nvme/nvme 00:17:15.337 20:15:10 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:17:15.337 20:15:10 -- common/autotest_common.sh@1486 -- # 
[[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:17:15.337 20:15:10 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:17:15.337 20:15:10 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme1 00:17:15.337 20:15:10 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme1 00:17:15.337 20:15:10 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme1 ]] 00:17:15.337 20:15:10 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme1 00:17:15.337 20:15:10 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:17:15.337 20:15:10 -- common/autotest_common.sh@1529 -- # grep oacs 00:17:15.337 20:15:10 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:17:15.337 20:15:10 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:17:15.337 20:15:10 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:17:15.337 20:15:10 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme1 00:17:15.337 20:15:10 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:17:15.337 20:15:10 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:17:15.337 20:15:10 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:17:15.337 20:15:10 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:17:15.337 20:15:10 -- common/autotest_common.sh@1541 -- # continue 00:17:15.337 20:15:10 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:17:15.337 20:15:10 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:17:15.337 20:15:10 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:17:15.337 20:15:10 -- common/autotest_common.sh@1485 -- # grep 0000:00:11.0/nvme/nvme 00:17:15.337 20:15:10 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:17:15.337 20:15:10 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:17:15.337 20:15:10 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:17:15.337 20:15:10 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:17:15.337 20:15:10 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:17:15.337 20:15:10 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:17:15.337 20:15:10 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:17:15.337 20:15:10 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:17:15.337 20:15:10 -- common/autotest_common.sh@1529 -- # grep oacs 00:17:15.337 20:15:10 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:17:15.338 20:15:10 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:17:15.338 20:15:10 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:17:15.338 20:15:10 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:17:15.338 20:15:10 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:17:15.338 20:15:10 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:17:15.338 20:15:10 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:17:15.338 20:15:10 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:17:15.338 20:15:10 -- common/autotest_common.sh@1541 -- # continue 00:17:15.338 20:15:10 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:17:15.338 20:15:10 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:17:15.338 20:15:10 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 
/sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:17:15.338 20:15:10 -- common/autotest_common.sh@1485 -- # grep 0000:00:12.0/nvme/nvme 00:17:15.338 20:15:10 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:17:15.338 20:15:10 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:17:15.338 20:15:10 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:17:15.338 20:15:10 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme2 00:17:15.338 20:15:10 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme2 00:17:15.338 20:15:10 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme2 ]] 00:17:15.338 20:15:10 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme2 00:17:15.338 20:15:10 -- common/autotest_common.sh@1529 -- # grep oacs 00:17:15.338 20:15:10 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:17:15.338 20:15:10 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:17:15.338 20:15:10 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:17:15.338 20:15:10 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:17:15.338 20:15:10 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:17:15.338 20:15:10 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:17:15.338 20:15:10 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme2 00:17:15.338 20:15:10 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:17:15.338 20:15:10 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:17:15.338 20:15:10 -- common/autotest_common.sh@1541 -- # continue 00:17:15.338 20:15:10 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:17:15.338 20:15:10 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:17:15.338 20:15:10 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:17:15.338 20:15:10 -- common/autotest_common.sh@1485 -- # grep 0000:00:13.0/nvme/nvme 00:17:15.338 20:15:10 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:17:15.338 20:15:10 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:17:15.338 20:15:10 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:17:15.338 20:15:10 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme3 00:17:15.338 20:15:10 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme3 00:17:15.338 20:15:10 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme3 ]] 00:17:15.338 20:15:10 -- common/autotest_common.sh@1529 -- # grep oacs 00:17:15.338 20:15:10 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme3 00:17:15.338 20:15:10 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:17:15.338 20:15:10 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:17:15.338 20:15:10 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:17:15.338 20:15:10 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:17:15.338 20:15:10 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme3 00:17:15.338 20:15:10 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:17:15.338 20:15:10 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:17:15.338 20:15:10 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:17:15.338 20:15:10 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 
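Each controller above gets the same two-step probe: "nvme id-ctrl" piped through grep and cut to read the OACS word (bit 3 advertises namespace management), then the same command to read unvmcap, and the loop continues past controllers that either cannot manage namespaces or report no unallocated capacity. A bash sketch of that per-controller decision, using only the commands visible in the trace; the wrapper function and its name are illustrative:

#!/usr/bin/env bash
# Hedged reconstruction of the per-controller check in the xtrace above.
# Succeeds only when the controller supports namespace management (OACS
# bit 3) and still reports unallocated NVM capacity; the failure path
# corresponds to the 'continue' seen in the log.
needs_revert() {
  local ctrlr=$1 oacs unvmcap
  oacs=$(nvme id-ctrl "$ctrlr" | grep oacs | cut -d: -f2)        # e.g. ' 0x12a'
  (( (oacs & 0x8) != 0 )) || return 1                            # no namespace management
  unvmcap=$(nvme id-ctrl "$ctrlr" | grep unvmcap | cut -d: -f2)  # e.g. ' 0'
  (( unvmcap != 0 ))                                             # 0 => fully allocated, skip
}
# Usage, mirroring the loop in the trace:
#   for ctrlr in /dev/nvme0 /dev/nvme1 /dev/nvme2 /dev/nvme3; do
#     needs_revert "$ctrlr" || continue
#     ... perform the namespace revert ...
#   done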
00:17:15.338 20:15:10 -- common/autotest_common.sh@1541 -- # continue 00:17:15.338 20:15:10 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:17:15.338 20:15:10 -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:15.338 20:15:10 -- common/autotest_common.sh@10 -- # set +x 00:17:15.338 20:15:10 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:17:15.338 20:15:10 -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:15.338 20:15:10 -- common/autotest_common.sh@10 -- # set +x 00:17:15.338 20:15:10 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:17:15.596 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:16.161 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:17:16.161 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:17:16.161 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:17:16.161 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:17:16.161 20:15:11 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:17:16.161 20:15:11 -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:16.161 20:15:11 -- common/autotest_common.sh@10 -- # set +x 00:17:16.161 20:15:11 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:17:16.161 20:15:11 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:17:16.161 20:15:11 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:17:16.161 20:15:11 -- common/autotest_common.sh@1561 -- # bdfs=() 00:17:16.161 20:15:11 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:17:16.161 20:15:11 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:17:16.161 20:15:11 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:17:16.161 20:15:11 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:17:16.161 20:15:11 -- common/autotest_common.sh@1496 -- # bdfs=() 00:17:16.161 20:15:11 -- common/autotest_common.sh@1496 -- # local bdfs 00:17:16.161 20:15:11 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:17:16.161 20:15:11 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:17:16.161 20:15:11 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:17:16.161 20:15:11 -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:17:16.161 20:15:11 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:17:16.161 20:15:11 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:17:16.161 20:15:11 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:17:16.161 20:15:11 -- common/autotest_common.sh@1564 -- # device=0x0010 00:17:16.161 20:15:11 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:17:16.161 20:15:11 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:17:16.161 20:15:11 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:17:16.161 20:15:11 -- common/autotest_common.sh@1564 -- # device=0x0010 00:17:16.161 20:15:11 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:17:16.161 20:15:11 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:17:16.161 20:15:11 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:17:16.161 20:15:11 -- common/autotest_common.sh@1564 -- # device=0x0010 00:17:16.161 20:15:11 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 
00:17:16.161 20:15:11 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:17:16.161 20:15:11 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:17:16.161 20:15:11 -- common/autotest_common.sh@1564 -- # device=0x0010 00:17:16.161 20:15:11 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:17:16.161 20:15:11 -- common/autotest_common.sh@1570 -- # (( 0 > 0 )) 00:17:16.161 20:15:11 -- common/autotest_common.sh@1570 -- # return 0 00:17:16.161 20:15:11 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:17:16.161 20:15:11 -- common/autotest_common.sh@1578 -- # return 0 00:17:16.161 20:15:11 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:17:16.161 20:15:11 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:17:16.161 20:15:11 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:17:16.161 20:15:11 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:17:16.161 20:15:11 -- spdk/autotest.sh@149 -- # timing_enter lib 00:17:16.161 20:15:11 -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:16.161 20:15:11 -- common/autotest_common.sh@10 -- # set +x 00:17:16.161 20:15:11 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:17:16.161 20:15:11 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:17:16.161 20:15:11 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:17:16.161 20:15:11 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:16.161 20:15:11 -- common/autotest_common.sh@10 -- # set +x 00:17:16.161 ************************************ 00:17:16.161 START TEST env 00:17:16.161 ************************************ 00:17:16.161 20:15:11 env -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:17:16.419 * Looking for test storage... 00:17:16.419 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:17:16.419 20:15:11 env -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:17:16.419 20:15:11 env -- common/autotest_common.sh@1681 -- # lcov --version 00:17:16.419 20:15:11 env -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:17:16.419 20:15:11 env -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:17:16.419 20:15:11 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:16.419 20:15:11 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:16.419 20:15:11 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:16.419 20:15:11 env -- scripts/common.sh@336 -- # IFS=.-: 00:17:16.419 20:15:11 env -- scripts/common.sh@336 -- # read -ra ver1 00:17:16.419 20:15:11 env -- scripts/common.sh@337 -- # IFS=.-: 00:17:16.419 20:15:11 env -- scripts/common.sh@337 -- # read -ra ver2 00:17:16.419 20:15:11 env -- scripts/common.sh@338 -- # local 'op=<' 00:17:16.419 20:15:11 env -- scripts/common.sh@340 -- # ver1_l=2 00:17:16.419 20:15:11 env -- scripts/common.sh@341 -- # ver2_l=1 00:17:16.419 20:15:11 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:16.419 20:15:11 env -- scripts/common.sh@344 -- # case "$op" in 00:17:16.419 20:15:11 env -- scripts/common.sh@345 -- # : 1 00:17:16.419 20:15:11 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:16.419 20:15:11 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:16.419 20:15:11 env -- scripts/common.sh@365 -- # decimal 1 00:17:16.419 20:15:11 env -- scripts/common.sh@353 -- # local d=1 00:17:16.419 20:15:11 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:16.419 20:15:11 env -- scripts/common.sh@355 -- # echo 1 00:17:16.419 20:15:11 env -- scripts/common.sh@365 -- # ver1[v]=1 00:17:16.419 20:15:11 env -- scripts/common.sh@366 -- # decimal 2 00:17:16.419 20:15:11 env -- scripts/common.sh@353 -- # local d=2 00:17:16.419 20:15:11 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:16.419 20:15:11 env -- scripts/common.sh@355 -- # echo 2 00:17:16.419 20:15:11 env -- scripts/common.sh@366 -- # ver2[v]=2 00:17:16.419 20:15:11 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:16.419 20:15:11 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:16.419 20:15:11 env -- scripts/common.sh@368 -- # return 0 00:17:16.419 20:15:11 env -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:16.419 20:15:11 env -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:17:16.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:16.419 --rc genhtml_branch_coverage=1 00:17:16.419 --rc genhtml_function_coverage=1 00:17:16.419 --rc genhtml_legend=1 00:17:16.419 --rc geninfo_all_blocks=1 00:17:16.419 --rc geninfo_unexecuted_blocks=1 00:17:16.419 00:17:16.419 ' 00:17:16.419 20:15:11 env -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:17:16.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:16.419 --rc genhtml_branch_coverage=1 00:17:16.419 --rc genhtml_function_coverage=1 00:17:16.419 --rc genhtml_legend=1 00:17:16.419 --rc geninfo_all_blocks=1 00:17:16.419 --rc geninfo_unexecuted_blocks=1 00:17:16.419 00:17:16.419 ' 00:17:16.419 20:15:11 env -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:17:16.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:16.419 --rc genhtml_branch_coverage=1 00:17:16.419 --rc genhtml_function_coverage=1 00:17:16.419 --rc genhtml_legend=1 00:17:16.419 --rc geninfo_all_blocks=1 00:17:16.419 --rc geninfo_unexecuted_blocks=1 00:17:16.419 00:17:16.419 ' 00:17:16.419 20:15:11 env -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:17:16.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:16.419 --rc genhtml_branch_coverage=1 00:17:16.419 --rc genhtml_function_coverage=1 00:17:16.419 --rc genhtml_legend=1 00:17:16.419 --rc geninfo_all_blocks=1 00:17:16.419 --rc geninfo_unexecuted_blocks=1 00:17:16.419 00:17:16.419 ' 00:17:16.419 20:15:11 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:17:16.419 20:15:11 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:17:16.419 20:15:11 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:16.419 20:15:11 env -- common/autotest_common.sh@10 -- # set +x 00:17:16.419 ************************************ 00:17:16.419 START TEST env_memory 00:17:16.419 ************************************ 00:17:16.419 20:15:11 env.env_memory -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:17:16.419 00:17:16.419 00:17:16.419 CUnit - A unit testing framework for C - Version 2.1-3 00:17:16.419 http://cunit.sourceforge.net/ 00:17:16.419 00:17:16.419 00:17:16.419 Suite: memory 00:17:16.419 Test: alloc and free memory map ...[2024-10-01 20:15:11.505254] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:17:16.419 passed 00:17:16.419 Test: mem map translation ...[2024-10-01 20:15:11.535180] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:17:16.419 [2024-10-01 20:15:11.535244] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:17:16.419 [2024-10-01 20:15:11.535294] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:17:16.419 [2024-10-01 20:15:11.535307] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:17:16.419 passed 00:17:16.419 Test: mem map registration ...[2024-10-01 20:15:11.589644] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:17:16.419 [2024-10-01 20:15:11.589703] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:17:16.419 passed 00:17:16.678 Test: mem map adjacent registrations ...passed 00:17:16.678 00:17:16.678 Run Summary: Type Total Ran Passed Failed Inactive 00:17:16.678 suites 1 1 n/a 0 0 00:17:16.678 tests 4 4 4 0 0 00:17:16.678 asserts 152 152 152 0 n/a 00:17:16.678 00:17:16.678 Elapsed time = 0.186 seconds 00:17:16.678 00:17:16.678 real 0m0.214s 00:17:16.678 user 0m0.193s 00:17:16.678 sys 0m0.016s 00:17:16.678 20:15:11 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:16.678 20:15:11 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:17:16.678 ************************************ 00:17:16.678 END TEST env_memory 00:17:16.678 ************************************ 00:17:16.678 20:15:11 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:17:16.678 20:15:11 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:17:16.678 20:15:11 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:16.678 20:15:11 env -- common/autotest_common.sh@10 -- # set +x 00:17:16.678 ************************************ 00:17:16.678 START TEST env_vtophys 00:17:16.678 ************************************ 00:17:16.678 20:15:11 env.env_vtophys -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:17:16.678 EAL: lib.eal log level changed from notice to debug 00:17:16.678 EAL: Detected lcore 0 as core 0 on socket 0 00:17:16.678 EAL: Detected lcore 1 as core 0 on socket 0 00:17:16.678 EAL: Detected lcore 2 as core 0 on socket 0 00:17:16.678 EAL: Detected lcore 3 as core 0 on socket 0 00:17:16.678 EAL: Detected lcore 4 as core 0 on socket 0 00:17:16.678 EAL: Detected lcore 5 as core 0 on socket 0 00:17:16.678 EAL: Detected lcore 6 as core 0 on socket 0 00:17:16.678 EAL: Detected lcore 7 as core 0 on socket 0 00:17:16.678 EAL: Detected lcore 8 as core 0 on socket 0 00:17:16.678 EAL: Detected lcore 9 as core 0 on socket 0 00:17:16.678 EAL: Maximum logical cores by configuration: 128 00:17:16.678 EAL: Detected CPU lcores: 10 00:17:16.678 EAL: Detected NUMA nodes: 1 00:17:16.678 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:17:16.678 EAL: Detected shared linkage of DPDK 00:17:16.678 EAL: No 
shared files mode enabled, IPC will be disabled 00:17:16.678 EAL: Selected IOVA mode 'PA' 00:17:16.678 EAL: Probing VFIO support... 00:17:16.678 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:17:16.678 EAL: VFIO modules not loaded, skipping VFIO support... 00:17:16.678 EAL: Ask a virtual area of 0x2e000 bytes 00:17:16.678 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:17:16.678 EAL: Setting up physically contiguous memory... 00:17:16.678 EAL: Setting maximum number of open files to 524288 00:17:16.678 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:17:16.678 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:17:16.678 EAL: Ask a virtual area of 0x61000 bytes 00:17:16.678 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:17:16.678 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:17:16.678 EAL: Ask a virtual area of 0x400000000 bytes 00:17:16.678 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:17:16.678 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:17:16.678 EAL: Ask a virtual area of 0x61000 bytes 00:17:16.678 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:17:16.678 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:17:16.678 EAL: Ask a virtual area of 0x400000000 bytes 00:17:16.678 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:17:16.678 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:17:16.678 EAL: Ask a virtual area of 0x61000 bytes 00:17:16.678 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:17:16.678 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:17:16.678 EAL: Ask a virtual area of 0x400000000 bytes 00:17:16.678 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:17:16.678 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:17:16.678 EAL: Ask a virtual area of 0x61000 bytes 00:17:16.678 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:17:16.678 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:17:16.678 EAL: Ask a virtual area of 0x400000000 bytes 00:17:16.678 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:17:16.678 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:17:16.678 EAL: Hugepages will be freed exactly as allocated. 00:17:16.678 EAL: No shared files mode enabled, IPC is disabled 00:17:16.678 EAL: No shared files mode enabled, IPC is disabled 00:17:16.678 EAL: TSC frequency is ~2600000 KHz 00:17:16.678 EAL: Main lcore 0 is ready (tid=7fa721b1aa40;cpuset=[0]) 00:17:16.678 EAL: Trying to obtain current memory policy. 00:17:16.678 EAL: Setting policy MPOL_PREFERRED for socket 0 00:17:16.678 EAL: Restoring previous memory policy: 0 00:17:16.678 EAL: request: mp_malloc_sync 00:17:16.678 EAL: No shared files mode enabled, IPC is disabled 00:17:16.678 EAL: Heap on socket 0 was expanded by 2MB 00:17:16.678 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:17:16.678 EAL: No PCI address specified using 'addr=' in: bus=pci 00:17:16.678 EAL: Mem event callback 'spdk:(nil)' registered 00:17:16.678 EAL: Module /sys/module/vfio_pci not found! 
error 2 (No such file or directory) 00:17:16.678 00:17:16.678 00:17:16.678 CUnit - A unit testing framework for C - Version 2.1-3 00:17:16.678 http://cunit.sourceforge.net/ 00:17:16.678 00:17:16.678 00:17:16.678 Suite: components_suite 00:17:17.244 Test: vtophys_malloc_test ...passed 00:17:17.244 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:17:17.244 EAL: Setting policy MPOL_PREFERRED for socket 0 00:17:17.244 EAL: Restoring previous memory policy: 4 00:17:17.244 EAL: Calling mem event callback 'spdk:(nil)' 00:17:17.244 EAL: request: mp_malloc_sync 00:17:17.244 EAL: No shared files mode enabled, IPC is disabled 00:17:17.244 EAL: Heap on socket 0 was expanded by 4MB 00:17:17.244 EAL: Calling mem event callback 'spdk:(nil)' 00:17:17.244 EAL: request: mp_malloc_sync 00:17:17.244 EAL: No shared files mode enabled, IPC is disabled 00:17:17.244 EAL: Heap on socket 0 was shrunk by 4MB 00:17:17.244 EAL: Trying to obtain current memory policy. 00:17:17.244 EAL: Setting policy MPOL_PREFERRED for socket 0 00:17:17.244 EAL: Restoring previous memory policy: 4 00:17:17.244 EAL: Calling mem event callback 'spdk:(nil)' 00:17:17.244 EAL: request: mp_malloc_sync 00:17:17.244 EAL: No shared files mode enabled, IPC is disabled 00:17:17.244 EAL: Heap on socket 0 was expanded by 6MB 00:17:17.244 EAL: Calling mem event callback 'spdk:(nil)' 00:17:17.244 EAL: request: mp_malloc_sync 00:17:17.244 EAL: No shared files mode enabled, IPC is disabled 00:17:17.244 EAL: Heap on socket 0 was shrunk by 6MB 00:17:17.244 EAL: Trying to obtain current memory policy. 00:17:17.244 EAL: Setting policy MPOL_PREFERRED for socket 0 00:17:17.244 EAL: Restoring previous memory policy: 4 00:17:17.244 EAL: Calling mem event callback 'spdk:(nil)' 00:17:17.244 EAL: request: mp_malloc_sync 00:17:17.244 EAL: No shared files mode enabled, IPC is disabled 00:17:17.244 EAL: Heap on socket 0 was expanded by 10MB 00:17:17.244 EAL: Calling mem event callback 'spdk:(nil)' 00:17:17.244 EAL: request: mp_malloc_sync 00:17:17.244 EAL: No shared files mode enabled, IPC is disabled 00:17:17.244 EAL: Heap on socket 0 was shrunk by 10MB 00:17:17.244 EAL: Trying to obtain current memory policy. 00:17:17.244 EAL: Setting policy MPOL_PREFERRED for socket 0 00:17:17.244 EAL: Restoring previous memory policy: 4 00:17:17.244 EAL: Calling mem event callback 'spdk:(nil)' 00:17:17.244 EAL: request: mp_malloc_sync 00:17:17.244 EAL: No shared files mode enabled, IPC is disabled 00:17:17.244 EAL: Heap on socket 0 was expanded by 18MB 00:17:17.244 EAL: Calling mem event callback 'spdk:(nil)' 00:17:17.244 EAL: request: mp_malloc_sync 00:17:17.244 EAL: No shared files mode enabled, IPC is disabled 00:17:17.244 EAL: Heap on socket 0 was shrunk by 18MB 00:17:17.244 EAL: Trying to obtain current memory policy. 00:17:17.244 EAL: Setting policy MPOL_PREFERRED for socket 0 00:17:17.244 EAL: Restoring previous memory policy: 4 00:17:17.244 EAL: Calling mem event callback 'spdk:(nil)' 00:17:17.244 EAL: request: mp_malloc_sync 00:17:17.244 EAL: No shared files mode enabled, IPC is disabled 00:17:17.244 EAL: Heap on socket 0 was expanded by 34MB 00:17:17.244 EAL: Calling mem event callback 'spdk:(nil)' 00:17:17.244 EAL: request: mp_malloc_sync 00:17:17.244 EAL: No shared files mode enabled, IPC is disabled 00:17:17.244 EAL: Heap on socket 0 was shrunk by 34MB 00:17:17.244 EAL: Trying to obtain current memory policy. 
00:17:17.244 EAL: Setting policy MPOL_PREFERRED for socket 0 00:17:17.244 EAL: Restoring previous memory policy: 4 00:17:17.244 EAL: Calling mem event callback 'spdk:(nil)' 00:17:17.244 EAL: request: mp_malloc_sync 00:17:17.244 EAL: No shared files mode enabled, IPC is disabled 00:17:17.244 EAL: Heap on socket 0 was expanded by 66MB 00:17:17.244 EAL: Calling mem event callback 'spdk:(nil)' 00:17:17.244 EAL: request: mp_malloc_sync 00:17:17.244 EAL: No shared files mode enabled, IPC is disabled 00:17:17.244 EAL: Heap on socket 0 was shrunk by 66MB 00:17:17.502 EAL: Trying to obtain current memory policy. 00:17:17.502 EAL: Setting policy MPOL_PREFERRED for socket 0 00:17:17.502 EAL: Restoring previous memory policy: 4 00:17:17.502 EAL: Calling mem event callback 'spdk:(nil)' 00:17:17.502 EAL: request: mp_malloc_sync 00:17:17.502 EAL: No shared files mode enabled, IPC is disabled 00:17:17.502 EAL: Heap on socket 0 was expanded by 130MB 00:17:17.502 EAL: Calling mem event callback 'spdk:(nil)' 00:17:17.502 EAL: request: mp_malloc_sync 00:17:17.502 EAL: No shared files mode enabled, IPC is disabled 00:17:17.502 EAL: Heap on socket 0 was shrunk by 130MB 00:17:17.760 EAL: Trying to obtain current memory policy. 00:17:17.760 EAL: Setting policy MPOL_PREFERRED for socket 0 00:17:17.760 EAL: Restoring previous memory policy: 4 00:17:17.760 EAL: Calling mem event callback 'spdk:(nil)' 00:17:17.760 EAL: request: mp_malloc_sync 00:17:17.760 EAL: No shared files mode enabled, IPC is disabled 00:17:17.760 EAL: Heap on socket 0 was expanded by 258MB 00:17:18.026 EAL: Calling mem event callback 'spdk:(nil)' 00:17:18.026 EAL: request: mp_malloc_sync 00:17:18.026 EAL: No shared files mode enabled, IPC is disabled 00:17:18.026 EAL: Heap on socket 0 was shrunk by 258MB 00:17:18.287 EAL: Trying to obtain current memory policy. 00:17:18.287 EAL: Setting policy MPOL_PREFERRED for socket 0 00:17:18.547 EAL: Restoring previous memory policy: 4 00:17:18.547 EAL: Calling mem event callback 'spdk:(nil)' 00:17:18.547 EAL: request: mp_malloc_sync 00:17:18.547 EAL: No shared files mode enabled, IPC is disabled 00:17:18.547 EAL: Heap on socket 0 was expanded by 514MB 00:17:19.111 EAL: Calling mem event callback 'spdk:(nil)' 00:17:19.111 EAL: request: mp_malloc_sync 00:17:19.111 EAL: No shared files mode enabled, IPC is disabled 00:17:19.111 EAL: Heap on socket 0 was shrunk by 514MB 00:17:19.675 EAL: Trying to obtain current memory policy. 
00:17:19.675 EAL: Setting policy MPOL_PREFERRED for socket 0 00:17:19.675 EAL: Restoring previous memory policy: 4 00:17:19.675 EAL: Calling mem event callback 'spdk:(nil)' 00:17:19.675 EAL: request: mp_malloc_sync 00:17:19.675 EAL: No shared files mode enabled, IPC is disabled 00:17:19.675 EAL: Heap on socket 0 was expanded by 1026MB 00:17:21.052 EAL: Calling mem event callback 'spdk:(nil)' 00:17:21.053 EAL: request: mp_malloc_sync 00:17:21.053 EAL: No shared files mode enabled, IPC is disabled 00:17:21.053 EAL: Heap on socket 0 was shrunk by 1026MB 00:17:21.986 passed 00:17:21.986 00:17:21.986 Run Summary: Type Total Ran Passed Failed Inactive 00:17:21.986 suites 1 1 n/a 0 0 00:17:21.986 tests 2 2 2 0 0 00:17:21.986 asserts 5922 5922 5922 0 n/a 00:17:21.986 00:17:21.986 Elapsed time = 4.949 seconds 00:17:21.986 EAL: Calling mem event callback 'spdk:(nil)' 00:17:21.986 EAL: request: mp_malloc_sync 00:17:21.986 EAL: No shared files mode enabled, IPC is disabled 00:17:21.986 EAL: Heap on socket 0 was shrunk by 2MB 00:17:21.986 EAL: No shared files mode enabled, IPC is disabled 00:17:21.986 EAL: No shared files mode enabled, IPC is disabled 00:17:21.986 EAL: No shared files mode enabled, IPC is disabled 00:17:21.986 00:17:21.986 real 0m5.187s 00:17:21.986 user 0m4.410s 00:17:21.986 sys 0m0.626s 00:17:21.986 20:15:16 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:21.986 20:15:16 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:17:21.986 ************************************ 00:17:21.986 END TEST env_vtophys 00:17:21.986 ************************************ 00:17:21.986 20:15:16 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:17:21.986 20:15:16 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:17:21.986 20:15:16 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:21.986 20:15:16 env -- common/autotest_common.sh@10 -- # set +x 00:17:21.986 ************************************ 00:17:21.986 START TEST env_pci 00:17:21.986 ************************************ 00:17:21.986 20:15:16 env.env_pci -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:17:21.986 00:17:21.986 00:17:21.986 CUnit - A unit testing framework for C - Version 2.1-3 00:17:21.986 http://cunit.sourceforge.net/ 00:17:21.986 00:17:21.986 00:17:21.986 Suite: pci 00:17:21.986 Test: pci_hook ...[2024-10-01 20:15:16.963368] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1049:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 57317 has claimed it 00:17:21.986 EAL: Cannot find device (10000:00:01.0) 00:17:21.986 passed 00:17:21.986 00:17:21.986 Run Summary: Type Total Ran Passed Failed Inactive 00:17:21.986 suites 1 1 n/a 0 0 00:17:21.986 tests 1 1 1 0 0 00:17:21.986 asserts 25 25 25 0 n/a 00:17:21.986 00:17:21.986 Elapsed time = 0.007 seconds 00:17:21.986 EAL: Failed to attach device on primary process 00:17:21.986 00:17:21.986 real 0m0.062s 00:17:21.986 user 0m0.027s 00:17:21.986 sys 0m0.034s 00:17:21.986 20:15:16 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:21.986 20:15:17 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:17:21.986 ************************************ 00:17:21.986 END TEST env_pci 00:17:21.986 ************************************ 00:17:21.986 20:15:17 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:17:21.986 20:15:17 env -- env/env.sh@15 -- # uname 00:17:21.986 20:15:17 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:17:21.986 20:15:17 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:17:21.986 20:15:17 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:17:21.986 20:15:17 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:17:21.986 20:15:17 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:21.986 20:15:17 env -- common/autotest_common.sh@10 -- # set +x 00:17:21.986 ************************************ 00:17:21.986 START TEST env_dpdk_post_init 00:17:21.986 ************************************ 00:17:21.986 20:15:17 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:17:21.986 EAL: Detected CPU lcores: 10 00:17:21.986 EAL: Detected NUMA nodes: 1 00:17:21.986 EAL: Detected shared linkage of DPDK 00:17:21.986 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:17:21.986 EAL: Selected IOVA mode 'PA' 00:17:22.245 TELEMETRY: No legacy callbacks, legacy socket not created 00:17:22.245 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:17:22.245 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:17:22.245 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:17:22.245 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:17:22.245 Starting DPDK initialization... 00:17:22.245 Starting SPDK post initialization... 00:17:22.245 SPDK NVMe probe 00:17:22.245 Attaching to 0000:00:10.0 00:17:22.245 Attaching to 0000:00:11.0 00:17:22.245 Attaching to 0000:00:12.0 00:17:22.245 Attaching to 0000:00:13.0 00:17:22.245 Attached to 0000:00:10.0 00:17:22.245 Attached to 0000:00:11.0 00:17:22.245 Attached to 0000:00:13.0 00:17:22.245 Attached to 0000:00:12.0 00:17:22.245 Cleaning up... 
00:17:22.245 ************************************ 00:17:22.245 END TEST env_dpdk_post_init 00:17:22.245 ************************************ 00:17:22.245 00:17:22.245 real 0m0.229s 00:17:22.245 user 0m0.069s 00:17:22.245 sys 0m0.062s 00:17:22.245 20:15:17 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:22.245 20:15:17 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:17:22.245 20:15:17 env -- env/env.sh@26 -- # uname 00:17:22.245 20:15:17 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:17:22.245 20:15:17 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:17:22.245 20:15:17 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:17:22.245 20:15:17 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:22.245 20:15:17 env -- common/autotest_common.sh@10 -- # set +x 00:17:22.245 ************************************ 00:17:22.245 START TEST env_mem_callbacks 00:17:22.245 ************************************ 00:17:22.245 20:15:17 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:17:22.245 EAL: Detected CPU lcores: 10 00:17:22.245 EAL: Detected NUMA nodes: 1 00:17:22.245 EAL: Detected shared linkage of DPDK 00:17:22.245 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:17:22.245 EAL: Selected IOVA mode 'PA' 00:17:22.503 00:17:22.503 00:17:22.503 CUnit - A unit testing framework for C - Version 2.1-3 00:17:22.503 http://cunit.sourceforge.net/ 00:17:22.503 00:17:22.503 00:17:22.503 Suite: memory 00:17:22.503 Test: test ... 00:17:22.503 register 0x200000200000 2097152 00:17:22.503 malloc 3145728 00:17:22.503 TELEMETRY: No legacy callbacks, legacy socket not created 00:17:22.503 register 0x200000400000 4194304 00:17:22.503 buf 0x2000004fffc0 len 3145728 PASSED 00:17:22.503 malloc 64 00:17:22.503 buf 0x2000004ffec0 len 64 PASSED 00:17:22.503 malloc 4194304 00:17:22.503 register 0x200000800000 6291456 00:17:22.503 buf 0x2000009fffc0 len 4194304 PASSED 00:17:22.503 free 0x2000004fffc0 3145728 00:17:22.503 free 0x2000004ffec0 64 00:17:22.503 unregister 0x200000400000 4194304 PASSED 00:17:22.503 free 0x2000009fffc0 4194304 00:17:22.503 unregister 0x200000800000 6291456 PASSED 00:17:22.503 malloc 8388608 00:17:22.503 register 0x200000400000 10485760 00:17:22.503 buf 0x2000005fffc0 len 8388608 PASSED 00:17:22.503 free 0x2000005fffc0 8388608 00:17:22.503 unregister 0x200000400000 10485760 PASSED 00:17:22.503 passed 00:17:22.503 00:17:22.503 Run Summary: Type Total Ran Passed Failed Inactive 00:17:22.503 suites 1 1 n/a 0 0 00:17:22.503 tests 1 1 1 0 0 00:17:22.503 asserts 15 15 15 0 n/a 00:17:22.503 00:17:22.503 Elapsed time = 0.041 seconds 00:17:22.503 00:17:22.503 real 0m0.209s 00:17:22.503 user 0m0.062s 00:17:22.503 sys 0m0.045s 00:17:22.503 20:15:17 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:22.503 20:15:17 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:17:22.503 ************************************ 00:17:22.503 END TEST env_mem_callbacks 00:17:22.503 ************************************ 00:17:22.503 00:17:22.503 real 0m6.228s 00:17:22.503 user 0m4.904s 00:17:22.503 sys 0m0.955s 00:17:22.503 20:15:17 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:22.503 ************************************ 00:17:22.503 END TEST env 00:17:22.503 ************************************ 00:17:22.503 20:15:17 env -- 
common/autotest_common.sh@10 -- # set +x 00:17:22.503 20:15:17 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:17:22.503 20:15:17 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:17:22.503 20:15:17 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:22.503 20:15:17 -- common/autotest_common.sh@10 -- # set +x 00:17:22.503 ************************************ 00:17:22.503 START TEST rpc 00:17:22.503 ************************************ 00:17:22.503 20:15:17 rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:17:22.503 * Looking for test storage... 00:17:22.503 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:17:22.503 20:15:17 rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:17:22.503 20:15:17 rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:17:22.503 20:15:17 rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:17:22.764 20:15:17 rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:17:22.764 20:15:17 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:22.764 20:15:17 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:22.764 20:15:17 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:22.764 20:15:17 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:17:22.764 20:15:17 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:17:22.764 20:15:17 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:17:22.764 20:15:17 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:17:22.764 20:15:17 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:17:22.764 20:15:17 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:17:22.764 20:15:17 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:17:22.764 20:15:17 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:22.764 20:15:17 rpc -- scripts/common.sh@344 -- # case "$op" in 00:17:22.764 20:15:17 rpc -- scripts/common.sh@345 -- # : 1 00:17:22.764 20:15:17 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:22.764 20:15:17 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:22.764 20:15:17 rpc -- scripts/common.sh@365 -- # decimal 1 00:17:22.764 20:15:17 rpc -- scripts/common.sh@353 -- # local d=1 00:17:22.764 20:15:17 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:22.764 20:15:17 rpc -- scripts/common.sh@355 -- # echo 1 00:17:22.764 20:15:17 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:17:22.764 20:15:17 rpc -- scripts/common.sh@366 -- # decimal 2 00:17:22.764 20:15:17 rpc -- scripts/common.sh@353 -- # local d=2 00:17:22.764 20:15:17 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:22.764 20:15:17 rpc -- scripts/common.sh@355 -- # echo 2 00:17:22.764 20:15:17 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:17:22.764 20:15:17 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:22.764 20:15:17 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:22.764 20:15:17 rpc -- scripts/common.sh@368 -- # return 0 00:17:22.764 20:15:17 rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:22.764 20:15:17 rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:17:22.764 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:22.764 --rc genhtml_branch_coverage=1 00:17:22.764 --rc genhtml_function_coverage=1 00:17:22.764 --rc genhtml_legend=1 00:17:22.764 --rc geninfo_all_blocks=1 00:17:22.764 --rc geninfo_unexecuted_blocks=1 00:17:22.764 00:17:22.764 ' 00:17:22.764 20:15:17 rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:17:22.764 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:22.764 --rc genhtml_branch_coverage=1 00:17:22.764 --rc genhtml_function_coverage=1 00:17:22.764 --rc genhtml_legend=1 00:17:22.764 --rc geninfo_all_blocks=1 00:17:22.764 --rc geninfo_unexecuted_blocks=1 00:17:22.764 00:17:22.764 ' 00:17:22.764 20:15:17 rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:17:22.764 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:22.764 --rc genhtml_branch_coverage=1 00:17:22.764 --rc genhtml_function_coverage=1 00:17:22.764 --rc genhtml_legend=1 00:17:22.764 --rc geninfo_all_blocks=1 00:17:22.764 --rc geninfo_unexecuted_blocks=1 00:17:22.764 00:17:22.764 ' 00:17:22.764 20:15:17 rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:17:22.764 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:22.764 --rc genhtml_branch_coverage=1 00:17:22.764 --rc genhtml_function_coverage=1 00:17:22.764 --rc genhtml_legend=1 00:17:22.764 --rc geninfo_all_blocks=1 00:17:22.764 --rc geninfo_unexecuted_blocks=1 00:17:22.764 00:17:22.764 ' 00:17:22.764 20:15:17 rpc -- rpc/rpc.sh@65 -- # spdk_pid=57444 00:17:22.764 20:15:17 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:17:22.764 20:15:17 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:17:22.764 20:15:17 rpc -- rpc/rpc.sh@67 -- # waitforlisten 57444 00:17:22.764 20:15:17 rpc -- common/autotest_common.sh@831 -- # '[' -z 57444 ']' 00:17:22.764 20:15:17 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:22.764 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:22.764 20:15:17 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:22.764 20:15:17 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:17:22.764 20:15:17 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:22.764 20:15:17 rpc -- common/autotest_common.sh@10 -- # set +x 00:17:22.764 [2024-10-01 20:15:17.823494] Starting SPDK v25.01-pre git sha1 0c2005fb5 / DPDK 24.03.0 initialization... 00:17:22.764 [2024-10-01 20:15:17.823618] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57444 ] 00:17:23.024 [2024-10-01 20:15:17.973922] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:23.024 [2024-10-01 20:15:18.159837] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:17:23.025 [2024-10-01 20:15:18.159890] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 57444' to capture a snapshot of events at runtime. 00:17:23.025 [2024-10-01 20:15:18.159900] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:23.025 [2024-10-01 20:15:18.159909] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:23.025 [2024-10-01 20:15:18.159916] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid57444 for offline analysis/debug. 00:17:23.025 [2024-10-01 20:15:18.159949] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:23.964 20:15:18 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:23.964 20:15:18 rpc -- common/autotest_common.sh@864 -- # return 0 00:17:23.964 20:15:18 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:17:23.964 20:15:18 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:17:23.964 20:15:18 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:17:23.964 20:15:18 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:17:23.964 20:15:18 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:17:23.964 20:15:18 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:23.964 20:15:18 rpc -- common/autotest_common.sh@10 -- # set +x 00:17:23.964 ************************************ 00:17:23.964 START TEST rpc_integrity 00:17:23.964 ************************************ 00:17:23.964 20:15:18 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:17:23.964 20:15:18 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:17:23.964 20:15:18 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.965 20:15:18 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:17:23.965 20:15:18 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.965 20:15:18 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:17:23.965 20:15:18 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:17:23.965 20:15:18 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:17:23.965 20:15:18 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:17:23.965 20:15:18 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.965 20:15:18 
rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:17:23.965 20:15:18 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.965 20:15:18 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:17:23.965 20:15:18 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:17:23.965 20:15:18 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.965 20:15:18 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:17:23.965 20:15:19 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.965 20:15:19 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:17:23.965 { 00:17:23.965 "name": "Malloc0", 00:17:23.965 "aliases": [ 00:17:23.965 "4cccc5e5-291e-4636-a641-5c7a5df6d1e1" 00:17:23.965 ], 00:17:23.965 "product_name": "Malloc disk", 00:17:23.965 "block_size": 512, 00:17:23.965 "num_blocks": 16384, 00:17:23.965 "uuid": "4cccc5e5-291e-4636-a641-5c7a5df6d1e1", 00:17:23.965 "assigned_rate_limits": { 00:17:23.965 "rw_ios_per_sec": 0, 00:17:23.965 "rw_mbytes_per_sec": 0, 00:17:23.965 "r_mbytes_per_sec": 0, 00:17:23.965 "w_mbytes_per_sec": 0 00:17:23.965 }, 00:17:23.965 "claimed": false, 00:17:23.965 "zoned": false, 00:17:23.965 "supported_io_types": { 00:17:23.965 "read": true, 00:17:23.965 "write": true, 00:17:23.965 "unmap": true, 00:17:23.965 "flush": true, 00:17:23.965 "reset": true, 00:17:23.965 "nvme_admin": false, 00:17:23.965 "nvme_io": false, 00:17:23.965 "nvme_io_md": false, 00:17:23.965 "write_zeroes": true, 00:17:23.965 "zcopy": true, 00:17:23.965 "get_zone_info": false, 00:17:23.965 "zone_management": false, 00:17:23.965 "zone_append": false, 00:17:23.965 "compare": false, 00:17:23.965 "compare_and_write": false, 00:17:23.965 "abort": true, 00:17:23.965 "seek_hole": false, 00:17:23.965 "seek_data": false, 00:17:23.965 "copy": true, 00:17:23.965 "nvme_iov_md": false 00:17:23.965 }, 00:17:23.965 "memory_domains": [ 00:17:23.965 { 00:17:23.965 "dma_device_id": "system", 00:17:23.965 "dma_device_type": 1 00:17:23.965 }, 00:17:23.965 { 00:17:23.965 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:23.965 "dma_device_type": 2 00:17:23.965 } 00:17:23.965 ], 00:17:23.965 "driver_specific": {} 00:17:23.965 } 00:17:23.965 ]' 00:17:23.965 20:15:19 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:17:23.965 20:15:19 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:17:23.965 20:15:19 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:17:23.965 20:15:19 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.965 20:15:19 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:17:23.965 [2024-10-01 20:15:19.040398] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:17:23.965 [2024-10-01 20:15:19.040493] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:23.965 [2024-10-01 20:15:19.040528] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:17:23.965 [2024-10-01 20:15:19.040546] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:23.965 [2024-10-01 20:15:19.043817] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:23.965 [2024-10-01 20:15:19.043880] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:17:23.965 Passthru0 00:17:23.965 20:15:19 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.965 
20:15:19 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:17:23.965 20:15:19 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.965 20:15:19 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:17:23.965 20:15:19 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.965 20:15:19 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:17:23.965 { 00:17:23.965 "name": "Malloc0", 00:17:23.965 "aliases": [ 00:17:23.965 "4cccc5e5-291e-4636-a641-5c7a5df6d1e1" 00:17:23.965 ], 00:17:23.965 "product_name": "Malloc disk", 00:17:23.965 "block_size": 512, 00:17:23.965 "num_blocks": 16384, 00:17:23.965 "uuid": "4cccc5e5-291e-4636-a641-5c7a5df6d1e1", 00:17:23.965 "assigned_rate_limits": { 00:17:23.965 "rw_ios_per_sec": 0, 00:17:23.965 "rw_mbytes_per_sec": 0, 00:17:23.965 "r_mbytes_per_sec": 0, 00:17:23.965 "w_mbytes_per_sec": 0 00:17:23.965 }, 00:17:23.965 "claimed": true, 00:17:23.965 "claim_type": "exclusive_write", 00:17:23.965 "zoned": false, 00:17:23.965 "supported_io_types": { 00:17:23.965 "read": true, 00:17:23.965 "write": true, 00:17:23.965 "unmap": true, 00:17:23.965 "flush": true, 00:17:23.965 "reset": true, 00:17:23.965 "nvme_admin": false, 00:17:23.965 "nvme_io": false, 00:17:23.965 "nvme_io_md": false, 00:17:23.965 "write_zeroes": true, 00:17:23.965 "zcopy": true, 00:17:23.965 "get_zone_info": false, 00:17:23.965 "zone_management": false, 00:17:23.965 "zone_append": false, 00:17:23.965 "compare": false, 00:17:23.965 "compare_and_write": false, 00:17:23.965 "abort": true, 00:17:23.965 "seek_hole": false, 00:17:23.965 "seek_data": false, 00:17:23.965 "copy": true, 00:17:23.965 "nvme_iov_md": false 00:17:23.965 }, 00:17:23.965 "memory_domains": [ 00:17:23.965 { 00:17:23.965 "dma_device_id": "system", 00:17:23.965 "dma_device_type": 1 00:17:23.965 }, 00:17:23.965 { 00:17:23.965 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:23.965 "dma_device_type": 2 00:17:23.965 } 00:17:23.965 ], 00:17:23.965 "driver_specific": {} 00:17:23.965 }, 00:17:23.965 { 00:17:23.965 "name": "Passthru0", 00:17:23.965 "aliases": [ 00:17:23.965 "fb369652-ff84-5be6-a4d2-c56be36b077d" 00:17:23.965 ], 00:17:23.965 "product_name": "passthru", 00:17:23.965 "block_size": 512, 00:17:23.965 "num_blocks": 16384, 00:17:23.965 "uuid": "fb369652-ff84-5be6-a4d2-c56be36b077d", 00:17:23.965 "assigned_rate_limits": { 00:17:23.965 "rw_ios_per_sec": 0, 00:17:23.965 "rw_mbytes_per_sec": 0, 00:17:23.965 "r_mbytes_per_sec": 0, 00:17:23.965 "w_mbytes_per_sec": 0 00:17:23.965 }, 00:17:23.965 "claimed": false, 00:17:23.965 "zoned": false, 00:17:23.965 "supported_io_types": { 00:17:23.965 "read": true, 00:17:23.965 "write": true, 00:17:23.965 "unmap": true, 00:17:23.965 "flush": true, 00:17:23.965 "reset": true, 00:17:23.965 "nvme_admin": false, 00:17:23.965 "nvme_io": false, 00:17:23.965 "nvme_io_md": false, 00:17:23.965 "write_zeroes": true, 00:17:23.965 "zcopy": true, 00:17:23.965 "get_zone_info": false, 00:17:23.965 "zone_management": false, 00:17:23.965 "zone_append": false, 00:17:23.965 "compare": false, 00:17:23.965 "compare_and_write": false, 00:17:23.965 "abort": true, 00:17:23.965 "seek_hole": false, 00:17:23.965 "seek_data": false, 00:17:23.965 "copy": true, 00:17:23.965 "nvme_iov_md": false 00:17:23.965 }, 00:17:23.965 "memory_domains": [ 00:17:23.965 { 00:17:23.965 "dma_device_id": "system", 00:17:23.965 "dma_device_type": 1 00:17:23.965 }, 00:17:23.965 { 00:17:23.965 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:23.965 "dma_device_type": 2 
00:17:23.965 } 00:17:23.965 ], 00:17:23.965 "driver_specific": { 00:17:23.965 "passthru": { 00:17:23.965 "name": "Passthru0", 00:17:23.965 "base_bdev_name": "Malloc0" 00:17:23.965 } 00:17:23.965 } 00:17:23.965 } 00:17:23.965 ]' 00:17:23.965 20:15:19 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:17:23.965 20:15:19 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:17:23.965 20:15:19 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:17:23.965 20:15:19 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.965 20:15:19 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:17:23.965 20:15:19 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.965 20:15:19 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:17:23.965 20:15:19 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.965 20:15:19 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:17:23.965 20:15:19 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.965 20:15:19 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:17:23.965 20:15:19 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.965 20:15:19 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:17:23.965 20:15:19 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.965 20:15:19 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:17:23.965 20:15:19 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:17:23.965 ************************************ 00:17:23.965 END TEST rpc_integrity 00:17:23.965 ************************************ 00:17:23.965 20:15:19 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:17:23.965 00:17:23.965 real 0m0.234s 00:17:23.965 user 0m0.109s 00:17:23.966 sys 0m0.034s 00:17:23.966 20:15:19 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:23.966 20:15:19 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:17:24.224 20:15:19 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:17:24.224 20:15:19 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:17:24.224 20:15:19 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:24.224 20:15:19 rpc -- common/autotest_common.sh@10 -- # set +x 00:17:24.224 ************************************ 00:17:24.224 START TEST rpc_plugins 00:17:24.224 ************************************ 00:17:24.224 20:15:19 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:17:24.224 20:15:19 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:17:24.224 20:15:19 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.224 20:15:19 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:17:24.224 20:15:19 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.224 20:15:19 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:17:24.224 20:15:19 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:17:24.224 20:15:19 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.224 20:15:19 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:17:24.224 20:15:19 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.224 20:15:19 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:17:24.224 { 00:17:24.224 "name": "Malloc1", 00:17:24.224 "aliases": 
[ 00:17:24.224 "ef644b1f-d5f5-43a5-826c-047bff8a8bd7" 00:17:24.224 ], 00:17:24.224 "product_name": "Malloc disk", 00:17:24.224 "block_size": 4096, 00:17:24.224 "num_blocks": 256, 00:17:24.224 "uuid": "ef644b1f-d5f5-43a5-826c-047bff8a8bd7", 00:17:24.224 "assigned_rate_limits": { 00:17:24.224 "rw_ios_per_sec": 0, 00:17:24.224 "rw_mbytes_per_sec": 0, 00:17:24.224 "r_mbytes_per_sec": 0, 00:17:24.224 "w_mbytes_per_sec": 0 00:17:24.224 }, 00:17:24.224 "claimed": false, 00:17:24.224 "zoned": false, 00:17:24.224 "supported_io_types": { 00:17:24.224 "read": true, 00:17:24.224 "write": true, 00:17:24.224 "unmap": true, 00:17:24.224 "flush": true, 00:17:24.224 "reset": true, 00:17:24.224 "nvme_admin": false, 00:17:24.224 "nvme_io": false, 00:17:24.224 "nvme_io_md": false, 00:17:24.224 "write_zeroes": true, 00:17:24.224 "zcopy": true, 00:17:24.224 "get_zone_info": false, 00:17:24.224 "zone_management": false, 00:17:24.224 "zone_append": false, 00:17:24.224 "compare": false, 00:17:24.224 "compare_and_write": false, 00:17:24.224 "abort": true, 00:17:24.224 "seek_hole": false, 00:17:24.224 "seek_data": false, 00:17:24.224 "copy": true, 00:17:24.224 "nvme_iov_md": false 00:17:24.224 }, 00:17:24.224 "memory_domains": [ 00:17:24.224 { 00:17:24.224 "dma_device_id": "system", 00:17:24.224 "dma_device_type": 1 00:17:24.224 }, 00:17:24.224 { 00:17:24.224 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:24.224 "dma_device_type": 2 00:17:24.224 } 00:17:24.224 ], 00:17:24.224 "driver_specific": {} 00:17:24.224 } 00:17:24.224 ]' 00:17:24.224 20:15:19 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:17:24.224 20:15:19 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:17:24.224 20:15:19 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:17:24.224 20:15:19 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.224 20:15:19 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:17:24.224 20:15:19 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.224 20:15:19 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:17:24.224 20:15:19 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.224 20:15:19 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:17:24.224 20:15:19 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.224 20:15:19 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:17:24.224 20:15:19 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:17:24.224 ************************************ 00:17:24.224 END TEST rpc_plugins 00:17:24.224 ************************************ 00:17:24.224 20:15:19 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:17:24.224 00:17:24.224 real 0m0.113s 00:17:24.224 user 0m0.064s 00:17:24.224 sys 0m0.018s 00:17:24.224 20:15:19 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:24.224 20:15:19 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:17:24.224 20:15:19 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:17:24.224 20:15:19 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:17:24.224 20:15:19 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:24.224 20:15:19 rpc -- common/autotest_common.sh@10 -- # set +x 00:17:24.224 ************************************ 00:17:24.224 START TEST rpc_trace_cmd_test 00:17:24.224 ************************************ 00:17:24.224 20:15:19 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 
-- # rpc_trace_cmd_test 00:17:24.224 20:15:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:17:24.224 20:15:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:17:24.224 20:15:19 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.224 20:15:19 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:17:24.224 20:15:19 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.224 20:15:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:17:24.224 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid57444", 00:17:24.224 "tpoint_group_mask": "0x8", 00:17:24.224 "iscsi_conn": { 00:17:24.224 "mask": "0x2", 00:17:24.224 "tpoint_mask": "0x0" 00:17:24.224 }, 00:17:24.224 "scsi": { 00:17:24.224 "mask": "0x4", 00:17:24.224 "tpoint_mask": "0x0" 00:17:24.224 }, 00:17:24.224 "bdev": { 00:17:24.224 "mask": "0x8", 00:17:24.224 "tpoint_mask": "0xffffffffffffffff" 00:17:24.224 }, 00:17:24.224 "nvmf_rdma": { 00:17:24.224 "mask": "0x10", 00:17:24.224 "tpoint_mask": "0x0" 00:17:24.224 }, 00:17:24.224 "nvmf_tcp": { 00:17:24.224 "mask": "0x20", 00:17:24.224 "tpoint_mask": "0x0" 00:17:24.224 }, 00:17:24.224 "ftl": { 00:17:24.224 "mask": "0x40", 00:17:24.224 "tpoint_mask": "0x0" 00:17:24.224 }, 00:17:24.224 "blobfs": { 00:17:24.224 "mask": "0x80", 00:17:24.224 "tpoint_mask": "0x0" 00:17:24.224 }, 00:17:24.224 "dsa": { 00:17:24.224 "mask": "0x200", 00:17:24.224 "tpoint_mask": "0x0" 00:17:24.224 }, 00:17:24.224 "thread": { 00:17:24.224 "mask": "0x400", 00:17:24.224 "tpoint_mask": "0x0" 00:17:24.224 }, 00:17:24.224 "nvme_pcie": { 00:17:24.224 "mask": "0x800", 00:17:24.224 "tpoint_mask": "0x0" 00:17:24.224 }, 00:17:24.224 "iaa": { 00:17:24.224 "mask": "0x1000", 00:17:24.224 "tpoint_mask": "0x0" 00:17:24.224 }, 00:17:24.224 "nvme_tcp": { 00:17:24.224 "mask": "0x2000", 00:17:24.224 "tpoint_mask": "0x0" 00:17:24.224 }, 00:17:24.224 "bdev_nvme": { 00:17:24.224 "mask": "0x4000", 00:17:24.224 "tpoint_mask": "0x0" 00:17:24.224 }, 00:17:24.224 "sock": { 00:17:24.224 "mask": "0x8000", 00:17:24.224 "tpoint_mask": "0x0" 00:17:24.224 }, 00:17:24.224 "blob": { 00:17:24.224 "mask": "0x10000", 00:17:24.225 "tpoint_mask": "0x0" 00:17:24.225 }, 00:17:24.225 "bdev_raid": { 00:17:24.225 "mask": "0x20000", 00:17:24.225 "tpoint_mask": "0x0" 00:17:24.225 } 00:17:24.225 }' 00:17:24.225 20:15:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:17:24.225 20:15:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 18 -gt 2 ']' 00:17:24.225 20:15:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:17:24.482 20:15:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:17:24.482 20:15:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:17:24.482 20:15:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:17:24.482 20:15:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:17:24.482 20:15:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:17:24.482 20:15:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:17:24.482 ************************************ 00:17:24.482 END TEST rpc_trace_cmd_test 00:17:24.482 ************************************ 00:17:24.482 20:15:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:17:24.482 00:17:24.482 real 0m0.156s 00:17:24.482 user 0m0.124s 00:17:24.482 sys 0m0.021s 00:17:24.482 20:15:19 rpc.rpc_trace_cmd_test -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:17:24.482 20:15:19 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:17:24.482 20:15:19 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:17:24.482 20:15:19 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:17:24.482 20:15:19 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:17:24.482 20:15:19 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:17:24.482 20:15:19 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:24.482 20:15:19 rpc -- common/autotest_common.sh@10 -- # set +x 00:17:24.482 ************************************ 00:17:24.482 START TEST rpc_daemon_integrity 00:17:24.482 ************************************ 00:17:24.482 20:15:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:17:24.482 20:15:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:17:24.482 20:15:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.482 20:15:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:17:24.482 20:15:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.482 20:15:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:17:24.482 20:15:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:17:24.482 20:15:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:17:24.482 20:15:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:17:24.482 20:15:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.482 20:15:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:17:24.482 20:15:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.482 20:15:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:17:24.482 20:15:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:17:24.482 20:15:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.482 20:15:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:17:24.482 20:15:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.482 20:15:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:17:24.482 { 00:17:24.482 "name": "Malloc2", 00:17:24.482 "aliases": [ 00:17:24.482 "544a5219-5feb-4e93-b76c-2eebac867c14" 00:17:24.482 ], 00:17:24.482 "product_name": "Malloc disk", 00:17:24.482 "block_size": 512, 00:17:24.482 "num_blocks": 16384, 00:17:24.482 "uuid": "544a5219-5feb-4e93-b76c-2eebac867c14", 00:17:24.482 "assigned_rate_limits": { 00:17:24.482 "rw_ios_per_sec": 0, 00:17:24.482 "rw_mbytes_per_sec": 0, 00:17:24.482 "r_mbytes_per_sec": 0, 00:17:24.482 "w_mbytes_per_sec": 0 00:17:24.482 }, 00:17:24.482 "claimed": false, 00:17:24.482 "zoned": false, 00:17:24.482 "supported_io_types": { 00:17:24.482 "read": true, 00:17:24.482 "write": true, 00:17:24.482 "unmap": true, 00:17:24.482 "flush": true, 00:17:24.482 "reset": true, 00:17:24.482 "nvme_admin": false, 00:17:24.482 "nvme_io": false, 00:17:24.482 "nvme_io_md": false, 00:17:24.482 "write_zeroes": true, 00:17:24.482 "zcopy": true, 00:17:24.482 "get_zone_info": false, 00:17:24.482 "zone_management": false, 00:17:24.482 "zone_append": false, 00:17:24.482 "compare": false, 00:17:24.482 "compare_and_write": false, 00:17:24.482 "abort": true, 00:17:24.482 "seek_hole": false, 00:17:24.482 
"seek_data": false, 00:17:24.482 "copy": true, 00:17:24.482 "nvme_iov_md": false 00:17:24.482 }, 00:17:24.482 "memory_domains": [ 00:17:24.482 { 00:17:24.482 "dma_device_id": "system", 00:17:24.482 "dma_device_type": 1 00:17:24.482 }, 00:17:24.482 { 00:17:24.482 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:24.482 "dma_device_type": 2 00:17:24.482 } 00:17:24.482 ], 00:17:24.482 "driver_specific": {} 00:17:24.482 } 00:17:24.482 ]' 00:17:24.482 20:15:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:17:24.482 20:15:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:17:24.482 20:15:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:17:24.482 20:15:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.482 20:15:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:17:24.482 [2024-10-01 20:15:19.661297] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:17:24.482 [2024-10-01 20:15:19.661359] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:24.482 [2024-10-01 20:15:19.661379] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:17:24.482 [2024-10-01 20:15:19.661390] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:24.482 [2024-10-01 20:15:19.663601] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:24.482 [2024-10-01 20:15:19.663751] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:17:24.482 Passthru0 00:17:24.482 20:15:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.482 20:15:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:17:24.482 20:15:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.482 20:15:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:17:24.482 20:15:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.482 20:15:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:17:24.482 { 00:17:24.482 "name": "Malloc2", 00:17:24.482 "aliases": [ 00:17:24.482 "544a5219-5feb-4e93-b76c-2eebac867c14" 00:17:24.482 ], 00:17:24.482 "product_name": "Malloc disk", 00:17:24.482 "block_size": 512, 00:17:24.482 "num_blocks": 16384, 00:17:24.482 "uuid": "544a5219-5feb-4e93-b76c-2eebac867c14", 00:17:24.482 "assigned_rate_limits": { 00:17:24.482 "rw_ios_per_sec": 0, 00:17:24.482 "rw_mbytes_per_sec": 0, 00:17:24.482 "r_mbytes_per_sec": 0, 00:17:24.482 "w_mbytes_per_sec": 0 00:17:24.482 }, 00:17:24.482 "claimed": true, 00:17:24.482 "claim_type": "exclusive_write", 00:17:24.482 "zoned": false, 00:17:24.482 "supported_io_types": { 00:17:24.482 "read": true, 00:17:24.482 "write": true, 00:17:24.482 "unmap": true, 00:17:24.482 "flush": true, 00:17:24.482 "reset": true, 00:17:24.482 "nvme_admin": false, 00:17:24.482 "nvme_io": false, 00:17:24.482 "nvme_io_md": false, 00:17:24.482 "write_zeroes": true, 00:17:24.482 "zcopy": true, 00:17:24.482 "get_zone_info": false, 00:17:24.482 "zone_management": false, 00:17:24.482 "zone_append": false, 00:17:24.482 "compare": false, 00:17:24.482 "compare_and_write": false, 00:17:24.482 "abort": true, 00:17:24.482 "seek_hole": false, 00:17:24.482 "seek_data": false, 00:17:24.482 "copy": true, 00:17:24.482 "nvme_iov_md": false 00:17:24.482 }, 00:17:24.482 
"memory_domains": [ 00:17:24.482 { 00:17:24.482 "dma_device_id": "system", 00:17:24.482 "dma_device_type": 1 00:17:24.482 }, 00:17:24.482 { 00:17:24.482 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:24.482 "dma_device_type": 2 00:17:24.482 } 00:17:24.482 ], 00:17:24.482 "driver_specific": {} 00:17:24.482 }, 00:17:24.482 { 00:17:24.482 "name": "Passthru0", 00:17:24.482 "aliases": [ 00:17:24.482 "fdcf9988-d443-527a-9609-dfd094861f11" 00:17:24.482 ], 00:17:24.482 "product_name": "passthru", 00:17:24.482 "block_size": 512, 00:17:24.482 "num_blocks": 16384, 00:17:24.482 "uuid": "fdcf9988-d443-527a-9609-dfd094861f11", 00:17:24.482 "assigned_rate_limits": { 00:17:24.482 "rw_ios_per_sec": 0, 00:17:24.482 "rw_mbytes_per_sec": 0, 00:17:24.482 "r_mbytes_per_sec": 0, 00:17:24.482 "w_mbytes_per_sec": 0 00:17:24.482 }, 00:17:24.482 "claimed": false, 00:17:24.482 "zoned": false, 00:17:24.482 "supported_io_types": { 00:17:24.482 "read": true, 00:17:24.482 "write": true, 00:17:24.482 "unmap": true, 00:17:24.482 "flush": true, 00:17:24.482 "reset": true, 00:17:24.482 "nvme_admin": false, 00:17:24.482 "nvme_io": false, 00:17:24.482 "nvme_io_md": false, 00:17:24.482 "write_zeroes": true, 00:17:24.482 "zcopy": true, 00:17:24.482 "get_zone_info": false, 00:17:24.482 "zone_management": false, 00:17:24.482 "zone_append": false, 00:17:24.482 "compare": false, 00:17:24.482 "compare_and_write": false, 00:17:24.482 "abort": true, 00:17:24.482 "seek_hole": false, 00:17:24.482 "seek_data": false, 00:17:24.482 "copy": true, 00:17:24.482 "nvme_iov_md": false 00:17:24.482 }, 00:17:24.482 "memory_domains": [ 00:17:24.482 { 00:17:24.482 "dma_device_id": "system", 00:17:24.482 "dma_device_type": 1 00:17:24.482 }, 00:17:24.482 { 00:17:24.482 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:24.482 "dma_device_type": 2 00:17:24.482 } 00:17:24.482 ], 00:17:24.482 "driver_specific": { 00:17:24.482 "passthru": { 00:17:24.482 "name": "Passthru0", 00:17:24.482 "base_bdev_name": "Malloc2" 00:17:24.482 } 00:17:24.482 } 00:17:24.482 } 00:17:24.482 ]' 00:17:24.483 20:15:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:17:24.739 20:15:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:17:24.739 20:15:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:17:24.739 20:15:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.739 20:15:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:17:24.739 20:15:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.739 20:15:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:17:24.739 20:15:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.739 20:15:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:17:24.739 20:15:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.739 20:15:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:17:24.739 20:15:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.739 20:15:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:17:24.739 20:15:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.739 20:15:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:17:24.739 20:15:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:17:24.739 
************************************ 00:17:24.739 END TEST rpc_daemon_integrity 00:17:24.739 ************************************ 00:17:24.739 20:15:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:17:24.739 00:17:24.739 real 0m0.239s 00:17:24.739 user 0m0.132s 00:17:24.739 sys 0m0.023s 00:17:24.739 20:15:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:24.739 20:15:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:17:24.739 20:15:19 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:17:24.739 20:15:19 rpc -- rpc/rpc.sh@84 -- # killprocess 57444 00:17:24.739 20:15:19 rpc -- common/autotest_common.sh@950 -- # '[' -z 57444 ']' 00:17:24.739 20:15:19 rpc -- common/autotest_common.sh@954 -- # kill -0 57444 00:17:24.739 20:15:19 rpc -- common/autotest_common.sh@955 -- # uname 00:17:24.739 20:15:19 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:24.739 20:15:19 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57444 00:17:24.739 killing process with pid 57444 00:17:24.739 20:15:19 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:24.739 20:15:19 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:24.739 20:15:19 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57444' 00:17:24.739 20:15:19 rpc -- common/autotest_common.sh@969 -- # kill 57444 00:17:24.739 20:15:19 rpc -- common/autotest_common.sh@974 -- # wait 57444 00:17:27.262 00:17:27.262 real 0m4.308s 00:17:27.262 user 0m4.588s 00:17:27.262 sys 0m0.650s 00:17:27.262 20:15:21 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:27.262 20:15:21 rpc -- common/autotest_common.sh@10 -- # set +x 00:17:27.262 ************************************ 00:17:27.262 END TEST rpc 00:17:27.262 ************************************ 00:17:27.262 20:15:21 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:17:27.262 20:15:21 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:17:27.262 20:15:21 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:27.262 20:15:21 -- common/autotest_common.sh@10 -- # set +x 00:17:27.262 ************************************ 00:17:27.262 START TEST skip_rpc 00:17:27.262 ************************************ 00:17:27.262 20:15:21 skip_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:17:27.262 * Looking for test storage... 
00:17:27.262 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:17:27.262 20:15:21 skip_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:17:27.262 20:15:21 skip_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:17:27.262 20:15:21 skip_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:17:27.262 20:15:22 skip_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:17:27.262 20:15:22 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:27.262 20:15:22 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:27.262 20:15:22 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:27.262 20:15:22 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:17:27.262 20:15:22 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:17:27.262 20:15:22 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:17:27.262 20:15:22 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:17:27.262 20:15:22 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:17:27.262 20:15:22 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:17:27.262 20:15:22 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:17:27.262 20:15:22 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:27.262 20:15:22 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:17:27.262 20:15:22 skip_rpc -- scripts/common.sh@345 -- # : 1 00:17:27.262 20:15:22 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:27.262 20:15:22 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:27.262 20:15:22 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:17:27.262 20:15:22 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:17:27.262 20:15:22 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:27.262 20:15:22 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:17:27.262 20:15:22 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:17:27.262 20:15:22 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:17:27.262 20:15:22 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:17:27.262 20:15:22 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:27.262 20:15:22 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:17:27.262 20:15:22 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:17:27.262 20:15:22 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:27.262 20:15:22 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:27.262 20:15:22 skip_rpc -- scripts/common.sh@368 -- # return 0 00:17:27.262 20:15:22 skip_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:27.262 20:15:22 skip_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:17:27.262 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:27.262 --rc genhtml_branch_coverage=1 00:17:27.262 --rc genhtml_function_coverage=1 00:17:27.262 --rc genhtml_legend=1 00:17:27.262 --rc geninfo_all_blocks=1 00:17:27.262 --rc geninfo_unexecuted_blocks=1 00:17:27.262 00:17:27.262 ' 00:17:27.262 20:15:22 skip_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:17:27.262 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:27.262 --rc genhtml_branch_coverage=1 00:17:27.262 --rc genhtml_function_coverage=1 00:17:27.262 --rc genhtml_legend=1 00:17:27.262 --rc geninfo_all_blocks=1 00:17:27.262 --rc geninfo_unexecuted_blocks=1 00:17:27.262 00:17:27.262 ' 00:17:27.262 20:15:22 skip_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 
00:17:27.262 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:27.262 --rc genhtml_branch_coverage=1 00:17:27.262 --rc genhtml_function_coverage=1 00:17:27.262 --rc genhtml_legend=1 00:17:27.262 --rc geninfo_all_blocks=1 00:17:27.262 --rc geninfo_unexecuted_blocks=1 00:17:27.262 00:17:27.262 ' 00:17:27.262 20:15:22 skip_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:17:27.262 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:27.262 --rc genhtml_branch_coverage=1 00:17:27.262 --rc genhtml_function_coverage=1 00:17:27.262 --rc genhtml_legend=1 00:17:27.262 --rc geninfo_all_blocks=1 00:17:27.262 --rc geninfo_unexecuted_blocks=1 00:17:27.262 00:17:27.262 ' 00:17:27.262 20:15:22 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:17:27.262 20:15:22 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:17:27.262 20:15:22 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:17:27.262 20:15:22 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:17:27.262 20:15:22 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:27.262 20:15:22 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:27.262 ************************************ 00:17:27.262 START TEST skip_rpc 00:17:27.262 ************************************ 00:17:27.262 20:15:22 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:17:27.262 20:15:22 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=57668 00:17:27.262 20:15:22 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:17:27.262 20:15:22 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:17:27.262 20:15:22 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:17:27.262 [2024-10-01 20:15:22.145291] Starting SPDK v25.01-pre git sha1 0c2005fb5 / DPDK 24.03.0 initialization... 
00:17:27.262 [2024-10-01 20:15:22.145534] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57668 ] 00:17:27.262 [2024-10-01 20:15:22.293763] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:27.519 [2024-10-01 20:15:22.480497] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:32.779 20:15:27 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:17:32.779 20:15:27 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:17:32.779 20:15:27 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:17:32.779 20:15:27 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:17:32.779 20:15:27 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:32.779 20:15:27 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:17:32.779 20:15:27 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:32.779 20:15:27 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:17:32.779 20:15:27 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.779 20:15:27 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:32.779 20:15:27 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:17:32.779 20:15:27 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:17:32.779 20:15:27 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:32.779 20:15:27 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:32.779 20:15:27 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:32.779 20:15:27 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:17:32.779 20:15:27 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 57668 00:17:32.779 20:15:27 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 57668 ']' 00:17:32.779 20:15:27 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 57668 00:17:32.779 20:15:27 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:17:32.779 20:15:27 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:32.779 20:15:27 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57668 00:17:32.779 killing process with pid 57668 00:17:32.779 20:15:27 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:32.779 20:15:27 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:32.779 20:15:27 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57668' 00:17:32.779 20:15:27 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 57668 00:17:32.779 20:15:27 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 57668 00:17:33.713 ************************************ 00:17:33.713 END TEST skip_rpc 00:17:33.713 ************************************ 00:17:33.713 00:17:33.713 real 0m6.649s 00:17:33.713 user 0m6.196s 00:17:33.713 sys 0m0.328s 00:17:33.713 20:15:28 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:33.713 20:15:28 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # 
set +x 00:17:33.713 20:15:28 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:17:33.713 20:15:28 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:17:33.713 20:15:28 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:33.713 20:15:28 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:33.713 ************************************ 00:17:33.713 START TEST skip_rpc_with_json 00:17:33.713 ************************************ 00:17:33.713 20:15:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:17:33.713 20:15:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:17:33.713 20:15:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=57766 00:17:33.713 20:15:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:17:33.713 20:15:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:17:33.713 20:15:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 57766 00:17:33.713 20:15:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 57766 ']' 00:17:33.713 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:33.713 20:15:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:33.713 20:15:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:33.713 20:15:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:33.713 20:15:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:33.713 20:15:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:17:33.713 [2024-10-01 20:15:28.840477] Starting SPDK v25.01-pre git sha1 0c2005fb5 / DPDK 24.03.0 initialization... 
00:17:33.713 [2024-10-01 20:15:28.840613] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57766 ] 00:17:33.971 [2024-10-01 20:15:28.985787] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:33.971 [2024-10-01 20:15:29.143998] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:34.923 20:15:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:34.923 20:15:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:17:34.923 20:15:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:17:34.923 20:15:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.923 20:15:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:17:34.923 [2024-10-01 20:15:29.821575] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:17:34.923 request: 00:17:34.923 { 00:17:34.923 "trtype": "tcp", 00:17:34.923 "method": "nvmf_get_transports", 00:17:34.923 "req_id": 1 00:17:34.923 } 00:17:34.923 Got JSON-RPC error response 00:17:34.923 response: 00:17:34.923 { 00:17:34.923 "code": -19, 00:17:34.923 "message": "No such device" 00:17:34.923 } 00:17:34.923 20:15:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:17:34.923 20:15:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:17:34.923 20:15:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.923 20:15:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:17:34.923 [2024-10-01 20:15:29.829700] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:34.923 20:15:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.923 20:15:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:17:34.923 20:15:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.923 20:15:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:17:34.923 20:15:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.923 20:15:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:17:34.923 { 00:17:34.923 "subsystems": [ 00:17:34.923 { 00:17:34.923 "subsystem": "fsdev", 00:17:34.923 "config": [ 00:17:34.923 { 00:17:34.923 "method": "fsdev_set_opts", 00:17:34.923 "params": { 00:17:34.923 "fsdev_io_pool_size": 65535, 00:17:34.923 "fsdev_io_cache_size": 256 00:17:34.923 } 00:17:34.923 } 00:17:34.923 ] 00:17:34.923 }, 00:17:34.923 { 00:17:34.923 "subsystem": "keyring", 00:17:34.923 "config": [] 00:17:34.923 }, 00:17:34.923 { 00:17:34.923 "subsystem": "iobuf", 00:17:34.923 "config": [ 00:17:34.923 { 00:17:34.923 "method": "iobuf_set_options", 00:17:34.923 "params": { 00:17:34.923 "small_pool_count": 8192, 00:17:34.923 "large_pool_count": 1024, 00:17:34.923 "small_bufsize": 8192, 00:17:34.923 "large_bufsize": 135168 00:17:34.923 } 00:17:34.923 } 00:17:34.923 ] 00:17:34.923 }, 00:17:34.923 { 00:17:34.923 "subsystem": "sock", 00:17:34.923 "config": [ 00:17:34.923 { 00:17:34.923 "method": 
"sock_set_default_impl", 00:17:34.923 "params": { 00:17:34.923 "impl_name": "posix" 00:17:34.923 } 00:17:34.923 }, 00:17:34.923 { 00:17:34.923 "method": "sock_impl_set_options", 00:17:34.923 "params": { 00:17:34.923 "impl_name": "ssl", 00:17:34.923 "recv_buf_size": 4096, 00:17:34.923 "send_buf_size": 4096, 00:17:34.923 "enable_recv_pipe": true, 00:17:34.923 "enable_quickack": false, 00:17:34.923 "enable_placement_id": 0, 00:17:34.923 "enable_zerocopy_send_server": true, 00:17:34.923 "enable_zerocopy_send_client": false, 00:17:34.923 "zerocopy_threshold": 0, 00:17:34.923 "tls_version": 0, 00:17:34.923 "enable_ktls": false 00:17:34.923 } 00:17:34.923 }, 00:17:34.923 { 00:17:34.923 "method": "sock_impl_set_options", 00:17:34.923 "params": { 00:17:34.923 "impl_name": "posix", 00:17:34.923 "recv_buf_size": 2097152, 00:17:34.923 "send_buf_size": 2097152, 00:17:34.923 "enable_recv_pipe": true, 00:17:34.923 "enable_quickack": false, 00:17:34.923 "enable_placement_id": 0, 00:17:34.923 "enable_zerocopy_send_server": true, 00:17:34.923 "enable_zerocopy_send_client": false, 00:17:34.923 "zerocopy_threshold": 0, 00:17:34.923 "tls_version": 0, 00:17:34.923 "enable_ktls": false 00:17:34.923 } 00:17:34.923 } 00:17:34.923 ] 00:17:34.923 }, 00:17:34.923 { 00:17:34.923 "subsystem": "vmd", 00:17:34.923 "config": [] 00:17:34.923 }, 00:17:34.923 { 00:17:34.923 "subsystem": "accel", 00:17:34.923 "config": [ 00:17:34.923 { 00:17:34.923 "method": "accel_set_options", 00:17:34.923 "params": { 00:17:34.923 "small_cache_size": 128, 00:17:34.923 "large_cache_size": 16, 00:17:34.923 "task_count": 2048, 00:17:34.923 "sequence_count": 2048, 00:17:34.923 "buf_count": 2048 00:17:34.923 } 00:17:34.924 } 00:17:34.924 ] 00:17:34.924 }, 00:17:34.924 { 00:17:34.924 "subsystem": "bdev", 00:17:34.924 "config": [ 00:17:34.924 { 00:17:34.924 "method": "bdev_set_options", 00:17:34.924 "params": { 00:17:34.924 "bdev_io_pool_size": 65535, 00:17:34.924 "bdev_io_cache_size": 256, 00:17:34.924 "bdev_auto_examine": true, 00:17:34.924 "iobuf_small_cache_size": 128, 00:17:34.924 "iobuf_large_cache_size": 16, 00:17:34.924 "bdev_io_stack_size": 4096 00:17:34.924 } 00:17:34.924 }, 00:17:34.924 { 00:17:34.924 "method": "bdev_raid_set_options", 00:17:34.924 "params": { 00:17:34.924 "process_window_size_kb": 1024, 00:17:34.924 "process_max_bandwidth_mb_sec": 0 00:17:34.924 } 00:17:34.924 }, 00:17:34.924 { 00:17:34.924 "method": "bdev_iscsi_set_options", 00:17:34.924 "params": { 00:17:34.924 "timeout_sec": 30 00:17:34.924 } 00:17:34.924 }, 00:17:34.924 { 00:17:34.924 "method": "bdev_nvme_set_options", 00:17:34.924 "params": { 00:17:34.924 "action_on_timeout": "none", 00:17:34.924 "timeout_us": 0, 00:17:34.924 "timeout_admin_us": 0, 00:17:34.924 "keep_alive_timeout_ms": 10000, 00:17:34.924 "arbitration_burst": 0, 00:17:34.924 "low_priority_weight": 0, 00:17:34.924 "medium_priority_weight": 0, 00:17:34.924 "high_priority_weight": 0, 00:17:34.924 "nvme_adminq_poll_period_us": 10000, 00:17:34.924 "nvme_ioq_poll_period_us": 0, 00:17:34.924 "io_queue_requests": 0, 00:17:34.924 "delay_cmd_submit": true, 00:17:34.924 "transport_retry_count": 4, 00:17:34.924 "bdev_retry_count": 3, 00:17:34.924 "transport_ack_timeout": 0, 00:17:34.924 "ctrlr_loss_timeout_sec": 0, 00:17:34.924 "reconnect_delay_sec": 0, 00:17:34.924 "fast_io_fail_timeout_sec": 0, 00:17:34.924 "disable_auto_failback": false, 00:17:34.924 "generate_uuids": false, 00:17:34.924 "transport_tos": 0, 00:17:34.924 "nvme_error_stat": false, 00:17:34.924 "rdma_srq_size": 0, 00:17:34.924 "io_path_stat": 
false, 00:17:34.924 "allow_accel_sequence": false, 00:17:34.924 "rdma_max_cq_size": 0, 00:17:34.924 "rdma_cm_event_timeout_ms": 0, 00:17:34.924 "dhchap_digests": [ 00:17:34.924 "sha256", 00:17:34.924 "sha384", 00:17:34.924 "sha512" 00:17:34.924 ], 00:17:34.924 "dhchap_dhgroups": [ 00:17:34.924 "null", 00:17:34.924 "ffdhe2048", 00:17:34.924 "ffdhe3072", 00:17:34.924 "ffdhe4096", 00:17:34.924 "ffdhe6144", 00:17:34.924 "ffdhe8192" 00:17:34.924 ] 00:17:34.924 } 00:17:34.924 }, 00:17:34.924 { 00:17:34.924 "method": "bdev_nvme_set_hotplug", 00:17:34.924 "params": { 00:17:34.924 "period_us": 100000, 00:17:34.924 "enable": false 00:17:34.924 } 00:17:34.924 }, 00:17:34.924 { 00:17:34.924 "method": "bdev_wait_for_examine" 00:17:34.924 } 00:17:34.924 ] 00:17:34.924 }, 00:17:34.924 { 00:17:34.924 "subsystem": "scsi", 00:17:34.924 "config": null 00:17:34.924 }, 00:17:34.924 { 00:17:34.924 "subsystem": "scheduler", 00:17:34.924 "config": [ 00:17:34.924 { 00:17:34.924 "method": "framework_set_scheduler", 00:17:34.924 "params": { 00:17:34.924 "name": "static" 00:17:34.924 } 00:17:34.924 } 00:17:34.924 ] 00:17:34.924 }, 00:17:34.924 { 00:17:34.924 "subsystem": "vhost_scsi", 00:17:34.924 "config": [] 00:17:34.924 }, 00:17:34.924 { 00:17:34.924 "subsystem": "vhost_blk", 00:17:34.924 "config": [] 00:17:34.924 }, 00:17:34.924 { 00:17:34.924 "subsystem": "ublk", 00:17:34.924 "config": [] 00:17:34.924 }, 00:17:34.924 { 00:17:34.924 "subsystem": "nbd", 00:17:34.924 "config": [] 00:17:34.924 }, 00:17:34.924 { 00:17:34.924 "subsystem": "nvmf", 00:17:34.924 "config": [ 00:17:34.924 { 00:17:34.924 "method": "nvmf_set_config", 00:17:34.924 "params": { 00:17:34.924 "discovery_filter": "match_any", 00:17:34.924 "admin_cmd_passthru": { 00:17:34.924 "identify_ctrlr": false 00:17:34.924 }, 00:17:34.924 "dhchap_digests": [ 00:17:34.924 "sha256", 00:17:34.924 "sha384", 00:17:34.924 "sha512" 00:17:34.924 ], 00:17:34.924 "dhchap_dhgroups": [ 00:17:34.924 "null", 00:17:34.924 "ffdhe2048", 00:17:34.924 "ffdhe3072", 00:17:34.924 "ffdhe4096", 00:17:34.924 "ffdhe6144", 00:17:34.924 "ffdhe8192" 00:17:34.924 ] 00:17:34.924 } 00:17:34.924 }, 00:17:34.924 { 00:17:34.924 "method": "nvmf_set_max_subsystems", 00:17:34.924 "params": { 00:17:34.924 "max_subsystems": 1024 00:17:34.924 } 00:17:34.924 }, 00:17:34.924 { 00:17:34.924 "method": "nvmf_set_crdt", 00:17:34.924 "params": { 00:17:34.924 "crdt1": 0, 00:17:34.924 "crdt2": 0, 00:17:34.924 "crdt3": 0 00:17:34.924 } 00:17:34.924 }, 00:17:34.924 { 00:17:34.924 "method": "nvmf_create_transport", 00:17:34.924 "params": { 00:17:34.924 "trtype": "TCP", 00:17:34.924 "max_queue_depth": 128, 00:17:34.924 "max_io_qpairs_per_ctrlr": 127, 00:17:34.924 "in_capsule_data_size": 4096, 00:17:34.924 "max_io_size": 131072, 00:17:34.924 "io_unit_size": 131072, 00:17:34.924 "max_aq_depth": 128, 00:17:34.924 "num_shared_buffers": 511, 00:17:34.924 "buf_cache_size": 4294967295, 00:17:34.924 "dif_insert_or_strip": false, 00:17:34.924 "zcopy": false, 00:17:34.924 "c2h_success": true, 00:17:34.924 "sock_priority": 0, 00:17:34.924 "abort_timeout_sec": 1, 00:17:34.924 "ack_timeout": 0, 00:17:34.924 "data_wr_pool_size": 0 00:17:34.924 } 00:17:34.924 } 00:17:34.924 ] 00:17:34.924 }, 00:17:34.924 { 00:17:34.924 "subsystem": "iscsi", 00:17:34.924 "config": [ 00:17:34.925 { 00:17:34.925 "method": "iscsi_set_options", 00:17:34.925 "params": { 00:17:34.925 "node_base": "iqn.2016-06.io.spdk", 00:17:34.925 "max_sessions": 128, 00:17:34.925 "max_connections_per_session": 2, 00:17:34.925 "max_queue_depth": 64, 00:17:34.925 
"default_time2wait": 2, 00:17:34.925 "default_time2retain": 20, 00:17:34.925 "first_burst_length": 8192, 00:17:34.925 "immediate_data": true, 00:17:34.925 "allow_duplicated_isid": false, 00:17:34.925 "error_recovery_level": 0, 00:17:34.925 "nop_timeout": 60, 00:17:34.925 "nop_in_interval": 30, 00:17:34.925 "disable_chap": false, 00:17:34.925 "require_chap": false, 00:17:34.925 "mutual_chap": false, 00:17:34.925 "chap_group": 0, 00:17:34.925 "max_large_datain_per_connection": 64, 00:17:34.925 "max_r2t_per_connection": 4, 00:17:34.925 "pdu_pool_size": 36864, 00:17:34.925 "immediate_data_pool_size": 16384, 00:17:34.925 "data_out_pool_size": 2048 00:17:34.925 } 00:17:34.925 } 00:17:34.925 ] 00:17:34.925 } 00:17:34.925 ] 00:17:34.925 } 00:17:34.925 20:15:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:17:34.925 20:15:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 57766 00:17:34.925 20:15:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 57766 ']' 00:17:34.925 20:15:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 57766 00:17:34.925 20:15:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:17:34.925 20:15:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:34.925 20:15:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57766 00:17:34.925 killing process with pid 57766 00:17:34.925 20:15:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:34.925 20:15:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:34.925 20:15:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57766' 00:17:34.925 20:15:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 57766 00:17:34.925 20:15:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 57766 00:17:36.819 20:15:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57811 00:17:36.819 20:15:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:17:36.819 20:15:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:17:42.078 20:15:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57811 00:17:42.078 20:15:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 57811 ']' 00:17:42.078 20:15:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 57811 00:17:42.078 20:15:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:17:42.078 20:15:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:42.078 20:15:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57811 00:17:42.078 killing process with pid 57811 00:17:42.078 20:15:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:42.078 20:15:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:42.078 20:15:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57811' 00:17:42.078 20:15:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- 
# kill 57811 00:17:42.078 20:15:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 57811 00:17:43.975 20:15:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:17:43.975 20:15:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:17:43.975 ************************************ 00:17:43.975 END TEST skip_rpc_with_json 00:17:43.975 ************************************ 00:17:43.975 00:17:43.975 real 0m10.363s 00:17:43.975 user 0m9.741s 00:17:43.975 sys 0m0.776s 00:17:43.975 20:15:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:43.975 20:15:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:17:43.975 20:15:39 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:17:43.975 20:15:39 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:17:43.975 20:15:39 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:43.975 20:15:39 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:43.975 ************************************ 00:17:43.975 START TEST skip_rpc_with_delay 00:17:43.975 ************************************ 00:17:43.975 20:15:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:17:43.975 20:15:39 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:17:43.975 20:15:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:17:43.975 20:15:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:17:43.975 20:15:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:43.975 20:15:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:43.975 20:15:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:43.975 20:15:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:43.975 20:15:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:43.975 20:15:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:43.975 20:15:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:43.975 20:15:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:17:43.975 20:15:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:17:44.233 [2024-10-01 20:15:39.247148] app.c: 840:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:17:44.233 [2024-10-01 20:15:39.247277] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:17:44.233 20:15:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:17:44.233 20:15:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:44.233 20:15:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:44.233 20:15:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:44.233 00:17:44.233 real 0m0.127s 00:17:44.233 user 0m0.069s 00:17:44.233 sys 0m0.057s 00:17:44.233 20:15:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:44.233 20:15:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:17:44.233 ************************************ 00:17:44.233 END TEST skip_rpc_with_delay 00:17:44.233 ************************************ 00:17:44.233 20:15:39 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:17:44.233 20:15:39 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:17:44.233 20:15:39 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:17:44.233 20:15:39 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:17:44.233 20:15:39 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:44.233 20:15:39 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:44.233 ************************************ 00:17:44.233 START TEST exit_on_failed_rpc_init 00:17:44.233 ************************************ 00:17:44.233 20:15:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:17:44.233 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:44.233 20:15:39 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57939 00:17:44.233 20:15:39 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57939 00:17:44.233 20:15:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 57939 ']' 00:17:44.233 20:15:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:44.233 20:15:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:44.233 20:15:39 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:17:44.233 20:15:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:44.233 20:15:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:44.233 20:15:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:17:44.233 [2024-10-01 20:15:39.414514] Starting SPDK v25.01-pre git sha1 0c2005fb5 / DPDK 24.03.0 initialization... 
00:17:44.233 [2024-10-01 20:15:39.414649] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57939 ] 00:17:44.491 [2024-10-01 20:15:39.562963] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:44.748 [2024-10-01 20:15:39.724887] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:45.313 20:15:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:45.313 20:15:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:17:45.313 20:15:40 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:17:45.313 20:15:40 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:17:45.313 20:15:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:17:45.313 20:15:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:17:45.313 20:15:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:45.313 20:15:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:45.313 20:15:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:45.313 20:15:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:45.313 20:15:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:45.313 20:15:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:45.313 20:15:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:45.313 20:15:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:17:45.313 20:15:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:17:45.313 [2024-10-01 20:15:40.457478] Starting SPDK v25.01-pre git sha1 0c2005fb5 / DPDK 24.03.0 initialization... 00:17:45.313 [2024-10-01 20:15:40.457610] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57957 ] 00:17:45.571 [2024-10-01 20:15:40.615163] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:45.829 [2024-10-01 20:15:40.826749] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:17:45.829 [2024-10-01 20:15:40.826842] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:17:45.829 [2024-10-01 20:15:40.826856] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:17:45.829 [2024-10-01 20:15:40.826866] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:17:46.086 20:15:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:17:46.086 20:15:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:46.086 20:15:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:17:46.086 20:15:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:17:46.086 20:15:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:17:46.086 20:15:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:46.086 20:15:41 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:17:46.086 20:15:41 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57939 00:17:46.086 20:15:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 57939 ']' 00:17:46.086 20:15:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 57939 00:17:46.086 20:15:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:17:46.086 20:15:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:46.086 20:15:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57939 00:17:46.086 killing process with pid 57939 00:17:46.086 20:15:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:46.086 20:15:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:46.086 20:15:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57939' 00:17:46.086 20:15:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 57939 00:17:46.086 20:15:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 57939 00:17:47.985 ************************************ 00:17:47.985 END TEST exit_on_failed_rpc_init 00:17:47.985 00:17:47.985 real 0m3.458s 00:17:47.985 user 0m3.917s 00:17:47.985 sys 0m0.495s 00:17:47.985 20:15:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:47.985 20:15:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:17:47.985 ************************************ 00:17:47.985 20:15:42 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:17:47.985 00:17:47.985 real 0m20.893s 00:17:47.985 user 0m20.058s 00:17:47.985 sys 0m1.814s 00:17:47.985 20:15:42 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:47.985 ************************************ 00:17:47.985 END TEST skip_rpc 00:17:47.985 ************************************ 00:17:47.985 20:15:42 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:47.985 20:15:42 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:17:47.985 20:15:42 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:17:47.985 20:15:42 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:47.985 20:15:42 -- common/autotest_common.sh@10 -- # set +x 00:17:47.985 
************************************ 00:17:47.985 START TEST rpc_client 00:17:47.985 ************************************ 00:17:47.985 20:15:42 rpc_client -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:17:47.985 * Looking for test storage... 00:17:47.985 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:17:47.985 20:15:42 rpc_client -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:17:47.985 20:15:42 rpc_client -- common/autotest_common.sh@1681 -- # lcov --version 00:17:47.985 20:15:42 rpc_client -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:17:47.985 20:15:42 rpc_client -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:17:47.985 20:15:42 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:47.985 20:15:42 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:47.985 20:15:42 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:47.985 20:15:42 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:17:47.985 20:15:42 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:17:47.985 20:15:42 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:17:47.985 20:15:42 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:17:47.985 20:15:42 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:17:47.985 20:15:42 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:17:47.985 20:15:42 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:17:47.985 20:15:42 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:47.985 20:15:42 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:17:47.985 20:15:42 rpc_client -- scripts/common.sh@345 -- # : 1 00:17:47.985 20:15:42 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:47.985 20:15:42 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:47.985 20:15:42 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:17:47.985 20:15:42 rpc_client -- scripts/common.sh@353 -- # local d=1 00:17:47.985 20:15:42 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:47.985 20:15:42 rpc_client -- scripts/common.sh@355 -- # echo 1 00:17:47.985 20:15:43 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:17:47.985 20:15:43 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:17:47.985 20:15:43 rpc_client -- scripts/common.sh@353 -- # local d=2 00:17:47.985 20:15:43 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:47.985 20:15:43 rpc_client -- scripts/common.sh@355 -- # echo 2 00:17:47.985 20:15:43 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:17:47.985 20:15:43 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:47.985 20:15:43 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:47.985 20:15:43 rpc_client -- scripts/common.sh@368 -- # return 0 00:17:47.985 20:15:43 rpc_client -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:47.985 20:15:43 rpc_client -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:17:47.985 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:47.985 --rc genhtml_branch_coverage=1 00:17:47.985 --rc genhtml_function_coverage=1 00:17:47.985 --rc genhtml_legend=1 00:17:47.985 --rc geninfo_all_blocks=1 00:17:47.985 --rc geninfo_unexecuted_blocks=1 00:17:47.985 00:17:47.985 ' 00:17:47.985 20:15:43 rpc_client -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:17:47.985 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:47.985 --rc genhtml_branch_coverage=1 00:17:47.985 --rc genhtml_function_coverage=1 00:17:47.985 --rc genhtml_legend=1 00:17:47.985 --rc geninfo_all_blocks=1 00:17:47.985 --rc geninfo_unexecuted_blocks=1 00:17:47.985 00:17:47.985 ' 00:17:47.985 20:15:43 rpc_client -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:17:47.985 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:47.985 --rc genhtml_branch_coverage=1 00:17:47.985 --rc genhtml_function_coverage=1 00:17:47.985 --rc genhtml_legend=1 00:17:47.985 --rc geninfo_all_blocks=1 00:17:47.985 --rc geninfo_unexecuted_blocks=1 00:17:47.985 00:17:47.985 ' 00:17:47.985 20:15:43 rpc_client -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:17:47.985 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:47.985 --rc genhtml_branch_coverage=1 00:17:47.985 --rc genhtml_function_coverage=1 00:17:47.985 --rc genhtml_legend=1 00:17:47.985 --rc geninfo_all_blocks=1 00:17:47.985 --rc geninfo_unexecuted_blocks=1 00:17:47.985 00:17:47.985 ' 00:17:47.985 20:15:43 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:17:47.985 OK 00:17:47.985 20:15:43 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:17:47.985 00:17:47.985 real 0m0.194s 00:17:47.985 user 0m0.123s 00:17:47.985 sys 0m0.079s 00:17:47.985 20:15:43 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:47.985 20:15:43 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:17:47.985 ************************************ 00:17:47.985 END TEST rpc_client 00:17:47.985 ************************************ 00:17:47.985 20:15:43 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:17:47.985 20:15:43 -- 
common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:17:47.985 20:15:43 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:47.985 20:15:43 -- common/autotest_common.sh@10 -- # set +x 00:17:47.985 ************************************ 00:17:47.985 START TEST json_config 00:17:47.985 ************************************ 00:17:47.985 20:15:43 json_config -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:17:47.985 20:15:43 json_config -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:17:47.985 20:15:43 json_config -- common/autotest_common.sh@1681 -- # lcov --version 00:17:47.985 20:15:43 json_config -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:17:48.245 20:15:43 json_config -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:17:48.245 20:15:43 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:48.245 20:15:43 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:48.245 20:15:43 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:48.245 20:15:43 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:17:48.245 20:15:43 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:17:48.245 20:15:43 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:17:48.245 20:15:43 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:17:48.245 20:15:43 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:17:48.245 20:15:43 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:17:48.245 20:15:43 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:17:48.245 20:15:43 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:48.245 20:15:43 json_config -- scripts/common.sh@344 -- # case "$op" in 00:17:48.246 20:15:43 json_config -- scripts/common.sh@345 -- # : 1 00:17:48.246 20:15:43 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:48.246 20:15:43 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:48.246 20:15:43 json_config -- scripts/common.sh@365 -- # decimal 1 00:17:48.246 20:15:43 json_config -- scripts/common.sh@353 -- # local d=1 00:17:48.246 20:15:43 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:48.246 20:15:43 json_config -- scripts/common.sh@355 -- # echo 1 00:17:48.246 20:15:43 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:17:48.246 20:15:43 json_config -- scripts/common.sh@366 -- # decimal 2 00:17:48.246 20:15:43 json_config -- scripts/common.sh@353 -- # local d=2 00:17:48.246 20:15:43 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:48.246 20:15:43 json_config -- scripts/common.sh@355 -- # echo 2 00:17:48.246 20:15:43 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:17:48.246 20:15:43 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:48.246 20:15:43 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:48.246 20:15:43 json_config -- scripts/common.sh@368 -- # return 0 00:17:48.246 20:15:43 json_config -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:48.246 20:15:43 json_config -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:17:48.246 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:48.246 --rc genhtml_branch_coverage=1 00:17:48.246 --rc genhtml_function_coverage=1 00:17:48.246 --rc genhtml_legend=1 00:17:48.246 --rc geninfo_all_blocks=1 00:17:48.246 --rc geninfo_unexecuted_blocks=1 00:17:48.246 00:17:48.246 ' 00:17:48.246 20:15:43 json_config -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:17:48.246 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:48.246 --rc genhtml_branch_coverage=1 00:17:48.246 --rc genhtml_function_coverage=1 00:17:48.246 --rc genhtml_legend=1 00:17:48.246 --rc geninfo_all_blocks=1 00:17:48.246 --rc geninfo_unexecuted_blocks=1 00:17:48.246 00:17:48.246 ' 00:17:48.246 20:15:43 json_config -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:17:48.246 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:48.246 --rc genhtml_branch_coverage=1 00:17:48.246 --rc genhtml_function_coverage=1 00:17:48.246 --rc genhtml_legend=1 00:17:48.246 --rc geninfo_all_blocks=1 00:17:48.246 --rc geninfo_unexecuted_blocks=1 00:17:48.246 00:17:48.246 ' 00:17:48.246 20:15:43 json_config -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:17:48.246 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:48.246 --rc genhtml_branch_coverage=1 00:17:48.246 --rc genhtml_function_coverage=1 00:17:48.246 --rc genhtml_legend=1 00:17:48.246 --rc geninfo_all_blocks=1 00:17:48.246 --rc geninfo_unexecuted_blocks=1 00:17:48.246 00:17:48.246 ' 00:17:48.246 20:15:43 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:48.246 20:15:43 json_config -- nvmf/common.sh@7 -- # uname -s 00:17:48.246 20:15:43 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:48.246 20:15:43 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:48.246 20:15:43 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:48.246 20:15:43 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:48.246 20:15:43 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:48.246 20:15:43 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:48.246 20:15:43 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:48.246 20:15:43 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:48.246 20:15:43 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:48.246 20:15:43 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:48.246 20:15:43 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:aebd319b-9926-43bc-9bfe-64775317188f 00:17:48.246 20:15:43 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=aebd319b-9926-43bc-9bfe-64775317188f 00:17:48.246 20:15:43 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:48.246 20:15:43 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:48.246 20:15:43 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:17:48.246 20:15:43 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:48.246 20:15:43 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:48.246 20:15:43 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:17:48.246 20:15:43 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:48.246 20:15:43 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:48.246 20:15:43 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:48.246 20:15:43 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:48.246 20:15:43 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:48.246 20:15:43 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:48.246 20:15:43 json_config -- paths/export.sh@5 -- # export PATH 00:17:48.246 20:15:43 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:48.246 20:15:43 json_config -- nvmf/common.sh@51 -- # : 0 00:17:48.246 20:15:43 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:48.246 20:15:43 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:48.246 20:15:43 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:48.246 20:15:43 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:48.246 20:15:43 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:48.246 20:15:43 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:48.246 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:48.246 20:15:43 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:48.246 20:15:43 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:48.246 20:15:43 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:48.246 20:15:43 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:17:48.246 20:15:43 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:17:48.246 20:15:43 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:17:48.246 20:15:43 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:17:48.246 20:15:43 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:17:48.246 20:15:43 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:17:48.246 WARNING: No tests are enabled so not running JSON configuration tests 00:17:48.246 20:15:43 json_config -- json_config/json_config.sh@28 -- # exit 0 00:17:48.246 00:17:48.246 real 0m0.134s 00:17:48.246 user 0m0.081s 00:17:48.246 sys 0m0.055s 00:17:48.246 20:15:43 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:48.246 20:15:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:17:48.246 ************************************ 00:17:48.246 END TEST json_config 00:17:48.246 ************************************ 00:17:48.246 20:15:43 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:17:48.246 20:15:43 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:17:48.246 20:15:43 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:48.246 20:15:43 -- common/autotest_common.sh@10 -- # set +x 00:17:48.246 ************************************ 00:17:48.246 START TEST json_config_extra_key 00:17:48.246 ************************************ 00:17:48.246 20:15:43 json_config_extra_key -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:17:48.246 20:15:43 json_config_extra_key -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:17:48.246 20:15:43 json_config_extra_key -- common/autotest_common.sh@1681 -- # lcov --version 00:17:48.246 20:15:43 json_config_extra_key -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:17:48.246 20:15:43 json_config_extra_key -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:17:48.246 20:15:43 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:48.246 20:15:43 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:48.246 20:15:43 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:48.246 20:15:43 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:17:48.246 20:15:43 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:17:48.246 20:15:43 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:17:48.246 20:15:43 
json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:17:48.246 20:15:43 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:17:48.246 20:15:43 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:17:48.246 20:15:43 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:17:48.246 20:15:43 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:48.246 20:15:43 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:17:48.247 20:15:43 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:17:48.247 20:15:43 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:48.247 20:15:43 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:48.247 20:15:43 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:17:48.247 20:15:43 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:17:48.247 20:15:43 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:48.247 20:15:43 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:17:48.247 20:15:43 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:17:48.247 20:15:43 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:17:48.247 20:15:43 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:17:48.247 20:15:43 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:48.247 20:15:43 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:17:48.247 20:15:43 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:17:48.247 20:15:43 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:48.247 20:15:43 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:48.247 20:15:43 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:17:48.247 20:15:43 json_config_extra_key -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:48.247 20:15:43 json_config_extra_key -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:17:48.247 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:48.247 --rc genhtml_branch_coverage=1 00:17:48.247 --rc genhtml_function_coverage=1 00:17:48.247 --rc genhtml_legend=1 00:17:48.247 --rc geninfo_all_blocks=1 00:17:48.247 --rc geninfo_unexecuted_blocks=1 00:17:48.247 00:17:48.247 ' 00:17:48.247 20:15:43 json_config_extra_key -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:17:48.247 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:48.247 --rc genhtml_branch_coverage=1 00:17:48.247 --rc genhtml_function_coverage=1 00:17:48.247 --rc genhtml_legend=1 00:17:48.247 --rc geninfo_all_blocks=1 00:17:48.247 --rc geninfo_unexecuted_blocks=1 00:17:48.247 00:17:48.247 ' 00:17:48.247 20:15:43 json_config_extra_key -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:17:48.247 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:48.247 --rc genhtml_branch_coverage=1 00:17:48.247 --rc genhtml_function_coverage=1 00:17:48.247 --rc genhtml_legend=1 00:17:48.247 --rc geninfo_all_blocks=1 00:17:48.247 --rc geninfo_unexecuted_blocks=1 00:17:48.247 00:17:48.247 ' 00:17:48.247 20:15:43 json_config_extra_key -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:17:48.247 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:48.247 --rc genhtml_branch_coverage=1 00:17:48.247 --rc 
genhtml_function_coverage=1 00:17:48.247 --rc genhtml_legend=1 00:17:48.247 --rc geninfo_all_blocks=1 00:17:48.247 --rc geninfo_unexecuted_blocks=1 00:17:48.247 00:17:48.247 ' 00:17:48.247 20:15:43 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:48.247 20:15:43 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:17:48.247 20:15:43 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:48.247 20:15:43 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:48.247 20:15:43 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:48.247 20:15:43 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:48.247 20:15:43 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:48.247 20:15:43 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:48.247 20:15:43 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:48.247 20:15:43 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:48.247 20:15:43 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:48.247 20:15:43 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:48.247 20:15:43 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:aebd319b-9926-43bc-9bfe-64775317188f 00:17:48.247 20:15:43 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=aebd319b-9926-43bc-9bfe-64775317188f 00:17:48.247 20:15:43 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:48.247 20:15:43 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:48.247 20:15:43 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:17:48.247 20:15:43 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:48.247 20:15:43 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:48.247 20:15:43 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:17:48.247 20:15:43 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:48.247 20:15:43 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:48.247 20:15:43 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:48.247 20:15:43 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:48.247 20:15:43 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:48.247 20:15:43 json_config_extra_key -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:48.247 20:15:43 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:17:48.247 20:15:43 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:48.247 20:15:43 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:17:48.247 20:15:43 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:48.247 20:15:43 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:48.247 20:15:43 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:48.247 20:15:43 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:48.247 20:15:43 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:48.247 20:15:43 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:48.247 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:48.247 20:15:43 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:48.247 20:15:43 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:48.247 20:15:43 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:48.247 20:15:43 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:17:48.247 20:15:43 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:17:48.247 20:15:43 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:17:48.247 20:15:43 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:17:48.247 20:15:43 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:17:48.247 20:15:43 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1536') 00:17:48.247 20:15:43 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:17:48.247 20:15:43 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:17:48.247 20:15:43 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:17:48.247 20:15:43 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:17:48.247 20:15:43 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:17:48.247 INFO: launching applications... 
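The json_config_test_start_app trace that follows boots spdk_tgt with the extra_key.json config on a private RPC socket and then waits for that socket (waitforlisten, max_retries=100 in the trace). A minimal bash sketch of that launch-and-wait pattern; the helper name and the plain socket-existence test are illustrative assumptions, not the exact common.sh implementation, which polls through rpc.py:

    # Launch the target with a JSON config and poll for its RPC socket.
    start_target_and_wait() {
        local sock=/var/tmp/spdk_tgt.sock
        /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1536 \
            -r "$sock" --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json &
        local pid=$!
        for ((i = 0; i < 100; i++)); do
            [[ -S "$sock" ]] && return 0            # socket exists: target is listening
            kill -0 "$pid" 2>/dev/null || return 1  # target exited before coming up
            sleep 0.1
        done
        return 1                                    # timed out waiting for the socket
    }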
00:17:48.247 20:15:43 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:17:48.247 20:15:43 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:17:48.247 20:15:43 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:17:48.247 20:15:43 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:17:48.247 20:15:43 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:17:48.247 20:15:43 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:17:48.247 20:15:43 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:17:48.247 20:15:43 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:17:48.247 20:15:43 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=58156 00:17:48.247 20:15:43 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:17:48.247 Waiting for target to run... 00:17:48.247 20:15:43 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 58156 /var/tmp/spdk_tgt.sock 00:17:48.247 20:15:43 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 58156 ']' 00:17:48.247 20:15:43 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1536 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:17:48.247 20:15:43 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:17:48.248 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:17:48.248 20:15:43 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:48.248 20:15:43 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:17:48.248 20:15:43 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:48.248 20:15:43 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:17:48.506 [2024-10-01 20:15:43.500429] Starting SPDK v25.01-pre git sha1 0c2005fb5 / DPDK 24.03.0 initialization... 00:17:48.506 [2024-10-01 20:15:43.500716] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1536 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58156 ] 00:17:48.764 [2024-10-01 20:15:43.909741] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:49.023 [2024-10-01 20:15:44.102858] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:49.588 00:17:49.588 INFO: shutting down applications... 00:17:49.588 20:15:44 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:49.588 20:15:44 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:17:49.588 20:15:44 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:17:49.588 20:15:44 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
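The shutdown sequence traced below (json_config/common.sh lines 38-45) is a poll-after-SIGINT loop: SIGINT is sent once, then kill -0 (which only tests whether the PID still exists, delivering no signal) is retried every half second for at most 30 iterations. Condensed to its core:

    kill -SIGINT "$pid"                        # ask the target to exit cleanly
    for ((i = 0; i < 30; i++)); do
        kill -0 "$pid" 2>/dev/null || break    # PID gone: shutdown done
        sleep 0.5
    done

Each "sleep 0.5" iteration visible in the trace below is one pass through this loop; the test reports Success once kill -0 finally fails.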
00:17:49.588 20:15:44 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:17:49.588 20:15:44 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:17:49.588 20:15:44 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:17:49.588 20:15:44 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 58156 ]] 00:17:49.588 20:15:44 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 58156 00:17:49.588 20:15:44 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:17:49.588 20:15:44 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:17:49.588 20:15:44 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58156 00:17:49.588 20:15:44 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:17:50.153 20:15:45 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:17:50.153 20:15:45 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:17:50.153 20:15:45 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58156 00:17:50.153 20:15:45 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:17:50.719 20:15:45 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:17:50.719 20:15:45 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:17:50.719 20:15:45 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58156 00:17:50.719 20:15:45 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:17:51.281 20:15:46 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:17:51.281 20:15:46 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:17:51.282 20:15:46 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58156 00:17:51.282 20:15:46 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:17:51.553 20:15:46 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:17:51.553 20:15:46 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:17:51.553 20:15:46 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58156 00:17:51.553 20:15:46 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:17:52.143 20:15:47 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:17:52.143 20:15:47 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:17:52.143 20:15:47 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58156 00:17:52.143 20:15:47 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:17:52.708 20:15:47 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:17:52.708 20:15:47 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:17:52.708 SPDK target shutdown done 00:17:52.708 Success 00:17:52.708 20:15:47 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58156 00:17:52.708 20:15:47 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:17:52.708 20:15:47 json_config_extra_key -- json_config/common.sh@43 -- # break 00:17:52.708 20:15:47 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:17:52.708 20:15:47 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:17:52.708 20:15:47 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:17:52.708 00:17:52.708 real 0m4.493s 00:17:52.708 user 0m3.875s 00:17:52.708 sys 0m0.536s 00:17:52.708 
************************************ 00:17:52.708 END TEST json_config_extra_key 00:17:52.708 ************************************ 00:17:52.708 20:15:47 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:52.709 20:15:47 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:17:52.709 20:15:47 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:17:52.709 20:15:47 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:17:52.709 20:15:47 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:52.709 20:15:47 -- common/autotest_common.sh@10 -- # set +x 00:17:52.709 ************************************ 00:17:52.709 START TEST alias_rpc 00:17:52.709 ************************************ 00:17:52.709 20:15:47 alias_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:17:52.709 * Looking for test storage... 00:17:52.709 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:17:52.709 20:15:47 alias_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:17:52.709 20:15:47 alias_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:17:52.709 20:15:47 alias_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:17:52.709 20:15:47 alias_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:17:52.709 20:15:47 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:52.709 20:15:47 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:52.709 20:15:47 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:52.709 20:15:47 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:17:52.709 20:15:47 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:17:52.709 20:15:47 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:17:52.709 20:15:47 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:17:52.709 20:15:47 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:17:52.709 20:15:47 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:17:52.709 20:15:47 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:17:52.709 20:15:47 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:52.709 20:15:47 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:17:52.709 20:15:47 alias_rpc -- scripts/common.sh@345 -- # : 1 00:17:52.709 20:15:47 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:52.709 20:15:47 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:52.966 20:15:47 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:17:52.966 20:15:47 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:17:52.966 20:15:47 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:52.966 20:15:47 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:17:52.966 20:15:47 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:17:52.966 20:15:47 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:17:52.966 20:15:47 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:17:52.966 20:15:47 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:52.966 20:15:47 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:17:52.967 20:15:47 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:17:52.967 20:15:47 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:52.967 20:15:47 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:52.967 20:15:47 alias_rpc -- scripts/common.sh@368 -- # return 0 00:17:52.967 20:15:47 alias_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:52.967 20:15:47 alias_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:17:52.967 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:52.967 --rc genhtml_branch_coverage=1 00:17:52.967 --rc genhtml_function_coverage=1 00:17:52.967 --rc genhtml_legend=1 00:17:52.967 --rc geninfo_all_blocks=1 00:17:52.967 --rc geninfo_unexecuted_blocks=1 00:17:52.967 00:17:52.967 ' 00:17:52.967 20:15:47 alias_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:17:52.967 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:52.967 --rc genhtml_branch_coverage=1 00:17:52.967 --rc genhtml_function_coverage=1 00:17:52.967 --rc genhtml_legend=1 00:17:52.967 --rc geninfo_all_blocks=1 00:17:52.967 --rc geninfo_unexecuted_blocks=1 00:17:52.967 00:17:52.967 ' 00:17:52.967 20:15:47 alias_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:17:52.967 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:52.967 --rc genhtml_branch_coverage=1 00:17:52.967 --rc genhtml_function_coverage=1 00:17:52.967 --rc genhtml_legend=1 00:17:52.967 --rc geninfo_all_blocks=1 00:17:52.967 --rc geninfo_unexecuted_blocks=1 00:17:52.967 00:17:52.967 ' 00:17:52.967 20:15:47 alias_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:17:52.967 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:52.967 --rc genhtml_branch_coverage=1 00:17:52.967 --rc genhtml_function_coverage=1 00:17:52.967 --rc genhtml_legend=1 00:17:52.967 --rc geninfo_all_blocks=1 00:17:52.967 --rc geninfo_unexecuted_blocks=1 00:17:52.967 00:17:52.967 ' 00:17:52.967 20:15:47 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:17:52.967 20:15:47 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=58268 00:17:52.967 20:15:47 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 58268 00:17:52.967 20:15:47 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:52.967 20:15:47 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 58268 ']' 00:17:52.967 20:15:47 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:52.967 20:15:47 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:52.967 20:15:47 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:17:52.967 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:52.967 20:15:47 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:52.967 20:15:47 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:52.967 [2024-10-01 20:15:48.004078] Starting SPDK v25.01-pre git sha1 0c2005fb5 / DPDK 24.03.0 initialization... 00:17:52.967 [2024-10-01 20:15:48.004826] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58268 ] 00:17:52.967 [2024-10-01 20:15:48.152625] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:53.224 [2024-10-01 20:15:48.357764] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:54.157 20:15:49 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:54.157 20:15:49 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:17:54.157 20:15:49 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:17:54.414 20:15:49 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 58268 00:17:54.414 20:15:49 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 58268 ']' 00:17:54.414 20:15:49 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 58268 00:17:54.414 20:15:49 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:17:54.414 20:15:49 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:54.414 20:15:49 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58268 00:17:54.414 killing process with pid 58268 00:17:54.414 20:15:49 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:54.414 20:15:49 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:54.414 20:15:49 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58268' 00:17:54.414 20:15:49 alias_rpc -- common/autotest_common.sh@969 -- # kill 58268 00:17:54.414 20:15:49 alias_rpc -- common/autotest_common.sh@974 -- # wait 58268 00:17:56.313 ************************************ 00:17:56.313 END TEST alias_rpc 00:17:56.313 ************************************ 00:17:56.313 00:17:56.313 real 0m3.627s 00:17:56.313 user 0m3.660s 00:17:56.313 sys 0m0.474s 00:17:56.313 20:15:51 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:56.313 20:15:51 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:56.313 20:15:51 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:17:56.313 20:15:51 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:17:56.313 20:15:51 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:17:56.313 20:15:51 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:56.313 20:15:51 -- common/autotest_common.sh@10 -- # set +x 00:17:56.313 ************************************ 00:17:56.313 START TEST spdkcli_tcp 00:17:56.313 ************************************ 00:17:56.313 20:15:51 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:17:56.313 * Looking for test storage... 
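The alias_rpc run above boots a bare spdk_tgt, replays a JSON configuration through rpc.py load_config, and tears the target down with killprocess (plain kill, then wait on the PID). In outline; config.json is an illustrative stand-in, and reading the config on stdin plus the -i (include-aliases) flag are taken from the trace and from the test's stated purpose of exercising deprecated RPC aliases, so treat them as hedged assumptions:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt &
    spdk_tgt_pid=$!
    # ... wait for /var/tmp/spdk.sock as in waitforlisten ...
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i < config.json
    kill "$spdk_tgt_pid" && wait "$spdk_tgt_pid"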
00:17:56.313 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:17:56.313 20:15:51 spdkcli_tcp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:17:56.313 20:15:51 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lcov --version 00:17:56.571 20:15:51 spdkcli_tcp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:17:56.571 20:15:51 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:17:56.571 20:15:51 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:56.571 20:15:51 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:56.571 20:15:51 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:56.571 20:15:51 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:17:56.571 20:15:51 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:17:56.571 20:15:51 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:17:56.571 20:15:51 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:17:56.571 20:15:51 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:17:56.571 20:15:51 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:17:56.571 20:15:51 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:17:56.571 20:15:51 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:56.571 20:15:51 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:17:56.571 20:15:51 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:17:56.571 20:15:51 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:56.571 20:15:51 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:56.571 20:15:51 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:17:56.571 20:15:51 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:17:56.571 20:15:51 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:56.571 20:15:51 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:17:56.571 20:15:51 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:17:56.571 20:15:51 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:17:56.571 20:15:51 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:17:56.571 20:15:51 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:56.571 20:15:51 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:17:56.571 20:15:51 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:17:56.571 20:15:51 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:56.571 20:15:51 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:56.571 20:15:51 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:17:56.571 20:15:51 spdkcli_tcp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:56.571 20:15:51 spdkcli_tcp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:17:56.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:56.571 --rc genhtml_branch_coverage=1 00:17:56.571 --rc genhtml_function_coverage=1 00:17:56.571 --rc genhtml_legend=1 00:17:56.571 --rc geninfo_all_blocks=1 00:17:56.571 --rc geninfo_unexecuted_blocks=1 00:17:56.571 00:17:56.571 ' 00:17:56.571 20:15:51 spdkcli_tcp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:17:56.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:56.571 --rc genhtml_branch_coverage=1 00:17:56.571 --rc genhtml_function_coverage=1 00:17:56.571 --rc genhtml_legend=1 00:17:56.571 --rc geninfo_all_blocks=1 00:17:56.571 --rc geninfo_unexecuted_blocks=1 00:17:56.571 
00:17:56.571 ' 00:17:56.571 20:15:51 spdkcli_tcp -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:17:56.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:56.571 --rc genhtml_branch_coverage=1 00:17:56.571 --rc genhtml_function_coverage=1 00:17:56.571 --rc genhtml_legend=1 00:17:56.571 --rc geninfo_all_blocks=1 00:17:56.571 --rc geninfo_unexecuted_blocks=1 00:17:56.571 00:17:56.571 ' 00:17:56.571 20:15:51 spdkcli_tcp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:17:56.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:56.571 --rc genhtml_branch_coverage=1 00:17:56.571 --rc genhtml_function_coverage=1 00:17:56.571 --rc genhtml_legend=1 00:17:56.571 --rc geninfo_all_blocks=1 00:17:56.571 --rc geninfo_unexecuted_blocks=1 00:17:56.571 00:17:56.571 ' 00:17:56.571 20:15:51 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:17:56.572 20:15:51 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:17:56.572 20:15:51 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:17:56.572 20:15:51 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:17:56.572 20:15:51 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:17:56.572 20:15:51 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:56.572 20:15:51 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:17:56.572 20:15:51 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:56.572 20:15:51 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:56.572 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:56.572 20:15:51 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=58369 00:17:56.572 20:15:51 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 58369 00:17:56.572 20:15:51 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 58369 ']' 00:17:56.572 20:15:51 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:56.572 20:15:51 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:56.572 20:15:51 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:56.572 20:15:51 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:56.572 20:15:51 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:56.572 20:15:51 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:17:56.572 [2024-10-01 20:15:51.666585] Starting SPDK v25.01-pre git sha1 0c2005fb5 / DPDK 24.03.0 initialization... 
00:17:56.572 [2024-10-01 20:15:51.666875] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58369 ] 00:17:56.830 [2024-10-01 20:15:51.817802] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:56.830 [2024-10-01 20:15:52.033323] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:17:56.830 [2024-10-01 20:15:52.033431] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:57.763 20:15:52 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:57.763 20:15:52 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:17:57.763 20:15:52 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:17:57.763 20:15:52 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=58392 00:17:57.763 20:15:52 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:17:58.021 [ 00:17:58.021 "bdev_malloc_delete", 00:17:58.021 "bdev_malloc_create", 00:17:58.021 "bdev_null_resize", 00:17:58.021 "bdev_null_delete", 00:17:58.021 "bdev_null_create", 00:17:58.021 "bdev_nvme_cuse_unregister", 00:17:58.021 "bdev_nvme_cuse_register", 00:17:58.021 "bdev_opal_new_user", 00:17:58.021 "bdev_opal_set_lock_state", 00:17:58.021 "bdev_opal_delete", 00:17:58.021 "bdev_opal_get_info", 00:17:58.021 "bdev_opal_create", 00:17:58.021 "bdev_nvme_opal_revert", 00:17:58.021 "bdev_nvme_opal_init", 00:17:58.021 "bdev_nvme_send_cmd", 00:17:58.021 "bdev_nvme_set_keys", 00:17:58.021 "bdev_nvme_get_path_iostat", 00:17:58.021 "bdev_nvme_get_mdns_discovery_info", 00:17:58.021 "bdev_nvme_stop_mdns_discovery", 00:17:58.021 "bdev_nvme_start_mdns_discovery", 00:17:58.021 "bdev_nvme_set_multipath_policy", 00:17:58.021 "bdev_nvme_set_preferred_path", 00:17:58.021 "bdev_nvme_get_io_paths", 00:17:58.021 "bdev_nvme_remove_error_injection", 00:17:58.021 "bdev_nvme_add_error_injection", 00:17:58.021 "bdev_nvme_get_discovery_info", 00:17:58.021 "bdev_nvme_stop_discovery", 00:17:58.021 "bdev_nvme_start_discovery", 00:17:58.021 "bdev_nvme_get_controller_health_info", 00:17:58.021 "bdev_nvme_disable_controller", 00:17:58.021 "bdev_nvme_enable_controller", 00:17:58.021 "bdev_nvme_reset_controller", 00:17:58.021 "bdev_nvme_get_transport_statistics", 00:17:58.021 "bdev_nvme_apply_firmware", 00:17:58.021 "bdev_nvme_detach_controller", 00:17:58.021 "bdev_nvme_get_controllers", 00:17:58.021 "bdev_nvme_attach_controller", 00:17:58.021 "bdev_nvme_set_hotplug", 00:17:58.021 "bdev_nvme_set_options", 00:17:58.021 "bdev_passthru_delete", 00:17:58.021 "bdev_passthru_create", 00:17:58.021 "bdev_lvol_set_parent_bdev", 00:17:58.021 "bdev_lvol_set_parent", 00:17:58.021 "bdev_lvol_check_shallow_copy", 00:17:58.021 "bdev_lvol_start_shallow_copy", 00:17:58.021 "bdev_lvol_grow_lvstore", 00:17:58.021 "bdev_lvol_get_lvols", 00:17:58.021 "bdev_lvol_get_lvstores", 00:17:58.021 "bdev_lvol_delete", 00:17:58.021 "bdev_lvol_set_read_only", 00:17:58.021 "bdev_lvol_resize", 00:17:58.021 "bdev_lvol_decouple_parent", 00:17:58.021 "bdev_lvol_inflate", 00:17:58.021 "bdev_lvol_rename", 00:17:58.021 "bdev_lvol_clone_bdev", 00:17:58.021 "bdev_lvol_clone", 00:17:58.021 "bdev_lvol_snapshot", 00:17:58.021 "bdev_lvol_create", 00:17:58.021 "bdev_lvol_delete_lvstore", 00:17:58.021 "bdev_lvol_rename_lvstore", 00:17:58.021 
"bdev_lvol_create_lvstore", 00:17:58.021 "bdev_raid_set_options", 00:17:58.021 "bdev_raid_remove_base_bdev", 00:17:58.021 "bdev_raid_add_base_bdev", 00:17:58.021 "bdev_raid_delete", 00:17:58.021 "bdev_raid_create", 00:17:58.021 "bdev_raid_get_bdevs", 00:17:58.021 "bdev_error_inject_error", 00:17:58.021 "bdev_error_delete", 00:17:58.021 "bdev_error_create", 00:17:58.021 "bdev_split_delete", 00:17:58.021 "bdev_split_create", 00:17:58.021 "bdev_delay_delete", 00:17:58.021 "bdev_delay_create", 00:17:58.021 "bdev_delay_update_latency", 00:17:58.021 "bdev_zone_block_delete", 00:17:58.021 "bdev_zone_block_create", 00:17:58.021 "blobfs_create", 00:17:58.021 "blobfs_detect", 00:17:58.021 "blobfs_set_cache_size", 00:17:58.021 "bdev_xnvme_delete", 00:17:58.021 "bdev_xnvme_create", 00:17:58.021 "bdev_aio_delete", 00:17:58.021 "bdev_aio_rescan", 00:17:58.021 "bdev_aio_create", 00:17:58.021 "bdev_ftl_set_property", 00:17:58.021 "bdev_ftl_get_properties", 00:17:58.021 "bdev_ftl_get_stats", 00:17:58.021 "bdev_ftl_unmap", 00:17:58.021 "bdev_ftl_unload", 00:17:58.021 "bdev_ftl_delete", 00:17:58.021 "bdev_ftl_load", 00:17:58.021 "bdev_ftl_create", 00:17:58.021 "bdev_virtio_attach_controller", 00:17:58.021 "bdev_virtio_scsi_get_devices", 00:17:58.021 "bdev_virtio_detach_controller", 00:17:58.021 "bdev_virtio_blk_set_hotplug", 00:17:58.021 "bdev_iscsi_delete", 00:17:58.021 "bdev_iscsi_create", 00:17:58.021 "bdev_iscsi_set_options", 00:17:58.021 "accel_error_inject_error", 00:17:58.021 "ioat_scan_accel_module", 00:17:58.021 "dsa_scan_accel_module", 00:17:58.021 "iaa_scan_accel_module", 00:17:58.021 "keyring_file_remove_key", 00:17:58.021 "keyring_file_add_key", 00:17:58.021 "keyring_linux_set_options", 00:17:58.021 "fsdev_aio_delete", 00:17:58.021 "fsdev_aio_create", 00:17:58.021 "iscsi_get_histogram", 00:17:58.021 "iscsi_enable_histogram", 00:17:58.021 "iscsi_set_options", 00:17:58.021 "iscsi_get_auth_groups", 00:17:58.021 "iscsi_auth_group_remove_secret", 00:17:58.021 "iscsi_auth_group_add_secret", 00:17:58.021 "iscsi_delete_auth_group", 00:17:58.021 "iscsi_create_auth_group", 00:17:58.021 "iscsi_set_discovery_auth", 00:17:58.021 "iscsi_get_options", 00:17:58.021 "iscsi_target_node_request_logout", 00:17:58.021 "iscsi_target_node_set_redirect", 00:17:58.021 "iscsi_target_node_set_auth", 00:17:58.021 "iscsi_target_node_add_lun", 00:17:58.021 "iscsi_get_stats", 00:17:58.021 "iscsi_get_connections", 00:17:58.021 "iscsi_portal_group_set_auth", 00:17:58.021 "iscsi_start_portal_group", 00:17:58.021 "iscsi_delete_portal_group", 00:17:58.021 "iscsi_create_portal_group", 00:17:58.021 "iscsi_get_portal_groups", 00:17:58.021 "iscsi_delete_target_node", 00:17:58.021 "iscsi_target_node_remove_pg_ig_maps", 00:17:58.021 "iscsi_target_node_add_pg_ig_maps", 00:17:58.021 "iscsi_create_target_node", 00:17:58.022 "iscsi_get_target_nodes", 00:17:58.022 "iscsi_delete_initiator_group", 00:17:58.022 "iscsi_initiator_group_remove_initiators", 00:17:58.022 "iscsi_initiator_group_add_initiators", 00:17:58.022 "iscsi_create_initiator_group", 00:17:58.022 "iscsi_get_initiator_groups", 00:17:58.022 "nvmf_set_crdt", 00:17:58.022 "nvmf_set_config", 00:17:58.022 "nvmf_set_max_subsystems", 00:17:58.022 "nvmf_stop_mdns_prr", 00:17:58.022 "nvmf_publish_mdns_prr", 00:17:58.022 "nvmf_subsystem_get_listeners", 00:17:58.022 "nvmf_subsystem_get_qpairs", 00:17:58.022 "nvmf_subsystem_get_controllers", 00:17:58.022 "nvmf_get_stats", 00:17:58.022 "nvmf_get_transports", 00:17:58.022 "nvmf_create_transport", 00:17:58.022 "nvmf_get_targets", 00:17:58.022 
"nvmf_delete_target", 00:17:58.022 "nvmf_create_target", 00:17:58.022 "nvmf_subsystem_allow_any_host", 00:17:58.022 "nvmf_subsystem_set_keys", 00:17:58.022 "nvmf_subsystem_remove_host", 00:17:58.022 "nvmf_subsystem_add_host", 00:17:58.022 "nvmf_ns_remove_host", 00:17:58.022 "nvmf_ns_add_host", 00:17:58.022 "nvmf_subsystem_remove_ns", 00:17:58.022 "nvmf_subsystem_set_ns_ana_group", 00:17:58.022 "nvmf_subsystem_add_ns", 00:17:58.022 "nvmf_subsystem_listener_set_ana_state", 00:17:58.022 "nvmf_discovery_get_referrals", 00:17:58.022 "nvmf_discovery_remove_referral", 00:17:58.022 "nvmf_discovery_add_referral", 00:17:58.022 "nvmf_subsystem_remove_listener", 00:17:58.022 "nvmf_subsystem_add_listener", 00:17:58.022 "nvmf_delete_subsystem", 00:17:58.022 "nvmf_create_subsystem", 00:17:58.022 "nvmf_get_subsystems", 00:17:58.022 "env_dpdk_get_mem_stats", 00:17:58.022 "nbd_get_disks", 00:17:58.022 "nbd_stop_disk", 00:17:58.022 "nbd_start_disk", 00:17:58.022 "ublk_recover_disk", 00:17:58.022 "ublk_get_disks", 00:17:58.022 "ublk_stop_disk", 00:17:58.022 "ublk_start_disk", 00:17:58.022 "ublk_destroy_target", 00:17:58.022 "ublk_create_target", 00:17:58.022 "virtio_blk_create_transport", 00:17:58.022 "virtio_blk_get_transports", 00:17:58.022 "vhost_controller_set_coalescing", 00:17:58.022 "vhost_get_controllers", 00:17:58.022 "vhost_delete_controller", 00:17:58.022 "vhost_create_blk_controller", 00:17:58.022 "vhost_scsi_controller_remove_target", 00:17:58.022 "vhost_scsi_controller_add_target", 00:17:58.022 "vhost_start_scsi_controller", 00:17:58.022 "vhost_create_scsi_controller", 00:17:58.022 "thread_set_cpumask", 00:17:58.022 "scheduler_set_options", 00:17:58.022 "framework_get_governor", 00:17:58.022 "framework_get_scheduler", 00:17:58.022 "framework_set_scheduler", 00:17:58.022 "framework_get_reactors", 00:17:58.022 "thread_get_io_channels", 00:17:58.022 "thread_get_pollers", 00:17:58.022 "thread_get_stats", 00:17:58.022 "framework_monitor_context_switch", 00:17:58.022 "spdk_kill_instance", 00:17:58.022 "log_enable_timestamps", 00:17:58.022 "log_get_flags", 00:17:58.022 "log_clear_flag", 00:17:58.022 "log_set_flag", 00:17:58.022 "log_get_level", 00:17:58.022 "log_set_level", 00:17:58.022 "log_get_print_level", 00:17:58.022 "log_set_print_level", 00:17:58.022 "framework_enable_cpumask_locks", 00:17:58.022 "framework_disable_cpumask_locks", 00:17:58.022 "framework_wait_init", 00:17:58.022 "framework_start_init", 00:17:58.022 "scsi_get_devices", 00:17:58.022 "bdev_get_histogram", 00:17:58.022 "bdev_enable_histogram", 00:17:58.022 "bdev_set_qos_limit", 00:17:58.022 "bdev_set_qd_sampling_period", 00:17:58.022 "bdev_get_bdevs", 00:17:58.022 "bdev_reset_iostat", 00:17:58.022 "bdev_get_iostat", 00:17:58.022 "bdev_examine", 00:17:58.022 "bdev_wait_for_examine", 00:17:58.022 "bdev_set_options", 00:17:58.022 "accel_get_stats", 00:17:58.022 "accel_set_options", 00:17:58.022 "accel_set_driver", 00:17:58.022 "accel_crypto_key_destroy", 00:17:58.022 "accel_crypto_keys_get", 00:17:58.022 "accel_crypto_key_create", 00:17:58.022 "accel_assign_opc", 00:17:58.022 "accel_get_module_info", 00:17:58.022 "accel_get_opc_assignments", 00:17:58.022 "vmd_rescan", 00:17:58.022 "vmd_remove_device", 00:17:58.022 "vmd_enable", 00:17:58.022 "sock_get_default_impl", 00:17:58.022 "sock_set_default_impl", 00:17:58.022 "sock_impl_set_options", 00:17:58.022 "sock_impl_get_options", 00:17:58.022 "iobuf_get_stats", 00:17:58.022 "iobuf_set_options", 00:17:58.022 "keyring_get_keys", 00:17:58.022 "framework_get_pci_devices", 00:17:58.022 
"framework_get_config", 00:17:58.022 "framework_get_subsystems", 00:17:58.022 "fsdev_set_opts", 00:17:58.022 "fsdev_get_opts", 00:17:58.022 "trace_get_info", 00:17:58.022 "trace_get_tpoint_group_mask", 00:17:58.022 "trace_disable_tpoint_group", 00:17:58.022 "trace_enable_tpoint_group", 00:17:58.022 "trace_clear_tpoint_mask", 00:17:58.022 "trace_set_tpoint_mask", 00:17:58.022 "notify_get_notifications", 00:17:58.022 "notify_get_types", 00:17:58.022 "spdk_get_version", 00:17:58.022 "rpc_get_methods" 00:17:58.022 ] 00:17:58.022 20:15:53 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:17:58.022 20:15:53 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:58.022 20:15:53 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:58.022 20:15:53 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:17:58.022 20:15:53 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 58369 00:17:58.022 20:15:53 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 58369 ']' 00:17:58.022 20:15:53 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 58369 00:17:58.022 20:15:53 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:17:58.022 20:15:53 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:58.022 20:15:53 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58369 00:17:58.022 20:15:53 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:58.022 20:15:53 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:58.022 20:15:53 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58369' 00:17:58.022 killing process with pid 58369 00:17:58.022 20:15:53 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 58369 00:17:58.022 20:15:53 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 58369 00:18:00.550 00:18:00.550 real 0m3.759s 00:18:00.550 user 0m6.633s 00:18:00.550 sys 0m0.503s 00:18:00.550 ************************************ 00:18:00.550 END TEST spdkcli_tcp 00:18:00.550 ************************************ 00:18:00.550 20:15:55 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:00.550 20:15:55 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:00.550 20:15:55 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:18:00.550 20:15:55 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:18:00.550 20:15:55 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:00.550 20:15:55 -- common/autotest_common.sh@10 -- # set +x 00:18:00.550 ************************************ 00:18:00.550 START TEST dpdk_mem_utility 00:18:00.550 ************************************ 00:18:00.550 20:15:55 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:18:00.550 * Looking for test storage... 
00:18:00.550 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:18:00.550 20:15:55 dpdk_mem_utility -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:18:00.550 20:15:55 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lcov --version 00:18:00.550 20:15:55 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:18:00.550 20:15:55 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:18:00.550 20:15:55 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:00.550 20:15:55 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:00.550 20:15:55 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:00.550 20:15:55 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:18:00.550 20:15:55 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:18:00.550 20:15:55 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:18:00.550 20:15:55 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:18:00.550 20:15:55 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:18:00.550 20:15:55 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:18:00.550 20:15:55 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:18:00.550 20:15:55 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:00.550 20:15:55 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:18:00.550 20:15:55 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:18:00.550 20:15:55 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:00.550 20:15:55 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:00.550 20:15:55 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:18:00.550 20:15:55 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:18:00.550 20:15:55 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:00.550 20:15:55 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:18:00.550 20:15:55 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:18:00.550 20:15:55 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:18:00.550 20:15:55 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:18:00.550 20:15:55 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:00.550 20:15:55 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:18:00.550 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
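The lt/cmp_versions trace interleaved here (and repeated before every test above) splits two dotted versions on IFS=.- and compares them field by field to decide whether the installed lcov predates version 2. A condensed sketch of that comparison, assuming purely numeric fields (the real scripts/common.sh also validates each field with its decimal helper):

    # Return 0 if dotted version $1 is strictly less than $2.
    version_lt() {
        local -a ver1 ver2
        IFS=.- read -ra ver1 <<< "$1"
        IFS=.- read -ra ver2 <<< "$2"
        for ((v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++)); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1   # versions are equal, so not less-than
    }
    version_lt 1.15 2 && echo "lcov 1.15 predates 2"   # matches the 'lt 1.15 2' call in the trace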
00:18:00.550 20:15:55 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:18:00.550 20:15:55 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:00.550 20:15:55 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:00.550 20:15:55 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:18:00.550 20:15:55 dpdk_mem_utility -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:00.550 20:15:55 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:18:00.550 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:00.550 --rc genhtml_branch_coverage=1 00:18:00.550 --rc genhtml_function_coverage=1 00:18:00.550 --rc genhtml_legend=1 00:18:00.550 --rc geninfo_all_blocks=1 00:18:00.550 --rc geninfo_unexecuted_blocks=1 00:18:00.550 00:18:00.550 ' 00:18:00.550 20:15:55 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:18:00.550 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:00.550 --rc genhtml_branch_coverage=1 00:18:00.550 --rc genhtml_function_coverage=1 00:18:00.550 --rc genhtml_legend=1 00:18:00.550 --rc geninfo_all_blocks=1 00:18:00.550 --rc geninfo_unexecuted_blocks=1 00:18:00.550 00:18:00.550 ' 00:18:00.550 20:15:55 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:18:00.550 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:00.550 --rc genhtml_branch_coverage=1 00:18:00.550 --rc genhtml_function_coverage=1 00:18:00.550 --rc genhtml_legend=1 00:18:00.550 --rc geninfo_all_blocks=1 00:18:00.550 --rc geninfo_unexecuted_blocks=1 00:18:00.550 00:18:00.550 ' 00:18:00.550 20:15:55 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:18:00.550 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:00.550 --rc genhtml_branch_coverage=1 00:18:00.550 --rc genhtml_function_coverage=1 00:18:00.550 --rc genhtml_legend=1 00:18:00.550 --rc geninfo_all_blocks=1 00:18:00.550 --rc geninfo_unexecuted_blocks=1 00:18:00.550 00:18:00.550 ' 00:18:00.550 20:15:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:18:00.550 20:15:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=58486 00:18:00.550 20:15:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 58486 00:18:00.550 20:15:55 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 58486 ']' 00:18:00.550 20:15:55 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:00.550 20:15:55 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:00.550 20:15:55 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:00.550 20:15:55 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:00.550 20:15:55 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:18:00.550 20:15:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:00.550 [2024-10-01 20:15:55.468398] Starting SPDK v25.01-pre git sha1 0c2005fb5 / DPDK 24.03.0 initialization... 
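Once the target below finishes starting, the dpdk_mem_utility test asks it to dump DPDK memory statistics over RPC and then post-processes the dump file with the helper script, first as an overview and then per heap. The commands are exactly those in the trace that follows; the inline output comment is copied from it:

    # Ask the running target to write its DPDK memory statistics.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats
    # -> { "filename": "/tmp/spdk_mem_dump.txt" }
    # Summarize the dump (heaps, mempools, memzones), then drill into heap 0.
    /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py
    /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0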
00:18:00.550 [2024-10-01 20:15:55.468866] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58486 ] 00:18:00.550 [2024-10-01 20:15:55.619068] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:00.809 [2024-10-01 20:15:55.820442] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:18:01.742 20:15:56 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:01.742 20:15:56 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:18:01.742 20:15:56 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:18:01.742 20:15:56 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:18:01.742 20:15:56 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.742 20:15:56 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:18:01.742 { 00:18:01.742 "filename": "/tmp/spdk_mem_dump.txt" 00:18:01.742 } 00:18:01.742 20:15:56 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.742 20:15:56 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:18:01.742 DPDK memory size 1106.000000 MiB in 1 heap(s) 00:18:01.742 1 heaps totaling size 1106.000000 MiB 00:18:01.742 size: 1106.000000 MiB heap id: 0 00:18:01.742 end heaps---------- 00:18:01.742 9 mempools totaling size 883.273621 MiB 00:18:01.742 size: 333.169250 MiB name: bdev_io_58486 00:18:01.743 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:18:01.743 size: 158.602051 MiB name: PDU_data_out_Pool 00:18:01.743 size: 51.011292 MiB name: evtpool_58486 00:18:01.743 size: 50.003479 MiB name: msgpool_58486 00:18:01.743 size: 36.509338 MiB name: fsdev_io_58486 00:18:01.743 size: 21.763794 MiB name: PDU_Pool 00:18:01.743 size: 19.513306 MiB name: SCSI_TASK_Pool 00:18:01.743 size: 0.026123 MiB name: Session_Pool 00:18:01.743 end mempools------- 00:18:01.743 6 memzones totaling size 4.142822 MiB 00:18:01.743 size: 1.000366 MiB name: RG_ring_0_58486 00:18:01.743 size: 1.000366 MiB name: RG_ring_1_58486 00:18:01.743 size: 1.000366 MiB name: RG_ring_4_58486 00:18:01.743 size: 1.000366 MiB name: RG_ring_5_58486 00:18:01.743 size: 0.125366 MiB name: RG_ring_2_58486 00:18:01.743 size: 0.015991 MiB name: RG_ring_3_58486 00:18:01.743 end memzones------- 00:18:01.743 20:15:56 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:18:01.743 heap id: 0 total size: 1106.000000 MiB number of busy elements: 387 number of free elements: 19 00:18:01.743 list of free elements. 
size: 19.272217 MiB 00:18:01.743 element at address: 0x200000400000 with size: 1.999451 MiB 00:18:01.743 element at address: 0x200000800000 with size: 1.996887 MiB 00:18:01.743 element at address: 0x200009600000 with size: 1.995972 MiB 00:18:01.743 element at address: 0x20000d800000 with size: 1.995972 MiB 00:18:01.743 element at address: 0x200007000000 with size: 1.991028 MiB 00:18:01.743 element at address: 0x20002af00040 with size: 0.999939 MiB 00:18:01.743 element at address: 0x20002b300040 with size: 0.999939 MiB 00:18:01.743 element at address: 0x20002b400000 with size: 0.999084 MiB 00:18:01.743 element at address: 0x200044000000 with size: 0.994324 MiB 00:18:01.743 element at address: 0x20002b700040 with size: 0.936401 MiB 00:18:01.743 element at address: 0x200000200000 with size: 0.829224 MiB 00:18:01.743 element at address: 0x20002ce00000 with size: 0.562683 MiB 00:18:01.743 element at address: 0x20002b000000 with size: 0.488708 MiB 00:18:01.743 element at address: 0x20002b800000 with size: 0.485413 MiB 00:18:01.743 element at address: 0x200003e00000 with size: 0.475769 MiB 00:18:01.743 element at address: 0x20002ac00000 with size: 0.457642 MiB 00:18:01.743 element at address: 0x20003a200000 with size: 0.390442 MiB 00:18:01.743 element at address: 0x200003a00000 with size: 0.350647 MiB 00:18:01.743 element at address: 0x200015e00000 with size: 0.322693 MiB 00:18:01.743 list of standard malloc elements. size: 199.305298 MiB 00:18:01.743 element at address: 0x20000d9fef80 with size: 132.000183 MiB 00:18:01.743 element at address: 0x2000097fef80 with size: 64.000183 MiB 00:18:01.743 element at address: 0x20002adfff80 with size: 1.000183 MiB 00:18:01.743 element at address: 0x20002b1fff80 with size: 1.000183 MiB 00:18:01.743 element at address: 0x20002b5fff80 with size: 1.000183 MiB 00:18:01.743 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:18:01.743 element at address: 0x20002b7eff40 with size: 0.062683 MiB 00:18:01.743 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:18:01.743 element at address: 0x20000d7ff040 with size: 0.000427 MiB 00:18:01.743 element at address: 0x20002b7efdc0 with size: 0.000366 MiB 00:18:01.743 element at address: 0x200015dff040 with size: 0.000305 MiB 00:18:01.743 element at address: 0x2000002d4480 with size: 0.000244 MiB 00:18:01.743 element at address: 0x2000002d4580 with size: 0.000244 MiB 00:18:01.743 element at address: 0x2000002d4680 with size: 0.000244 MiB 00:18:01.743 element at address: 0x2000002d4780 with size: 0.000244 MiB 00:18:01.743 element at address: 0x2000002d4880 with size: 0.000244 MiB 00:18:01.743 element at address: 0x2000002d4980 with size: 0.000244 MiB 00:18:01.743 element at address: 0x2000002d4a80 with size: 0.000244 MiB 00:18:01.743 element at address: 0x2000002d4b80 with size: 0.000244 MiB 00:18:01.743 element at address: 0x2000002d4c80 with size: 0.000244 MiB 00:18:01.743 element at address: 0x2000002d4d80 with size: 0.000244 MiB 00:18:01.743 element at address: 0x2000002d4e80 with size: 0.000244 MiB 00:18:01.743 element at address: 0x2000002d4f80 with size: 0.000244 MiB 00:18:01.743 element at address: 0x2000002d5080 with size: 0.000244 MiB 00:18:01.743 element at address: 0x2000002d5180 with size: 0.000244 MiB 00:18:01.743 element at address: 0x2000002d5280 with size: 0.000244 MiB 00:18:01.743 element at address: 0x2000002d5380 with size: 0.000244 MiB 00:18:01.743 element at address: 0x2000002d5480 with size: 0.000244 MiB 00:18:01.743 element at address: 0x2000002d5580 with size: 0.000244 MiB 
00:18:01.743 element at address: 0x2000002d5680 with size: 0.000244 MiB 00:18:01.743 element at address: 0x2000002d5780 with size: 0.000244 MiB 00:18:01.743 element at address: 0x2000002d5880 with size: 0.000244 MiB 00:18:01.743 element at address: 0x2000002d5980 with size: 0.000244 MiB 00:18:01.743 element at address: 0x2000002d5a80 with size: 0.000244 MiB 00:18:01.743 element at address: 0x2000002d5b80 with size: 0.000244 MiB 00:18:01.743 element at address: 0x2000002d5c80 with size: 0.000244 MiB 00:18:01.743 element at address: 0x2000002d5d80 with size: 0.000244 MiB 00:18:01.743 element at address: 0x2000002d5e80 with size: 0.000244 MiB 00:18:01.743 element at address: 0x2000002d6100 with size: 0.000244 MiB 00:18:01.743 element at address: 0x2000002d6200 with size: 0.000244 MiB 00:18:01.743 element at address: 0x2000002d6300 with size: 0.000244 MiB 00:18:01.743 element at address: 0x2000002d6400 with size: 0.000244 MiB 00:18:01.743 element at address: 0x2000002d6500 with size: 0.000244 MiB 00:18:01.743 element at address: 0x2000002d6600 with size: 0.000244 MiB 00:18:01.743 element at address: 0x2000002d6700 with size: 0.000244 MiB 00:18:01.743 element at address: 0x2000002d6800 with size: 0.000244 MiB 00:18:01.743 element at address: 0x2000002d6900 with size: 0.000244 MiB 00:18:01.743 element at address: 0x2000002d6a00 with size: 0.000244 MiB 00:18:01.743 element at address: 0x2000002d6b00 with size: 0.000244 MiB 00:18:01.743 element at address: 0x2000002d6c00 with size: 0.000244 MiB 00:18:01.743 element at address: 0x2000002d6d00 with size: 0.000244 MiB 00:18:01.743 element at address: 0x2000002d6e00 with size: 0.000244 MiB 00:18:01.743 element at address: 0x2000002d6f00 with size: 0.000244 MiB 00:18:01.743 element at address: 0x2000002d7000 with size: 0.000244 MiB 00:18:01.743 element at address: 0x2000002d7100 with size: 0.000244 MiB 00:18:01.743 element at address: 0x2000002d7200 with size: 0.000244 MiB 00:18:01.743 element at address: 0x2000002d7300 with size: 0.000244 MiB 00:18:01.743 element at address: 0x2000002d7400 with size: 0.000244 MiB 00:18:01.743 element at address: 0x2000002d7500 with size: 0.000244 MiB 00:18:01.743 element at address: 0x2000002d7600 with size: 0.000244 MiB 00:18:01.743 element at address: 0x2000002d7700 with size: 0.000244 MiB 00:18:01.743 element at address: 0x2000002d7800 with size: 0.000244 MiB 00:18:01.743 element at address: 0x2000002d7900 with size: 0.000244 MiB 00:18:01.743 element at address: 0x2000002d7a00 with size: 0.000244 MiB 00:18:01.743 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:18:01.743 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:18:01.743 element at address: 0x200003a7e0c0 with size: 0.000244 MiB 00:18:01.743 element at address: 0x200003a7e1c0 with size: 0.000244 MiB 00:18:01.743 element at address: 0x200003a7e2c0 with size: 0.000244 MiB 00:18:01.743 element at address: 0x200003a7e3c0 with size: 0.000244 MiB 00:18:01.743 element at address: 0x200003a7e4c0 with size: 0.000244 MiB 00:18:01.743 element at address: 0x200003a7e5c0 with size: 0.000244 MiB 00:18:01.743 element at address: 0x200003a7e6c0 with size: 0.000244 MiB 00:18:01.743 element at address: 0x200003a7e7c0 with size: 0.000244 MiB 00:18:01.743 element at address: 0x200003a7e8c0 with size: 0.000244 MiB 00:18:01.743 element at address: 0x200003a7e9c0 with size: 0.000244 MiB 00:18:01.743 element at address: 0x200003a7eac0 with size: 0.000244 MiB 00:18:01.743 element at address: 0x200003a7ebc0 with size: 0.000244 MiB 00:18:01.743 element at 
address: 0x200003a7ecc0 with size: 0.000244 MiB 00:18:01.743 element at address: 0x200003a7edc0 with size: 0.000244 MiB 00:18:01.743 element at address: 0x200003a7eec0 with size: 0.000244 MiB 00:18:01.743 element at address: 0x200003a7efc0 with size: 0.000244 MiB 00:18:01.743 element at address: 0x200003a7f0c0 with size: 0.000244 MiB 00:18:01.743 element at address: 0x200003a7f1c0 with size: 0.000244 MiB 00:18:01.743 element at address: 0x200003a7f2c0 with size: 0.000244 MiB 00:18:01.743 element at address: 0x200003a7f3c0 with size: 0.000244 MiB 00:18:01.743 element at address: 0x200003aff700 with size: 0.000244 MiB 00:18:01.743 element at address: 0x200003aff980 with size: 0.000244 MiB 00:18:01.743 element at address: 0x200003affa80 with size: 0.000244 MiB 00:18:01.743 element at address: 0x200003e79cc0 with size: 0.000244 MiB 00:18:01.743 element at address: 0x200003e79dc0 with size: 0.000244 MiB 00:18:01.743 element at address: 0x200003e79ec0 with size: 0.000244 MiB 00:18:01.743 element at address: 0x200003e79fc0 with size: 0.000244 MiB 00:18:01.743 element at address: 0x200003e7a0c0 with size: 0.000244 MiB 00:18:01.743 element at address: 0x200003e7a1c0 with size: 0.000244 MiB 00:18:01.743 element at address: 0x200003e7a2c0 with size: 0.000244 MiB 00:18:01.743 element at address: 0x200003e7a3c0 with size: 0.000244 MiB 00:18:01.743 element at address: 0x200003e7a4c0 with size: 0.000244 MiB 00:18:01.743 element at address: 0x200003e7a5c0 with size: 0.000244 MiB 00:18:01.743 element at address: 0x200003e7a6c0 with size: 0.000244 MiB 00:18:01.743 element at address: 0x200003e7a7c0 with size: 0.000244 MiB 00:18:01.744 element at address: 0x200003e7a8c0 with size: 0.000244 MiB 00:18:01.744 element at address: 0x200003e7a9c0 with size: 0.000244 MiB 00:18:01.744 element at address: 0x200003e7aac0 with size: 0.000244 MiB 00:18:01.744 element at address: 0x200003e7abc0 with size: 0.000244 MiB 00:18:01.744 element at address: 0x200003e7acc0 with size: 0.000244 MiB 00:18:01.744 element at address: 0x200003e7adc0 with size: 0.000244 MiB 00:18:01.744 element at address: 0x200003e7aec0 with size: 0.000244 MiB 00:18:01.744 element at address: 0x200003e7afc0 with size: 0.000244 MiB 00:18:01.744 element at address: 0x200003e7b0c0 with size: 0.000244 MiB 00:18:01.744 element at address: 0x200003e7b1c0 with size: 0.000244 MiB 00:18:01.744 element at address: 0x200003e7b2c0 with size: 0.000244 MiB 00:18:01.744 element at address: 0x200003e7b3c0 with size: 0.000244 MiB 00:18:01.744 element at address: 0x200003e7b4c0 with size: 0.000244 MiB 00:18:01.744 element at address: 0x200003e7b5c0 with size: 0.000244 MiB 00:18:01.744 element at address: 0x200003e7b6c0 with size: 0.000244 MiB 00:18:01.744 element at address: 0x200003e7b7c0 with size: 0.000244 MiB 00:18:01.744 element at address: 0x200003e7b8c0 with size: 0.000244 MiB 00:18:01.744 element at address: 0x200003e7b9c0 with size: 0.000244 MiB 00:18:01.744 element at address: 0x200003e7bac0 with size: 0.000244 MiB 00:18:01.744 element at address: 0x200003e7bbc0 with size: 0.000244 MiB 00:18:01.744 element at address: 0x200003e7bcc0 with size: 0.000244 MiB 00:18:01.744 element at address: 0x200003e7bdc0 with size: 0.000244 MiB 00:18:01.744 element at address: 0x200003e7bec0 with size: 0.000244 MiB 00:18:01.744 element at address: 0x200003e7bfc0 with size: 0.000244 MiB 00:18:01.744 element at address: 0x200003e7c0c0 with size: 0.000244 MiB 00:18:01.744 element at address: 0x200003e7c1c0 with size: 0.000244 MiB 00:18:01.744 element at address: 0x200003e7c2c0 
with size: 0.000244 MiB 00:18:01.744 element at address: 0x200003e7c3c0 with size: 0.000244 MiB 00:18:01.744 element at address: 0x200003e7c4c0 with size: 0.000244 MiB 00:18:01.744 element at address: 0x200003e7c5c0 with size: 0.000244 MiB 00:18:01.744 element at address: 0x200003e7c6c0 with size: 0.000244 MiB 00:18:01.744 element at address: 0x200003e7c7c0 with size: 0.000244 MiB 00:18:01.744 element at address: 0x200003e7c8c0 with size: 0.000244 MiB 00:18:01.744 element at address: 0x200003e7c9c0 with size: 0.000244 MiB 00:18:01.744 element at address: 0x200003e7cac0 with size: 0.000244 MiB 00:18:01.744 element at address: 0x200003e7cbc0 with size: 0.000244 MiB 00:18:01.744 element at address: 0x200003e7ccc0 with size: 0.000244 MiB 00:18:01.744 element at address: 0x200003e7cdc0 with size: 0.000244 MiB 00:18:01.744 element at address: 0x200003e7cec0 with size: 0.000244 MiB 00:18:01.744 element at address: 0x200003e7cfc0 with size: 0.000244 MiB 00:18:01.744 element at address: 0x200003e7d0c0 with size: 0.000244 MiB 00:18:01.744 element at address: 0x200003e7d1c0 with size: 0.000244 MiB 00:18:01.744 element at address: 0x200003e7d2c0 with size: 0.000244 MiB 00:18:01.744 element at address: 0x200003e7d3c0 with size: 0.000244 MiB 00:18:01.744 element at address: 0x200003e7d4c0 with size: 0.000244 MiB 00:18:01.744 element at address: 0x200003e7d5c0 with size: 0.000244 MiB 00:18:01.744 element at address: 0x200003e7d6c0 with size: 0.000244 MiB 00:18:01.744 element at address: 0x200003e7d7c0 with size: 0.000244 MiB 00:18:01.744 element at address: 0x200003e7d8c0 with size: 0.000244 MiB 00:18:01.744 element at address: 0x200003e7d9c0 with size: 0.000244 MiB 00:18:01.744 element at address: 0x200003e7dac0 with size: 0.000244 MiB 00:18:01.744 element at address: 0x200003e7dbc0 with size: 0.000244 MiB 00:18:01.744 element at address: 0x200003e7dcc0 with size: 0.000244 MiB 00:18:01.744 element at address: 0x200003e7ddc0 with size: 0.000244 MiB 00:18:01.744 element at address: 0x200003e7dec0 with size: 0.000244 MiB 00:18:01.744 element at address: 0x200003e7dfc0 with size: 0.000244 MiB 00:18:01.744 element at address: 0x200003e7e0c0 with size: 0.000244 MiB 00:18:01.744 element at address: 0x200003e7e1c0 with size: 0.000244 MiB 00:18:01.744 element at address: 0x200003e7e2c0 with size: 0.000244 MiB 00:18:01.744 element at address: 0x200003e7e3c0 with size: 0.000244 MiB 00:18:01.744 element at address: 0x200003e7e4c0 with size: 0.000244 MiB 00:18:01.744 element at address: 0x200003e7e5c0 with size: 0.000244 MiB 00:18:01.744 element at address: 0x200003e7e6c0 with size: 0.000244 MiB 00:18:01.744 element at address: 0x200003e7e7c0 with size: 0.000244 MiB 00:18:01.744 element at address: 0x200003e7e8c0 with size: 0.000244 MiB 00:18:01.744 element at address: 0x200003e7e9c0 with size: 0.000244 MiB 00:18:01.744 element at address: 0x200003e7eac0 with size: 0.000244 MiB 00:18:01.744 element at address: 0x200003e7ebc0 with size: 0.000244 MiB 00:18:01.744 element at address: 0x200003e7ecc0 with size: 0.000244 MiB 00:18:01.744 element at address: 0x200003eff000 with size: 0.000244 MiB 00:18:01.744 element at address: 0x20000d7ff200 with size: 0.000244 MiB 00:18:01.744 element at address: 0x20000d7ff300 with size: 0.000244 MiB 00:18:01.744 element at address: 0x20000d7ff400 with size: 0.000244 MiB 00:18:01.744 element at address: 0x20000d7ff500 with size: 0.000244 MiB 00:18:01.744 element at address: 0x20000d7ff600 with size: 0.000244 MiB 00:18:01.744 element at address: 0x20000d7ff700 with size: 0.000244 MiB 
00:18:01.744 element at address: 0x20000d7ff800 with size: 0.000244 MiB 00:18:01.744 element at address: 0x20000d7ff900 with size: 0.000244 MiB 00:18:01.744 element at address: 0x20000d7ffa00 with size: 0.000244 MiB 00:18:01.744 element at address: 0x20000d7ffb00 with size: 0.000244 MiB 00:18:01.744 element at address: 0x20000d7ffc00 with size: 0.000244 MiB 00:18:01.744 element at address: 0x20000d7ffd00 with size: 0.000244 MiB 00:18:01.744 element at address: 0x20000d7ffe00 with size: 0.000244 MiB 00:18:01.744 element at address: 0x20000d7fff00 with size: 0.000244 MiB 00:18:01.744 element at address: 0x200015dff180 with size: 0.000244 MiB 00:18:01.744 element at address: 0x200015dff280 with size: 0.000244 MiB 00:18:01.744 element at address: 0x200015dff380 with size: 0.000244 MiB 00:18:01.744 element at address: 0x200015dff480 with size: 0.000244 MiB 00:18:01.744 element at address: 0x200015dff580 with size: 0.000244 MiB 00:18:01.744 element at address: 0x200015dff680 with size: 0.000244 MiB 00:18:01.744 element at address: 0x200015dff780 with size: 0.000244 MiB 00:18:01.744 element at address: 0x200015dff880 with size: 0.000244 MiB 00:18:01.744 element at address: 0x200015dff980 with size: 0.000244 MiB 00:18:01.744 element at address: 0x200015dffa80 with size: 0.000244 MiB 00:18:01.744 element at address: 0x200015dffb80 with size: 0.000244 MiB 00:18:01.744 element at address: 0x200015dffc80 with size: 0.000244 MiB 00:18:01.744 element at address: 0x200015dfff00 with size: 0.000244 MiB 00:18:01.744 element at address: 0x200015e529c0 with size: 0.000244 MiB 00:18:01.744 element at address: 0x20002ac75280 with size: 0.000244 MiB 00:18:01.744 element at address: 0x20002ac75380 with size: 0.000244 MiB 00:18:01.744 element at address: 0x20002ac75480 with size: 0.000244 MiB 00:18:01.744 element at address: 0x20002ac75580 with size: 0.000244 MiB 00:18:01.744 element at address: 0x20002ac75680 with size: 0.000244 MiB 00:18:01.744 element at address: 0x20002ac75780 with size: 0.000244 MiB 00:18:01.744 element at address: 0x20002acfdd00 with size: 0.000244 MiB 00:18:01.744 element at address: 0x20002b07d1c0 with size: 0.000244 MiB 00:18:01.744 element at address: 0x20002b07d2c0 with size: 0.000244 MiB 00:18:01.744 element at address: 0x20002b07d3c0 with size: 0.000244 MiB 00:18:01.744 element at address: 0x20002b07d4c0 with size: 0.000244 MiB 00:18:01.744 element at address: 0x20002b07d5c0 with size: 0.000244 MiB 00:18:01.744 element at address: 0x20002b07d6c0 with size: 0.000244 MiB 00:18:01.744 element at address: 0x20002b07d7c0 with size: 0.000244 MiB 00:18:01.744 element at address: 0x20002b07d8c0 with size: 0.000244 MiB 00:18:01.744 element at address: 0x20002b07d9c0 with size: 0.000244 MiB 00:18:01.744 element at address: 0x20002b0fdd00 with size: 0.000244 MiB 00:18:01.744 element at address: 0x20002b4ffc40 with size: 0.000244 MiB 00:18:01.744 element at address: 0x20002b7efbc0 with size: 0.000244 MiB 00:18:01.744 element at address: 0x20002b7efcc0 with size: 0.000244 MiB 00:18:01.744 element at address: 0x20002b8bc680 with size: 0.000244 MiB 00:18:01.744 element at address: 0x20002ce900c0 with size: 0.000244 MiB 00:18:01.744 element at address: 0x20002ce901c0 with size: 0.000244 MiB 00:18:01.744 element at address: 0x20002ce902c0 with size: 0.000244 MiB 00:18:01.744 element at address: 0x20002ce903c0 with size: 0.000244 MiB 00:18:01.744 element at address: 0x20002ce904c0 with size: 0.000244 MiB 00:18:01.744 element at address: 0x20002ce905c0 with size: 0.000244 MiB 00:18:01.744 element at 
address: 0x20002ce906c0 with size: 0.000244 MiB 00:18:01.744 element at address: 0x20002ce907c0 with size: 0.000244 MiB 00:18:01.744 element at address: 0x20002ce908c0 with size: 0.000244 MiB 00:18:01.744 element at address: 0x20002ce909c0 with size: 0.000244 MiB 00:18:01.744 element at address: 0x20002ce90ac0 with size: 0.000244 MiB 00:18:01.744 element at address: 0x20002ce90bc0 with size: 0.000244 MiB 00:18:01.744 element at address: 0x20002ce90cc0 with size: 0.000244 MiB 00:18:01.744 element at address: 0x20002ce90dc0 with size: 0.000244 MiB 00:18:01.744 element at address: 0x20002ce90ec0 with size: 0.000244 MiB 00:18:01.744 element at address: 0x20002ce90fc0 with size: 0.000244 MiB 00:18:01.744 element at address: 0x20002ce910c0 with size: 0.000244 MiB 00:18:01.744 element at address: 0x20002ce911c0 with size: 0.000244 MiB 00:18:01.744 element at address: 0x20002ce912c0 with size: 0.000244 MiB 00:18:01.744 element at address: 0x20002ce913c0 with size: 0.000244 MiB 00:18:01.744 element at address: 0x20002ce914c0 with size: 0.000244 MiB 00:18:01.745 element at address: 0x20002ce915c0 with size: 0.000244 MiB 00:18:01.745 element at address: 0x20002ce916c0 with size: 0.000244 MiB 00:18:01.745 element at address: 0x20002ce917c0 with size: 0.000244 MiB 00:18:01.745 element at address: 0x20002ce918c0 with size: 0.000244 MiB 00:18:01.745 element at address: 0x20002ce919c0 with size: 0.000244 MiB 00:18:01.745 element at address: 0x20002ce91ac0 with size: 0.000244 MiB 00:18:01.745 element at address: 0x20002ce91bc0 with size: 0.000244 MiB 00:18:01.745 element at address: 0x20002ce91cc0 with size: 0.000244 MiB 00:18:01.745 element at address: 0x20002ce91dc0 with size: 0.000244 MiB 00:18:01.745 element at address: 0x20002ce91ec0 with size: 0.000244 MiB 00:18:01.745 element at address: 0x20002ce91fc0 with size: 0.000244 MiB 00:18:01.745 element at address: 0x20002ce920c0 with size: 0.000244 MiB 00:18:01.745 element at address: 0x20002ce921c0 with size: 0.000244 MiB 00:18:01.745 element at address: 0x20002ce922c0 with size: 0.000244 MiB 00:18:01.745 element at address: 0x20002ce923c0 with size: 0.000244 MiB 00:18:01.745 element at address: 0x20002ce924c0 with size: 0.000244 MiB 00:18:01.745 element at address: 0x20002ce925c0 with size: 0.000244 MiB 00:18:01.745 element at address: 0x20002ce926c0 with size: 0.000244 MiB 00:18:01.745 element at address: 0x20002ce927c0 with size: 0.000244 MiB 00:18:01.745 element at address: 0x20002ce928c0 with size: 0.000244 MiB 00:18:01.745 element at address: 0x20002ce929c0 with size: 0.000244 MiB 00:18:01.745 element at address: 0x20002ce92ac0 with size: 0.000244 MiB 00:18:01.745 element at address: 0x20002ce92bc0 with size: 0.000244 MiB 00:18:01.745 element at address: 0x20002ce92cc0 with size: 0.000244 MiB 00:18:01.745 element at address: 0x20002ce92dc0 with size: 0.000244 MiB 00:18:01.745 element at address: 0x20002ce92ec0 with size: 0.000244 MiB 00:18:01.745 element at address: 0x20002ce92fc0 with size: 0.000244 MiB 00:18:01.745 element at address: 0x20002ce930c0 with size: 0.000244 MiB 00:18:01.745 element at address: 0x20002ce931c0 with size: 0.000244 MiB 00:18:01.745 element at address: 0x20002ce932c0 with size: 0.000244 MiB 00:18:01.745 element at address: 0x20002ce933c0 with size: 0.000244 MiB 00:18:01.745 element at address: 0x20002ce934c0 with size: 0.000244 MiB 00:18:01.745 element at address: 0x20002ce935c0 with size: 0.000244 MiB 00:18:01.745 element at address: 0x20002ce936c0 with size: 0.000244 MiB 00:18:01.745 element at address: 0x20002ce937c0 
with size: 0.000244 MiB 00:18:01.745 element at address: 0x20002ce938c0 with size: 0.000244 MiB 00:18:01.745 element at address: 0x20002ce939c0 with size: 0.000244 MiB 00:18:01.745 element at address: 0x20002ce93ac0 with size: 0.000244 MiB 00:18:01.745 element at address: 0x20002ce93bc0 with size: 0.000244 MiB 00:18:01.745 element at address: 0x20002ce93cc0 with size: 0.000244 MiB 00:18:01.745 element at address: 0x20002ce93dc0 with size: 0.000244 MiB 00:18:01.745 element at address: 0x20002ce93ec0 with size: 0.000244 MiB 00:18:01.745 element at address: 0x20002ce93fc0 with size: 0.000244 MiB 00:18:01.745 element at address: 0x20002ce940c0 with size: 0.000244 MiB 00:18:01.745 element at address: 0x20002ce941c0 with size: 0.000244 MiB 00:18:01.745 element at address: 0x20002ce942c0 with size: 0.000244 MiB 00:18:01.745 element at address: 0x20002ce943c0 with size: 0.000244 MiB 00:18:01.745 element at address: 0x20002ce944c0 with size: 0.000244 MiB 00:18:01.745 element at address: 0x20002ce945c0 with size: 0.000244 MiB 00:18:01.745 element at address: 0x20002ce946c0 with size: 0.000244 MiB 00:18:01.745 element at address: 0x20002ce947c0 with size: 0.000244 MiB 00:18:01.745 element at address: 0x20002ce948c0 with size: 0.000244 MiB 00:18:01.745 element at address: 0x20002ce949c0 with size: 0.000244 MiB 00:18:01.745 element at address: 0x20002ce94ac0 with size: 0.000244 MiB 00:18:01.745 element at address: 0x20002ce94bc0 with size: 0.000244 MiB 00:18:01.745 element at address: 0x20002ce94cc0 with size: 0.000244 MiB 00:18:01.745 element at address: 0x20002ce94dc0 with size: 0.000244 MiB 00:18:01.745 element at address: 0x20002ce94ec0 with size: 0.000244 MiB 00:18:01.745 element at address: 0x20002ce94fc0 with size: 0.000244 MiB 00:18:01.745 element at address: 0x20002ce950c0 with size: 0.000244 MiB 00:18:01.745 element at address: 0x20002ce951c0 with size: 0.000244 MiB 00:18:01.745 element at address: 0x20002ce952c0 with size: 0.000244 MiB 00:18:01.745 element at address: 0x20002ce953c0 with size: 0.000244 MiB 00:18:01.745 element at address: 0x20003a263f40 with size: 0.000244 MiB 00:18:01.745 element at address: 0x20003a264040 with size: 0.000244 MiB 00:18:01.745 element at address: 0x20003a26ad00 with size: 0.000244 MiB 00:18:01.745 element at address: 0x20003a26af80 with size: 0.000244 MiB 00:18:01.745 element at address: 0x20003a26b080 with size: 0.000244 MiB 00:18:01.745 element at address: 0x20003a26b180 with size: 0.000244 MiB 00:18:01.745 element at address: 0x20003a26b280 with size: 0.000244 MiB 00:18:01.745 element at address: 0x20003a26b380 with size: 0.000244 MiB 00:18:01.745 element at address: 0x20003a26b480 with size: 0.000244 MiB 00:18:01.745 element at address: 0x20003a26b580 with size: 0.000244 MiB 00:18:01.745 element at address: 0x20003a26b680 with size: 0.000244 MiB 00:18:01.745 element at address: 0x20003a26b780 with size: 0.000244 MiB 00:18:01.745 element at address: 0x20003a26b880 with size: 0.000244 MiB 00:18:01.745 element at address: 0x20003a26b980 with size: 0.000244 MiB 00:18:01.745 element at address: 0x20003a26ba80 with size: 0.000244 MiB 00:18:01.745 element at address: 0x20003a26bb80 with size: 0.000244 MiB 00:18:01.745 element at address: 0x20003a26bc80 with size: 0.000244 MiB 00:18:01.745 element at address: 0x20003a26bd80 with size: 0.000244 MiB 00:18:01.745 element at address: 0x20003a26be80 with size: 0.000244 MiB 00:18:01.745 element at address: 0x20003a26bf80 with size: 0.000244 MiB 00:18:01.745 element at address: 0x20003a26c080 with size: 0.000244 MiB 
00:18:01.745 element at address: 0x20003a26c180 with size: 0.000244 MiB 00:18:01.745 element at address: 0x20003a26c280 with size: 0.000244 MiB 00:18:01.745 element at address: 0x20003a26c380 with size: 0.000244 MiB 00:18:01.745 element at address: 0x20003a26c480 with size: 0.000244 MiB 00:18:01.745 element at address: 0x20003a26c580 with size: 0.000244 MiB 00:18:01.745 element at address: 0x20003a26c680 with size: 0.000244 MiB 00:18:01.745 element at address: 0x20003a26c780 with size: 0.000244 MiB 00:18:01.745 element at address: 0x20003a26c880 with size: 0.000244 MiB 00:18:01.745 element at address: 0x20003a26c980 with size: 0.000244 MiB 00:18:01.745 element at address: 0x20003a26ca80 with size: 0.000244 MiB 00:18:01.745 element at address: 0x20003a26cb80 with size: 0.000244 MiB 00:18:01.745 element at address: 0x20003a26cc80 with size: 0.000244 MiB 00:18:01.745 element at address: 0x20003a26cd80 with size: 0.000244 MiB 00:18:01.745 element at address: 0x20003a26ce80 with size: 0.000244 MiB 00:18:01.745 element at address: 0x20003a26cf80 with size: 0.000244 MiB 00:18:01.745 element at address: 0x20003a26d080 with size: 0.000244 MiB 00:18:01.745 element at address: 0x20003a26d180 with size: 0.000244 MiB 00:18:01.745 element at address: 0x20003a26d280 with size: 0.000244 MiB 00:18:01.745 element at address: 0x20003a26d380 with size: 0.000244 MiB 00:18:01.745 element at address: 0x20003a26d480 with size: 0.000244 MiB 00:18:01.745 element at address: 0x20003a26d580 with size: 0.000244 MiB 00:18:01.745 element at address: 0x20003a26d680 with size: 0.000244 MiB 00:18:01.745 element at address: 0x20003a26d780 with size: 0.000244 MiB 00:18:01.745 element at address: 0x20003a26d880 with size: 0.000244 MiB 00:18:01.745 element at address: 0x20003a26d980 with size: 0.000244 MiB 00:18:01.745 element at address: 0x20003a26da80 with size: 0.000244 MiB 00:18:01.745 element at address: 0x20003a26db80 with size: 0.000244 MiB 00:18:01.745 element at address: 0x20003a26dc80 with size: 0.000244 MiB 00:18:01.745 element at address: 0x20003a26dd80 with size: 0.000244 MiB 00:18:01.745 element at address: 0x20003a26de80 with size: 0.000244 MiB 00:18:01.745 element at address: 0x20003a26df80 with size: 0.000244 MiB 00:18:01.745 element at address: 0x20003a26e080 with size: 0.000244 MiB 00:18:01.745 element at address: 0x20003a26e180 with size: 0.000244 MiB 00:18:01.745 element at address: 0x20003a26e280 with size: 0.000244 MiB 00:18:01.745 element at address: 0x20003a26e380 with size: 0.000244 MiB 00:18:01.745 element at address: 0x20003a26e480 with size: 0.000244 MiB 00:18:01.745 element at address: 0x20003a26e580 with size: 0.000244 MiB 00:18:01.745 element at address: 0x20003a26e680 with size: 0.000244 MiB 00:18:01.745 element at address: 0x20003a26e780 with size: 0.000244 MiB 00:18:01.745 element at address: 0x20003a26e880 with size: 0.000244 MiB 00:18:01.745 element at address: 0x20003a26e980 with size: 0.000244 MiB 00:18:01.745 element at address: 0x20003a26ea80 with size: 0.000244 MiB 00:18:01.745 element at address: 0x20003a26eb80 with size: 0.000244 MiB 00:18:01.745 element at address: 0x20003a26ec80 with size: 0.000244 MiB 00:18:01.745 element at address: 0x20003a26ed80 with size: 0.000244 MiB 00:18:01.745 element at address: 0x20003a26ee80 with size: 0.000244 MiB 00:18:01.745 element at address: 0x20003a26ef80 with size: 0.000244 MiB 00:18:01.745 element at address: 0x20003a26f080 with size: 0.000244 MiB 00:18:01.745 element at address: 0x20003a26f180 with size: 0.000244 MiB 00:18:01.745 element at 
address: 0x20003a26f280 with size: 0.000244 MiB 00:18:01.745 element at address: 0x20003a26f380 with size: 0.000244 MiB 00:18:01.745 element at address: 0x20003a26f480 with size: 0.000244 MiB 00:18:01.745 element at address: 0x20003a26f580 with size: 0.000244 MiB 00:18:01.745 element at address: 0x20003a26f680 with size: 0.000244 MiB 00:18:01.745 element at address: 0x20003a26f780 with size: 0.000244 MiB 00:18:01.745 element at address: 0x20003a26f880 with size: 0.000244 MiB 00:18:01.745 element at address: 0x20003a26f980 with size: 0.000244 MiB 00:18:01.745 element at address: 0x20003a26fa80 with size: 0.000244 MiB 00:18:01.745 element at address: 0x20003a26fb80 with size: 0.000244 MiB 00:18:01.745 element at address: 0x20003a26fc80 with size: 0.000244 MiB 00:18:01.745 element at address: 0x20003a26fd80 with size: 0.000244 MiB 00:18:01.745 element at address: 0x20003a26fe80 with size: 0.000244 MiB 00:18:01.745 list of memzone associated elements. size: 887.422485 MiB 00:18:01.745 element at address: 0x200015f54c40 with size: 332.668884 MiB 00:18:01.746 associated memzone info: size: 332.668701 MiB name: MP_bdev_io_58486_0 00:18:01.746 element at address: 0x20002ce954c0 with size: 211.416809 MiB 00:18:01.746 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:18:01.746 element at address: 0x20003a26ff80 with size: 157.562622 MiB 00:18:01.746 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:18:01.746 element at address: 0x2000009ff340 with size: 48.003113 MiB 00:18:01.746 associated memzone info: size: 48.002930 MiB name: MP_evtpool_58486_0 00:18:01.746 element at address: 0x200003fff340 with size: 48.003113 MiB 00:18:01.746 associated memzone info: size: 48.002930 MiB name: MP_msgpool_58486_0 00:18:01.746 element at address: 0x2000071fdb40 with size: 36.008972 MiB 00:18:01.746 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_58486_0 00:18:01.746 element at address: 0x20002b9be900 with size: 20.255615 MiB 00:18:01.746 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:18:01.746 element at address: 0x2000441feb00 with size: 18.005127 MiB 00:18:01.746 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:18:01.746 element at address: 0x2000005ffdc0 with size: 2.000549 MiB 00:18:01.746 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_58486 00:18:01.746 element at address: 0x200003bffdc0 with size: 2.000549 MiB 00:18:01.746 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_58486 00:18:01.746 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:18:01.746 associated memzone info: size: 1.007996 MiB name: MP_evtpool_58486 00:18:01.746 element at address: 0x20002b0fde00 with size: 1.008179 MiB 00:18:01.746 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:18:01.746 element at address: 0x20002b8bc780 with size: 1.008179 MiB 00:18:01.746 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:18:01.746 element at address: 0x20002acfde00 with size: 1.008179 MiB 00:18:01.746 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:18:01.746 element at address: 0x200015e52ac0 with size: 1.008179 MiB 00:18:01.746 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:18:01.746 element at address: 0x200003eff100 with size: 1.000549 MiB 00:18:01.746 associated memzone info: size: 1.000366 MiB name: RG_ring_0_58486 00:18:01.746 element at address: 0x200003affb80 with size: 1.000549 MiB 
00:18:01.746 associated memzone info: size: 1.000366 MiB name: RG_ring_1_58486 00:18:01.746 element at address: 0x20002b4ffd40 with size: 1.000549 MiB 00:18:01.746 associated memzone info: size: 1.000366 MiB name: RG_ring_4_58486 00:18:01.746 element at address: 0x2000440fe8c0 with size: 1.000549 MiB 00:18:01.746 associated memzone info: size: 1.000366 MiB name: RG_ring_5_58486 00:18:01.746 element at address: 0x200003a7f4c0 with size: 0.500549 MiB 00:18:01.746 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_58486 00:18:01.746 element at address: 0x200003e7edc0 with size: 0.500549 MiB 00:18:01.746 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_58486 00:18:01.746 element at address: 0x20002b07dac0 with size: 0.500549 MiB 00:18:01.746 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:18:01.746 element at address: 0x20002ac75880 with size: 0.500549 MiB 00:18:01.746 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:18:01.746 element at address: 0x20002b87c440 with size: 0.250549 MiB 00:18:01.746 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:18:01.746 element at address: 0x200003a5de80 with size: 0.125549 MiB 00:18:01.746 associated memzone info: size: 0.125366 MiB name: RG_ring_2_58486 00:18:01.746 element at address: 0x20002acf5ac0 with size: 0.031799 MiB 00:18:01.746 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:18:01.746 element at address: 0x20003a264140 with size: 0.023804 MiB 00:18:01.746 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:18:01.746 element at address: 0x200003a59c40 with size: 0.016174 MiB 00:18:01.746 associated memzone info: size: 0.015991 MiB name: RG_ring_3_58486 00:18:01.746 element at address: 0x20003a26a2c0 with size: 0.002502 MiB 00:18:01.746 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:18:01.746 element at address: 0x2000002d5f80 with size: 0.000366 MiB 00:18:01.746 associated memzone info: size: 0.000183 MiB name: MP_msgpool_58486 00:18:01.746 element at address: 0x200003aff800 with size: 0.000366 MiB 00:18:01.746 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_58486 00:18:01.746 element at address: 0x200015dffd80 with size: 0.000366 MiB 00:18:01.746 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_58486 00:18:01.746 element at address: 0x20003a26ae00 with size: 0.000366 MiB 00:18:01.746 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:18:01.746 20:15:56 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:18:01.746 20:15:56 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 58486 00:18:01.746 20:15:56 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 58486 ']' 00:18:01.746 20:15:56 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 58486 00:18:01.746 20:15:56 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:18:01.746 20:15:56 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:01.746 20:15:56 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58486 00:18:01.746 20:15:56 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:01.746 20:15:56 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:01.746 20:15:56 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 
58486' 00:18:01.746 killing process with pid 58486 00:18:01.746 20:15:56 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 58486 00:18:01.746 20:15:56 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 58486 00:18:03.643 00:18:03.643 real 0m3.350s 00:18:03.643 user 0m3.249s 00:18:03.643 sys 0m0.467s 00:18:03.643 20:15:58 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:03.643 20:15:58 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:18:03.643 ************************************ 00:18:03.643 END TEST dpdk_mem_utility 00:18:03.643 ************************************ 00:18:03.643 20:15:58 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:18:03.643 20:15:58 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:18:03.643 20:15:58 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:03.643 20:15:58 -- common/autotest_common.sh@10 -- # set +x 00:18:03.643 ************************************ 00:18:03.643 START TEST event 00:18:03.643 ************************************ 00:18:03.643 20:15:58 event -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:18:03.643 * Looking for test storage... 00:18:03.643 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:18:03.643 20:15:58 event -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:18:03.643 20:15:58 event -- common/autotest_common.sh@1681 -- # lcov --version 00:18:03.643 20:15:58 event -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:18:03.643 20:15:58 event -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:18:03.643 20:15:58 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:03.643 20:15:58 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:03.643 20:15:58 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:03.643 20:15:58 event -- scripts/common.sh@336 -- # IFS=.-: 00:18:03.643 20:15:58 event -- scripts/common.sh@336 -- # read -ra ver1 00:18:03.643 20:15:58 event -- scripts/common.sh@337 -- # IFS=.-: 00:18:03.643 20:15:58 event -- scripts/common.sh@337 -- # read -ra ver2 00:18:03.643 20:15:58 event -- scripts/common.sh@338 -- # local 'op=<' 00:18:03.643 20:15:58 event -- scripts/common.sh@340 -- # ver1_l=2 00:18:03.643 20:15:58 event -- scripts/common.sh@341 -- # ver2_l=1 00:18:03.643 20:15:58 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:03.643 20:15:58 event -- scripts/common.sh@344 -- # case "$op" in 00:18:03.643 20:15:58 event -- scripts/common.sh@345 -- # : 1 00:18:03.643 20:15:58 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:03.643 20:15:58 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:03.643 20:15:58 event -- scripts/common.sh@365 -- # decimal 1 00:18:03.643 20:15:58 event -- scripts/common.sh@353 -- # local d=1 00:18:03.643 20:15:58 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:03.643 20:15:58 event -- scripts/common.sh@355 -- # echo 1 00:18:03.643 20:15:58 event -- scripts/common.sh@365 -- # ver1[v]=1 00:18:03.643 20:15:58 event -- scripts/common.sh@366 -- # decimal 2 00:18:03.643 20:15:58 event -- scripts/common.sh@353 -- # local d=2 00:18:03.643 20:15:58 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:03.643 20:15:58 event -- scripts/common.sh@355 -- # echo 2 00:18:03.643 20:15:58 event -- scripts/common.sh@366 -- # ver2[v]=2 00:18:03.643 20:15:58 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:03.643 20:15:58 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:03.643 20:15:58 event -- scripts/common.sh@368 -- # return 0 00:18:03.643 20:15:58 event -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:03.643 20:15:58 event -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:18:03.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:03.643 --rc genhtml_branch_coverage=1 00:18:03.643 --rc genhtml_function_coverage=1 00:18:03.643 --rc genhtml_legend=1 00:18:03.643 --rc geninfo_all_blocks=1 00:18:03.643 --rc geninfo_unexecuted_blocks=1 00:18:03.643 00:18:03.643 ' 00:18:03.643 20:15:58 event -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:18:03.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:03.643 --rc genhtml_branch_coverage=1 00:18:03.643 --rc genhtml_function_coverage=1 00:18:03.643 --rc genhtml_legend=1 00:18:03.643 --rc geninfo_all_blocks=1 00:18:03.643 --rc geninfo_unexecuted_blocks=1 00:18:03.643 00:18:03.643 ' 00:18:03.643 20:15:58 event -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:18:03.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:03.643 --rc genhtml_branch_coverage=1 00:18:03.643 --rc genhtml_function_coverage=1 00:18:03.643 --rc genhtml_legend=1 00:18:03.643 --rc geninfo_all_blocks=1 00:18:03.643 --rc geninfo_unexecuted_blocks=1 00:18:03.643 00:18:03.643 ' 00:18:03.643 20:15:58 event -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:18:03.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:03.643 --rc genhtml_branch_coverage=1 00:18:03.643 --rc genhtml_function_coverage=1 00:18:03.643 --rc genhtml_legend=1 00:18:03.643 --rc geninfo_all_blocks=1 00:18:03.643 --rc geninfo_unexecuted_blocks=1 00:18:03.643 00:18:03.643 ' 00:18:03.643 20:15:58 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:18:03.643 20:15:58 event -- bdev/nbd_common.sh@6 -- # set -e 00:18:03.643 20:15:58 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:18:03.643 20:15:58 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:18:03.643 20:15:58 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:03.643 20:15:58 event -- common/autotest_common.sh@10 -- # set +x 00:18:03.643 ************************************ 00:18:03.643 START TEST event_perf 00:18:03.643 ************************************ 00:18:03.643 20:15:58 event.event_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:18:03.643 Running I/O for 1 seconds...[2024-10-01 
20:15:58.828456] Starting SPDK v25.01-pre git sha1 0c2005fb5 / DPDK 24.03.0 initialization...
00:18:03.643 [2024-10-01 20:15:58.828657] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58594 ]
00:18:03.902 [2024-10-01 20:15:58.978787] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4
00:18:04.159 [2024-10-01 20:15:59.221106] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1
00:18:04.159 [2024-10-01 20:15:59.221347] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2
00:18:04.160 Running I/O for 1 seconds...[2024-10-01 20:15:59.221711] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:18:04.160 [2024-10-01 20:15:59.221724] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3
00:18:05.532
00:18:05.532 lcore 0: 203064
00:18:05.532 lcore 1: 203064
00:18:05.532 lcore 2: 203064
00:18:05.532 lcore 3: 203061
00:18:05.532 done.
00:18:05.532
00:18:05.532 real 0m1.712s
00:18:05.532 user 0m4.489s
00:18:05.532 sys 0m0.099s
00:18:05.532 20:16:00 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable
00:18:05.532 20:16:00 event.event_perf -- common/autotest_common.sh@10 -- # set +x
00:18:05.532 ************************************
00:18:05.532 END TEST event_perf
00:18:05.532 ************************************
00:18:05.532 20:16:00 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1
00:18:05.532 20:16:00 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']'
00:18:05.532 20:16:00 event -- common/autotest_common.sh@1107 -- # xtrace_disable
00:18:05.532 20:16:00 event -- common/autotest_common.sh@10 -- # set +x
00:18:05.532 ************************************
00:18:05.532 START TEST event_reactor
00:18:05.532 ************************************
00:18:05.532 20:16:00 event.event_reactor -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1
00:18:05.532 [2024-10-01 20:16:00.597308] Starting SPDK v25.01-pre git sha1 0c2005fb5 / DPDK 24.03.0 initialization...
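Before the event_reactor run below gets under way, the event_perf counters above are worth a quick sanity check: -m 0xF ran one reactor on each of cores 0-3 for the one second set by -t 1, and each lcore line is that reactor's event count. A small sketch of the aggregate arithmetic, using only the four counters captured above:

    # Sketch (not part of the captured log): sum the per-lcore counters from this run.
    printf '%s\n' 'lcore 0: 203064' 'lcore 1: 203064' 'lcore 2: 203064' 'lcore 3: 203061' |
      awk '{ sum += $NF }
           END { printf "total: %d events in 1s (avg %.0f per core)\n", sum, sum / NR }'
    # Prints: total: 812253 events in 1s (avg 203063 per core)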
00:18:05.532 [2024-10-01 20:16:00.597419] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58628 ]
00:18:05.789 [2024-10-01 20:16:00.746680] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:05.789 [2024-10-01 20:16:00.961294] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:18:07.170 test_start
00:18:07.170 oneshot
00:18:07.170 tick 100
00:18:07.170 tick 100
00:18:07.170 tick 250
00:18:07.170 tick 100
00:18:07.170 tick 100
00:18:07.170 tick 100
00:18:07.170 tick 250
00:18:07.171 tick 500
00:18:07.171 tick 100
00:18:07.171 tick 100
00:18:07.171 tick 250
00:18:07.171 tick 100
00:18:07.171 tick 100
00:18:07.171 test_end
00:18:07.171
00:18:07.171 real 0m1.674s
00:18:07.171 user 0m1.478s
00:18:07.171 sys 0m0.086s
00:18:07.171 ************************************
00:18:07.171 END TEST event_reactor
00:18:07.171 ************************************
00:18:07.171 20:16:02 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable
00:18:07.171 20:16:02 event.event_reactor -- common/autotest_common.sh@10 -- # set +x
00:18:07.171 20:16:02 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1
00:18:07.171 20:16:02 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']'
00:18:07.171 20:16:02 event -- common/autotest_common.sh@1107 -- # xtrace_disable
00:18:07.171 20:16:02 event -- common/autotest_common.sh@10 -- # set +x
00:18:07.171 ************************************
00:18:07.171 START TEST event_reactor_perf
00:18:07.171 ************************************
00:18:07.171 20:16:02 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1
00:18:07.171 [2024-10-01 20:16:02.310176] Starting SPDK v25.01-pre git sha1 0c2005fb5 / DPDK 24.03.0 initialization...
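The tick trace from the event_reactor run above shows a single reactor driving one oneshot poller plus periodic pollers whose firing counts over the one-second run scale roughly as 100:250:500 periods would (nine, three, and one firings respectively; the period units are the test binary's own and are not recorded in the log). Counting them from a captured run is a one-liner apiece; this sketch assumes the binary's stdout was saved to reactor.log and reuses the SPDK_DIR assumption from the earlier sketch:

    # Sketch (not part of the captured log): rerun the reactor test and count firings.
    "$SPDK_DIR/test/event/reactor/reactor" -t 1 > reactor.log
    grep -c 'tick 100' reactor.log    # 9 in the run above
    grep -c 'tick 250' reactor.log    # 3 in the run above
    grep -c 'tick 500' reactor.log    # 1 in the run above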
00:18:07.171 [2024-10-01 20:16:02.310448] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58670 ]
00:18:07.428 [2024-10-01 20:16:02.461854] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:07.694 [2024-10-01 20:16:02.674179] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:18:09.068 test_start
00:18:09.068 test_end
00:18:09.068 Performance: 312760 events per second
00:18:09.068 ************************************
00:18:09.068 END TEST event_reactor_perf
00:18:09.068 ************************************
00:18:09.068
00:18:09.068 real 0m1.666s
00:18:09.068 user 0m1.475s
00:18:09.068 sys 0m0.081s
00:18:09.068 20:16:03 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable
00:18:09.068 20:16:03 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x
00:18:09.068 20:16:03 event -- event/event.sh@49 -- # uname -s
00:18:09.068 20:16:03 event -- event/event.sh@49 -- # '[' Linux = Linux ']'
00:18:09.068 20:16:03 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh
00:18:09.068 20:16:03 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:18:09.068 20:16:03 event -- common/autotest_common.sh@1107 -- # xtrace_disable
00:18:09.068 20:16:03 event -- common/autotest_common.sh@10 -- # set +x
00:18:09.068 ************************************
00:18:09.068 START TEST event_scheduler
00:18:09.068 ************************************
00:18:09.068 20:16:03 event.event_scheduler -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh
00:18:09.068 * Looking for test storage...
00:18:09.068 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:18:09.068 20:16:04 event.event_scheduler -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:18:09.068 20:16:04 event.event_scheduler -- common/autotest_common.sh@1681 -- # lcov --version 00:18:09.068 20:16:04 event.event_scheduler -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:18:09.068 20:16:04 event.event_scheduler -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:18:09.068 20:16:04 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:09.068 20:16:04 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:09.068 20:16:04 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:09.068 20:16:04 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:18:09.068 20:16:04 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:18:09.068 20:16:04 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:18:09.068 20:16:04 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:18:09.068 20:16:04 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:18:09.068 20:16:04 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:18:09.069 20:16:04 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:18:09.069 20:16:04 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:09.069 20:16:04 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:18:09.069 20:16:04 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:18:09.069 20:16:04 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:09.069 20:16:04 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:09.069 20:16:04 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:18:09.069 20:16:04 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:18:09.069 20:16:04 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:09.069 20:16:04 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:18:09.069 20:16:04 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:18:09.069 20:16:04 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:18:09.069 20:16:04 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:18:09.069 20:16:04 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:09.069 20:16:04 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:18:09.069 20:16:04 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:18:09.069 20:16:04 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:09.069 20:16:04 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:09.069 20:16:04 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:18:09.069 20:16:04 event.event_scheduler -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:09.069 20:16:04 event.event_scheduler -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:18:09.069 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:09.069 --rc genhtml_branch_coverage=1 00:18:09.069 --rc genhtml_function_coverage=1 00:18:09.069 --rc genhtml_legend=1 00:18:09.069 --rc geninfo_all_blocks=1 00:18:09.069 --rc geninfo_unexecuted_blocks=1 00:18:09.069 00:18:09.069 ' 00:18:09.069 20:16:04 event.event_scheduler -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:18:09.069 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:09.069 --rc genhtml_branch_coverage=1 00:18:09.069 --rc genhtml_function_coverage=1 00:18:09.069 --rc genhtml_legend=1 00:18:09.069 --rc geninfo_all_blocks=1 00:18:09.069 --rc geninfo_unexecuted_blocks=1 00:18:09.069 00:18:09.069 ' 00:18:09.069 20:16:04 event.event_scheduler -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:18:09.069 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:09.069 --rc genhtml_branch_coverage=1 00:18:09.069 --rc genhtml_function_coverage=1 00:18:09.069 --rc genhtml_legend=1 00:18:09.069 --rc geninfo_all_blocks=1 00:18:09.069 --rc geninfo_unexecuted_blocks=1 00:18:09.069 00:18:09.069 ' 00:18:09.069 20:16:04 event.event_scheduler -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:18:09.069 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:09.069 --rc genhtml_branch_coverage=1 00:18:09.069 --rc genhtml_function_coverage=1 00:18:09.069 --rc genhtml_legend=1 00:18:09.069 --rc geninfo_all_blocks=1 00:18:09.069 --rc geninfo_unexecuted_blocks=1 00:18:09.069 00:18:09.069 ' 00:18:09.069 20:16:04 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:18:09.069 20:16:04 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58746 00:18:09.069 20:16:04 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:18:09.069 20:16:04 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:18:09.069 20:16:04 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58746 00:18:09.069 20:16:04 
event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 58746 ']' 00:18:09.069 20:16:04 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:09.069 20:16:04 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:09.069 20:16:04 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:09.069 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:09.069 20:16:04 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:09.069 20:16:04 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:18:09.069 [2024-10-01 20:16:04.194756] Starting SPDK v25.01-pre git sha1 0c2005fb5 / DPDK 24.03.0 initialization... 00:18:09.069 [2024-10-01 20:16:04.195024] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58746 ] 00:18:09.327 [2024-10-01 20:16:04.341794] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:09.585 [2024-10-01 20:16:04.554162] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:18:09.585 [2024-10-01 20:16:04.554430] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:18:09.585 [2024-10-01 20:16:04.554675] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:18:09.585 [2024-10-01 20:16:04.554690] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:18:10.151 20:16:05 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:10.151 20:16:05 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:18:10.151 20:16:05 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:18:10.151 20:16:05 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.151 20:16:05 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:18:10.151 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:18:10.151 POWER: Cannot set governor of lcore 0 to userspace 00:18:10.151 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:18:10.151 POWER: Cannot set governor of lcore 0 to performance 00:18:10.151 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:18:10.151 POWER: Cannot set governor of lcore 0 to userspace 00:18:10.151 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:18:10.151 POWER: Cannot set governor of lcore 0 to userspace 00:18:10.151 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:18:10.151 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:18:10.151 POWER: Unable to set Power Management Environment for lcore 0 00:18:10.151 [2024-10-01 20:16:05.111929] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:18:10.151 [2024-10-01 20:16:05.111948] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:18:10.151 [2024-10-01 20:16:05.111957] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:18:10.151 [2024-10-01 20:16:05.111972] 
scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:18:10.151 [2024-10-01 20:16:05.111979] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:18:10.151 [2024-10-01 20:16:05.111988] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:18:10.151 20:16:05 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.151 20:16:05 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:18:10.151 20:16:05 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.151 20:16:05 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:18:10.409 [2024-10-01 20:16:05.519371] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:18:10.409 20:16:05 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.409 20:16:05 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:18:10.409 20:16:05 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:18:10.409 20:16:05 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:10.409 20:16:05 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:18:10.409 ************************************ 00:18:10.409 START TEST scheduler_create_thread 00:18:10.409 ************************************ 00:18:10.409 20:16:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:18:10.409 20:16:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:18:10.409 20:16:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.409 20:16:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:18:10.409 2 00:18:10.409 20:16:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.409 20:16:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:18:10.409 20:16:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.409 20:16:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:18:10.409 3 00:18:10.409 20:16:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.409 20:16:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:18:10.409 20:16:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.409 20:16:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:18:10.409 4 00:18:10.409 20:16:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.409 20:16:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:18:10.409 20:16:05 event.event_scheduler.scheduler_create_thread 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.409 20:16:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:18:10.409 5 00:18:10.409 20:16:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.409 20:16:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:18:10.409 20:16:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.409 20:16:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:18:10.409 6 00:18:10.409 20:16:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.409 20:16:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:18:10.409 20:16:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.409 20:16:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:18:10.409 7 00:18:10.409 20:16:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.409 20:16:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:18:10.409 20:16:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.409 20:16:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:18:10.409 8 00:18:10.409 20:16:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.409 20:16:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:18:10.409 20:16:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.409 20:16:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:18:10.409 9 00:18:10.409 20:16:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.409 20:16:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:18:10.409 20:16:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.409 20:16:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:18:10.409 10 00:18:10.409 20:16:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.409 20:16:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:18:10.409 20:16:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.409 20:16:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:18:10.409 20:16:05 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.409 20:16:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:18:10.409 20:16:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:18:10.409 20:16:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.409 20:16:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:18:10.666 20:16:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.666 20:16:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:18:10.666 20:16:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.666 20:16:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:18:10.666 20:16:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.666 20:16:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:18:10.666 20:16:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:18:10.666 20:16:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.666 20:16:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:18:10.666 ************************************ 00:18:10.666 END TEST scheduler_create_thread 00:18:10.666 ************************************ 00:18:10.666 20:16:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.666 00:18:10.666 real 0m0.106s 00:18:10.666 user 0m0.015s 00:18:10.666 sys 0m0.003s 00:18:10.666 20:16:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:10.666 20:16:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:18:10.667 20:16:05 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:18:10.667 20:16:05 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58746 00:18:10.667 20:16:05 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 58746 ']' 00:18:10.667 20:16:05 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 58746 00:18:10.667 20:16:05 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:18:10.667 20:16:05 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:10.667 20:16:05 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58746 00:18:10.667 killing process with pid 58746 00:18:10.667 20:16:05 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:18:10.667 20:16:05 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:18:10.667 20:16:05 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58746' 00:18:10.667 20:16:05 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 58746 00:18:10.667 20:16:05 event.event_scheduler -- 
common/autotest_common.sh@974 -- # wait 58746 00:18:10.924 [2024-10-01 20:16:06.120344] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:18:12.300 00:18:12.300 real 0m3.334s 00:18:12.300 user 0m6.922s 00:18:12.300 sys 0m0.395s 00:18:12.300 20:16:07 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:12.300 ************************************ 00:18:12.300 END TEST event_scheduler 00:18:12.300 ************************************ 00:18:12.300 20:16:07 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:18:12.300 20:16:07 event -- event/event.sh@51 -- # modprobe -n nbd 00:18:12.300 20:16:07 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:18:12.300 20:16:07 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:18:12.300 20:16:07 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:12.300 20:16:07 event -- common/autotest_common.sh@10 -- # set +x 00:18:12.300 ************************************ 00:18:12.300 START TEST app_repeat 00:18:12.300 ************************************ 00:18:12.301 20:16:07 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:18:12.301 20:16:07 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:12.301 20:16:07 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:12.301 20:16:07 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:18:12.301 20:16:07 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:18:12.301 20:16:07 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:18:12.301 20:16:07 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:18:12.301 20:16:07 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:18:12.301 Process app_repeat pid: 58830 00:18:12.301 spdk_app_start Round 0 00:18:12.301 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:18:12.301 20:16:07 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58830 00:18:12.301 20:16:07 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:18:12.301 20:16:07 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58830' 00:18:12.301 20:16:07 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:18:12.301 20:16:07 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:18:12.301 20:16:07 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58830 /var/tmp/spdk-nbd.sock 00:18:12.301 20:16:07 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:18:12.301 20:16:07 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 58830 ']' 00:18:12.301 20:16:07 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:18:12.301 20:16:07 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:12.301 20:16:07 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:18:12.301 20:16:07 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:12.301 20:16:07 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:18:12.301 [2024-10-01 20:16:07.421616] Starting SPDK v25.01-pre git sha1 0c2005fb5 / DPDK 24.03.0 initialization... 
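The app_repeat run that starts here reuses the launch-and-teardown harness just traced for the scheduler test: start the app in the background, trap cleanup on exit, wait for its RPC socket, and kill the process at the end. A condensed sketch follows, with the socket wait reduced to a file test; the real waitforlisten in SPDK's test/common/autotest_common.sh does more than this, so treat it as an approximation.

    /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat \
        -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 &
    repeat_pid=$!
    trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        while ((max_retries-- > 0)); do
            kill -0 "$pid" 2>/dev/null || return 1  # app died during startup
            [[ -S $rpc_addr ]] && return 0          # socket is listening
            sleep 0.1
        done
        return 1
    }

    killprocess() {
        local pid=$1
        kill -0 "$pid" 2>/dev/null || return 0  # already gone
        # as traced, refuse to kill a process whose comm is "sudo"
        [[ $(ps --no-headers -o comm= "$pid") != sudo ]] || return 1
        echo "killing process with pid $pid"
        kill "$pid" && wait "$pid"
    }

    waitforlisten "$repeat_pid" /var/tmp/spdk-nbd.sock
    # ... rounds 0..2 run here ...
    trap - SIGINT SIGTERM EXIT
    killprocess "$repeat_pid"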
00:18:12.301 [2024-10-01 20:16:07.421778] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58830 ] 00:18:12.559 [2024-10-01 20:16:07.575113] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:12.816 [2024-10-01 20:16:07.781568] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:18:12.816 [2024-10-01 20:16:07.781587] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:18:13.177 20:16:08 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:13.177 20:16:08 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:18:13.177 20:16:08 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:18:13.437 Malloc0 00:18:13.437 20:16:08 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:18:13.696 Malloc1 00:18:13.696 20:16:08 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:18:13.696 20:16:08 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:13.696 20:16:08 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:18:13.696 20:16:08 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:18:13.696 20:16:08 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:13.696 20:16:08 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:18:13.696 20:16:08 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:18:13.696 20:16:08 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:13.696 20:16:08 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:18:13.696 20:16:08 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:13.696 20:16:08 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:13.696 20:16:08 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:13.696 20:16:08 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:18:13.696 20:16:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:13.696 20:16:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:13.696 20:16:08 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:18:13.955 /dev/nbd0 00:18:13.955 20:16:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:13.955 20:16:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:13.955 20:16:08 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:18:13.955 20:16:08 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:18:13.955 20:16:08 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:18:13.955 20:16:08 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:18:13.955 20:16:08 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:18:13.955 20:16:08 event.app_repeat -- 
common/autotest_common.sh@873 -- # break 00:18:13.955 20:16:08 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:18:13.955 20:16:08 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:18:13.955 20:16:08 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:18:13.955 1+0 records in 00:18:13.955 1+0 records out 00:18:13.955 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000252358 s, 16.2 MB/s 00:18:13.955 20:16:08 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:18:13.955 20:16:08 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:18:13.955 20:16:08 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:18:13.955 20:16:08 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:18:13.955 20:16:08 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:18:13.955 20:16:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:13.955 20:16:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:13.955 20:16:08 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:18:13.955 /dev/nbd1 00:18:14.214 20:16:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:18:14.214 20:16:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:18:14.214 20:16:09 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:18:14.214 20:16:09 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:18:14.214 20:16:09 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:18:14.214 20:16:09 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:18:14.214 20:16:09 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:18:14.214 20:16:09 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:18:14.214 20:16:09 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:18:14.214 20:16:09 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:18:14.214 20:16:09 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:18:14.214 1+0 records in 00:18:14.214 1+0 records out 00:18:14.214 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000285155 s, 14.4 MB/s 00:18:14.214 20:16:09 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:18:14.214 20:16:09 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:18:14.214 20:16:09 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:18:14.214 20:16:09 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:18:14.214 20:16:09 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:18:14.214 20:16:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:14.214 20:16:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:14.214 20:16:09 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:18:14.214 20:16:09 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 
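The waitfornbd checks traced above treat an NBD device as ready once it appears in /proc/partitions and one 4 KiB block can be read back with direct I/O. A condensed sketch; the retry sleep is an assumption (the trace only shows the checks), and the real helper also retries the read.

    waitfornbd() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1  # cadence assumed
        done
        ((i <= 20)) || return 1

        local tmp=/home/vagrant/spdk_repo/spdk/test/event/nbdtest
        dd if="/dev/$nbd_name" of="$tmp" bs=4096 count=1 iflag=direct || return 1
        local size
        size=$(stat -c %s "$tmp")
        rm -f "$tmp"
        [[ $size != 0 ]]  # a zero-byte read means the device is not usable yet
    }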
00:18:14.214 20:16:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:18:14.214 20:16:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:18:14.214 { 00:18:14.214 "nbd_device": "/dev/nbd0", 00:18:14.214 "bdev_name": "Malloc0" 00:18:14.214 }, 00:18:14.214 { 00:18:14.214 "nbd_device": "/dev/nbd1", 00:18:14.214 "bdev_name": "Malloc1" 00:18:14.214 } 00:18:14.214 ]' 00:18:14.214 20:16:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:18:14.214 { 00:18:14.214 "nbd_device": "/dev/nbd0", 00:18:14.214 "bdev_name": "Malloc0" 00:18:14.214 }, 00:18:14.214 { 00:18:14.214 "nbd_device": "/dev/nbd1", 00:18:14.214 "bdev_name": "Malloc1" 00:18:14.214 } 00:18:14.214 ]' 00:18:14.214 20:16:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:18:14.472 20:16:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:18:14.472 /dev/nbd1' 00:18:14.472 20:16:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:18:14.472 /dev/nbd1' 00:18:14.472 20:16:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:18:14.472 20:16:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:18:14.472 20:16:09 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:18:14.472 20:16:09 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:18:14.472 20:16:09 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:18:14.472 20:16:09 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:18:14.473 20:16:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:14.473 20:16:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:18:14.473 20:16:09 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:18:14.473 20:16:09 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:18:14.473 20:16:09 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:18:14.473 20:16:09 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:18:14.473 256+0 records in 00:18:14.473 256+0 records out 00:18:14.473 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0103079 s, 102 MB/s 00:18:14.473 20:16:09 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:18:14.473 20:16:09 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:18:14.473 256+0 records in 00:18:14.473 256+0 records out 00:18:14.473 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0209453 s, 50.1 MB/s 00:18:14.473 20:16:09 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:18:14.473 20:16:09 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:18:14.473 256+0 records in 00:18:14.473 256+0 records out 00:18:14.473 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0183102 s, 57.3 MB/s 00:18:14.473 20:16:09 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:18:14.473 20:16:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:14.473 20:16:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:18:14.473 20:16:09 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:18:14.473 20:16:09 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:18:14.473 20:16:09 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:18:14.473 20:16:09 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:18:14.473 20:16:09 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:18:14.473 20:16:09 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:18:14.473 20:16:09 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:18:14.473 20:16:09 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:18:14.473 20:16:09 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:18:14.473 20:16:09 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:18:14.473 20:16:09 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:14.473 20:16:09 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:14.473 20:16:09 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:14.473 20:16:09 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:18:14.473 20:16:09 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:14.473 20:16:09 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:18:14.731 20:16:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:14.731 20:16:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:14.731 20:16:09 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:14.731 20:16:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:14.731 20:16:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:14.731 20:16:09 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:14.731 20:16:09 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:18:14.731 20:16:09 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:18:14.731 20:16:09 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:14.731 20:16:09 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:18:14.731 20:16:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:14.731 20:16:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:14.731 20:16:09 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:18:14.731 20:16:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:14.731 20:16:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:14.731 20:16:09 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:14.731 20:16:09 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:18:14.731 20:16:09 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:18:14.731 20:16:09 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:18:14.731 20:16:09 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:14.731 20:16:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:18:14.989 20:16:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:18:14.989 20:16:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:18:14.989 20:16:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:18:15.247 20:16:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:18:15.247 20:16:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:18:15.247 20:16:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:18:15.247 20:16:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:18:15.247 20:16:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:18:15.247 20:16:10 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:18:15.247 20:16:10 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:18:15.247 20:16:10 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:18:15.247 20:16:10 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:18:15.247 20:16:10 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:18:15.505 20:16:10 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:18:16.911 [2024-10-01 20:16:11.814381] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:16.911 [2024-10-01 20:16:11.999742] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:18:16.911 [2024-10-01 20:16:11.999751] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:18:16.911 [2024-10-01 20:16:12.110433] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:18:16.911 [2024-10-01 20:16:12.110513] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:18:18.810 spdk_app_start Round 1 00:18:18.810 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:18:18.810 20:16:13 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:18:18.810 20:16:13 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:18:18.810 20:16:13 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58830 /var/tmp/spdk-nbd.sock 00:18:18.810 20:16:13 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 58830 ']' 00:18:18.810 20:16:13 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:18:18.810 20:16:13 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:18.810 20:16:13 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
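Round 1 is about to repeat the same per-round setup; the RPC sequence it traces is equivalent to the following, with the rpc.py path and socket taken from the trace:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock

    # two 64 MB malloc bdevs with 4096-byte blocks, exported over NBD
    "$rpc" -s "$sock" bdev_malloc_create 64 4096        # prints: Malloc0
    "$rpc" -s "$sock" bdev_malloc_create 64 4096        # prints: Malloc1
    "$rpc" -s "$sock" nbd_start_disk Malloc0 /dev/nbd0  # prints: /dev/nbd0
    "$rpc" -s "$sock" nbd_start_disk Malloc1 /dev/nbd1  # prints: /dev/nbd1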
00:18:18.810 20:16:13 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:18.810 20:16:13 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:18:18.810 20:16:13 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:18.810 20:16:13 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:18:18.810 20:16:13 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:18:18.810 Malloc0 00:18:18.810 20:16:13 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:18:19.068 Malloc1 00:18:19.068 20:16:14 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:18:19.068 20:16:14 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:19.068 20:16:14 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:18:19.068 20:16:14 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:18:19.068 20:16:14 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:19.068 20:16:14 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:18:19.068 20:16:14 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:18:19.068 20:16:14 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:19.068 20:16:14 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:18:19.068 20:16:14 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:19.068 20:16:14 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:19.068 20:16:14 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:19.068 20:16:14 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:18:19.068 20:16:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:19.068 20:16:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:19.068 20:16:14 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:18:19.326 /dev/nbd0 00:18:19.326 20:16:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:19.326 20:16:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:19.326 20:16:14 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:18:19.326 20:16:14 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:18:19.326 20:16:14 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:18:19.326 20:16:14 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:18:19.326 20:16:14 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:18:19.326 20:16:14 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:18:19.326 20:16:14 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:18:19.326 20:16:14 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:18:19.326 20:16:14 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:18:19.326 1+0 records in 00:18:19.326 1+0 records out 
00:18:19.326 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000225541 s, 18.2 MB/s 00:18:19.326 20:16:14 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:18:19.326 20:16:14 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:18:19.326 20:16:14 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:18:19.326 20:16:14 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:18:19.326 20:16:14 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:18:19.326 20:16:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:19.326 20:16:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:19.326 20:16:14 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:18:19.585 /dev/nbd1 00:18:19.585 20:16:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:18:19.585 20:16:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:18:19.585 20:16:14 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:18:19.585 20:16:14 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:18:19.585 20:16:14 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:18:19.585 20:16:14 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:18:19.585 20:16:14 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:18:19.585 20:16:14 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:18:19.585 20:16:14 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:18:19.585 20:16:14 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:18:19.585 20:16:14 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:18:19.585 1+0 records in 00:18:19.585 1+0 records out 00:18:19.585 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000200628 s, 20.4 MB/s 00:18:19.585 20:16:14 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:18:19.585 20:16:14 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:18:19.585 20:16:14 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:18:19.585 20:16:14 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:18:19.585 20:16:14 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:18:19.585 20:16:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:19.585 20:16:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:19.585 20:16:14 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:18:19.585 20:16:14 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:19.585 20:16:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:18:19.844 20:16:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:18:19.844 { 00:18:19.844 "nbd_device": "/dev/nbd0", 00:18:19.844 "bdev_name": "Malloc0" 00:18:19.844 }, 00:18:19.844 { 00:18:19.844 "nbd_device": "/dev/nbd1", 00:18:19.844 "bdev_name": "Malloc1" 00:18:19.844 } 
00:18:19.844 ]' 00:18:19.844 20:16:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:18:19.844 20:16:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:18:19.844 { 00:18:19.844 "nbd_device": "/dev/nbd0", 00:18:19.844 "bdev_name": "Malloc0" 00:18:19.844 }, 00:18:19.844 { 00:18:19.844 "nbd_device": "/dev/nbd1", 00:18:19.844 "bdev_name": "Malloc1" 00:18:19.844 } 00:18:19.844 ]' 00:18:19.844 20:16:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:18:19.844 /dev/nbd1' 00:18:19.844 20:16:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:18:19.844 20:16:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:18:19.844 /dev/nbd1' 00:18:19.844 20:16:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:18:19.844 20:16:14 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:18:19.844 20:16:14 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:18:19.844 20:16:14 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:18:19.844 20:16:14 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:18:19.844 20:16:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:19.844 20:16:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:18:19.844 20:16:14 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:18:19.844 20:16:14 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:18:19.844 20:16:14 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:18:19.844 20:16:14 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:18:19.844 256+0 records in 00:18:19.844 256+0 records out 00:18:19.844 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00958161 s, 109 MB/s 00:18:19.844 20:16:14 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:18:19.844 20:16:14 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:18:19.844 256+0 records in 00:18:19.844 256+0 records out 00:18:19.844 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0150117 s, 69.9 MB/s 00:18:19.844 20:16:14 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:18:19.844 20:16:14 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:18:19.844 256+0 records in 00:18:19.844 256+0 records out 00:18:19.844 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0169833 s, 61.7 MB/s 00:18:19.844 20:16:14 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:18:19.844 20:16:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:19.844 20:16:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:18:19.844 20:16:14 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:18:19.844 20:16:14 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:18:19.844 20:16:14 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:18:19.844 20:16:14 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:18:19.844 20:16:14 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:18:19.844 20:16:14 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:18:19.844 20:16:14 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:18:19.844 20:16:14 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:18:19.844 20:16:14 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:18:19.844 20:16:14 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:18:19.844 20:16:14 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:19.844 20:16:14 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:19.844 20:16:14 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:19.844 20:16:14 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:18:19.844 20:16:14 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:19.844 20:16:14 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:18:20.103 20:16:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:20.103 20:16:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:20.103 20:16:15 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:20.103 20:16:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:20.103 20:16:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:20.103 20:16:15 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:20.103 20:16:15 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:18:20.103 20:16:15 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:18:20.103 20:16:15 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:20.103 20:16:15 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:18:20.103 20:16:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:20.103 20:16:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:20.103 20:16:15 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:18:20.103 20:16:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:20.103 20:16:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:20.103 20:16:15 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:20.362 20:16:15 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:18:20.362 20:16:15 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:18:20.362 20:16:15 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:18:20.362 20:16:15 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:20.362 20:16:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:18:20.362 20:16:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:18:20.362 20:16:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:18:20.362 20:16:15 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:18:20.362 20:16:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:18:20.362 20:16:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:18:20.362 20:16:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:18:20.362 20:16:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:18:20.362 20:16:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:18:20.362 20:16:15 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:18:20.362 20:16:15 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:18:20.362 20:16:15 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:18:20.362 20:16:15 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:18:20.362 20:16:15 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:18:20.932 20:16:15 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:18:22.314 [2024-10-01 20:16:17.139368] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:22.314 [2024-10-01 20:16:17.344773] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:18:22.314 [2024-10-01 20:16:17.344789] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:18:22.314 [2024-10-01 20:16:17.474941] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:18:22.314 [2024-10-01 20:16:17.474999] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:18:23.686 20:16:18 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:18:23.686 spdk_app_start Round 2 00:18:23.686 20:16:18 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:18:23.686 20:16:18 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58830 /var/tmp/spdk-nbd.sock 00:18:23.686 20:16:18 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 58830 ']' 00:18:23.686 20:16:18 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:18:23.686 20:16:18 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:23.686 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:18:23.686 20:16:18 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
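Each round's data-verify pass, traced above for rounds 0 and 1 and about to repeat in round 2, writes 1 MiB of random data through each NBD device and compares it back against the source file. The traced dd/cmp flow as a standalone sketch (paths and sizes are taken from the trace):

    tmp=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
    nbd_list=(/dev/nbd0 /dev/nbd1)

    # write: 256 x 4096-byte blocks = 1 MiB of random data per device
    dd if=/dev/urandom of="$tmp" bs=4096 count=256
    for dev in "${nbd_list[@]}"; do
        dd if="$tmp" of="$dev" bs=4096 count=256 oflag=direct
    done

    # verify: the first 1 MiB of each device must match byte for byte
    for dev in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp" "$dev"
    done
    rm "$tmp"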
00:18:23.686 20:16:18 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:23.686 20:16:18 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:18:23.944 20:16:19 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:23.944 20:16:19 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:18:23.944 20:16:19 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:18:24.201 Malloc0 00:18:24.201 20:16:19 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:18:24.459 Malloc1 00:18:24.459 20:16:19 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:18:24.459 20:16:19 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:24.459 20:16:19 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:18:24.459 20:16:19 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:18:24.459 20:16:19 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:24.459 20:16:19 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:18:24.459 20:16:19 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:18:24.459 20:16:19 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:24.459 20:16:19 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:18:24.459 20:16:19 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:24.459 20:16:19 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:24.459 20:16:19 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:24.459 20:16:19 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:18:24.459 20:16:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:24.459 20:16:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:24.459 20:16:19 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:18:24.717 /dev/nbd0 00:18:24.717 20:16:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:24.717 20:16:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:24.717 20:16:19 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:18:24.717 20:16:19 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:18:24.717 20:16:19 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:18:24.717 20:16:19 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:18:24.717 20:16:19 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:18:24.717 20:16:19 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:18:24.717 20:16:19 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:18:24.717 20:16:19 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:18:24.717 20:16:19 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:18:24.717 1+0 records in 00:18:24.717 1+0 records out 
00:18:24.717 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000425837 s, 9.6 MB/s 00:18:24.717 20:16:19 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:18:24.717 20:16:19 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:18:24.717 20:16:19 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:18:24.717 20:16:19 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:18:24.717 20:16:19 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:18:24.717 20:16:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:24.717 20:16:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:24.717 20:16:19 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:18:24.974 /dev/nbd1 00:18:24.974 20:16:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:18:24.974 20:16:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:18:24.974 20:16:19 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:18:24.974 20:16:19 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:18:24.974 20:16:19 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:18:24.974 20:16:19 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:18:24.974 20:16:19 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:18:24.974 20:16:19 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:18:24.974 20:16:19 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:18:24.974 20:16:19 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:18:24.974 20:16:19 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:18:24.974 1+0 records in 00:18:24.974 1+0 records out 00:18:24.974 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0002252 s, 18.2 MB/s 00:18:24.974 20:16:19 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:18:24.974 20:16:19 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:18:24.974 20:16:19 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:18:24.974 20:16:19 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:18:24.974 20:16:19 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:18:24.974 20:16:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:24.974 20:16:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:24.974 20:16:19 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:18:24.974 20:16:19 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:24.974 20:16:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:18:25.232 20:16:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:18:25.232 { 00:18:25.232 "nbd_device": "/dev/nbd0", 00:18:25.232 "bdev_name": "Malloc0" 00:18:25.232 }, 00:18:25.232 { 00:18:25.232 "nbd_device": "/dev/nbd1", 00:18:25.232 "bdev_name": "Malloc1" 00:18:25.232 } 00:18:25.232 
]' 00:18:25.232 20:16:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:18:25.232 20:16:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:18:25.232 { 00:18:25.232 "nbd_device": "/dev/nbd0", 00:18:25.232 "bdev_name": "Malloc0" 00:18:25.232 }, 00:18:25.232 { 00:18:25.232 "nbd_device": "/dev/nbd1", 00:18:25.232 "bdev_name": "Malloc1" 00:18:25.232 } 00:18:25.232 ]' 00:18:25.232 20:16:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:18:25.232 /dev/nbd1' 00:18:25.232 20:16:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:18:25.232 /dev/nbd1' 00:18:25.232 20:16:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:18:25.232 20:16:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:18:25.232 20:16:20 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:18:25.232 20:16:20 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:18:25.232 20:16:20 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:18:25.232 20:16:20 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:18:25.232 20:16:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:25.232 20:16:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:18:25.232 20:16:20 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:18:25.232 20:16:20 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:18:25.232 20:16:20 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:18:25.232 20:16:20 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:18:25.232 256+0 records in 00:18:25.232 256+0 records out 00:18:25.232 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00984706 s, 106 MB/s 00:18:25.232 20:16:20 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:18:25.232 20:16:20 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:18:25.232 256+0 records in 00:18:25.232 256+0 records out 00:18:25.232 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0187766 s, 55.8 MB/s 00:18:25.232 20:16:20 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:18:25.232 20:16:20 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:18:25.232 256+0 records in 00:18:25.232 256+0 records out 00:18:25.232 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0182739 s, 57.4 MB/s 00:18:25.232 20:16:20 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:18:25.232 20:16:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:25.232 20:16:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:18:25.232 20:16:20 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:18:25.232 20:16:20 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:18:25.232 20:16:20 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:18:25.232 20:16:20 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:18:25.232 20:16:20 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i 
in "${nbd_list[@]}" 00:18:25.232 20:16:20 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:18:25.232 20:16:20 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:18:25.232 20:16:20 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:18:25.232 20:16:20 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:18:25.232 20:16:20 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:18:25.232 20:16:20 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:25.232 20:16:20 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:25.232 20:16:20 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:25.232 20:16:20 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:18:25.232 20:16:20 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:25.232 20:16:20 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:18:25.490 20:16:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:25.490 20:16:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:25.490 20:16:20 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:25.490 20:16:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:25.490 20:16:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:25.490 20:16:20 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:25.490 20:16:20 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:18:25.490 20:16:20 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:18:25.490 20:16:20 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:25.490 20:16:20 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:18:25.490 20:16:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:25.490 20:16:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:25.490 20:16:20 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:18:25.490 20:16:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:25.490 20:16:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:25.490 20:16:20 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:25.490 20:16:20 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:18:25.490 20:16:20 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:18:25.490 20:16:20 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:18:25.490 20:16:20 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:25.490 20:16:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:18:25.748 20:16:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:18:25.748 20:16:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:18:25.748 20:16:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 
00:18:25.748 20:16:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:18:25.748 20:16:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:18:25.748 20:16:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:18:26.005 20:16:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:18:26.005 20:16:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:18:26.005 20:16:20 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:18:26.005 20:16:20 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:18:26.005 20:16:20 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:18:26.005 20:16:20 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:18:26.005 20:16:20 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:18:26.262 20:16:21 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:18:27.193 [2024-10-01 20:16:22.283456] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:27.450 [2024-10-01 20:16:22.459987] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:18:27.450 [2024-10-01 20:16:22.460154] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:18:27.450 [2024-10-01 20:16:22.572288] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:18:27.450 [2024-10-01 20:16:22.572339] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:18:29.350 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:18:29.350 20:16:24 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58830 /var/tmp/spdk-nbd.sock 00:18:29.350 20:16:24 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 58830 ']' 00:18:29.350 20:16:24 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:18:29.350 20:16:24 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:29.350 20:16:24 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
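
The nbd_get_count calls above (one while the two Malloc devices are exported, one after nbd_stop_disk has removed them) reduce to an RPC query plus a jq/grep count; note the trace's `true` fallback, because grep -c exits nonzero when it counts zero matches. A sketch of that counting step, with paths as in this run:

    #!/usr/bin/env bash
    # Count NBD devices exported by a running SPDK target, mirroring
    # nbd_common.sh's nbd_get_count helper.
    set -uo pipefail

    rpc_server=/var/tmp/spdk-nbd.sock
    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    nbd_disks_json=$("$rpc_py" -s "$rpc_server" nbd_get_disks)
    nbd_disks_name=$(jq -r '.[] | .nbd_device' <<< "$nbd_disks_json")

    # grep -c exits 1 when it counts zero matches, hence the || true.
    count=$(grep -c /dev/nbd <<< "$nbd_disks_name" || true)
    echo "active nbd devices: $count"
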
00:18:29.350 20:16:24 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:29.350 20:16:24 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:18:29.350 20:16:24 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:29.350 20:16:24 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:18:29.350 20:16:24 event.app_repeat -- event/event.sh@39 -- # killprocess 58830 00:18:29.350 20:16:24 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 58830 ']' 00:18:29.350 20:16:24 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 58830 00:18:29.350 20:16:24 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:18:29.350 20:16:24 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:29.350 20:16:24 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58830 00:18:29.350 killing process with pid 58830 00:18:29.350 20:16:24 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:29.350 20:16:24 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:29.350 20:16:24 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58830' 00:18:29.350 20:16:24 event.app_repeat -- common/autotest_common.sh@969 -- # kill 58830 00:18:29.350 20:16:24 event.app_repeat -- common/autotest_common.sh@974 -- # wait 58830 00:18:30.285 spdk_app_start is called in Round 0. 00:18:30.285 Shutdown signal received, stop current app iteration 00:18:30.285 Starting SPDK v25.01-pre git sha1 0c2005fb5 / DPDK 24.03.0 reinitialization... 00:18:30.285 spdk_app_start is called in Round 1. 00:18:30.285 Shutdown signal received, stop current app iteration 00:18:30.285 Starting SPDK v25.01-pre git sha1 0c2005fb5 / DPDK 24.03.0 reinitialization... 00:18:30.285 spdk_app_start is called in Round 2. 00:18:30.285 Shutdown signal received, stop current app iteration 00:18:30.285 Starting SPDK v25.01-pre git sha1 0c2005fb5 / DPDK 24.03.0 reinitialization... 00:18:30.285 spdk_app_start is called in Round 3. 00:18:30.285 Shutdown signal received, stop current app iteration 00:18:30.285 20:16:25 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:18:30.285 20:16:25 event.app_repeat -- event/event.sh@42 -- # return 0 00:18:30.285 00:18:30.285 real 0m18.105s 00:18:30.285 user 0m38.285s 00:18:30.285 sys 0m2.317s 00:18:30.285 20:16:25 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:30.285 ************************************ 00:18:30.285 END TEST app_repeat 00:18:30.285 ************************************ 00:18:30.285 20:16:25 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:18:30.542 20:16:25 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:18:30.542 20:16:25 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:18:30.542 20:16:25 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:18:30.542 20:16:25 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:30.542 20:16:25 event -- common/autotest_common.sh@10 -- # set +x 00:18:30.542 ************************************ 00:18:30.542 START TEST cpu_locks 00:18:30.542 ************************************ 00:18:30.542 20:16:25 event.cpu_locks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:18:30.542 * Looking for test storage... 
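
The killprocess helper traced above (here reaping the app_repeat target, pid 58830) guards every kill: kill -0 probes that the pid still exists, ps resolves the command name so the suite never signals something like sudo by mistake, and wait reaps the child. A condensed sketch of that guard chain, keeping only the Linux branch seen in the trace:

    # Condensed form of autotest_common.sh's killprocess guard chain.
    # Works for child processes of the calling shell (wait requires that).
    killprocess() {
        local pid=$1 process_name

        kill -0 "$pid" || return 1                       # probe only; sends no signal
        process_name=$(ps --no-headers -o comm= "$pid")  # Linux branch from the trace
        [[ $process_name != sudo ]] || return 1          # never signal sudo itself

        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true                              # reap; SIGTERM exit is expected
    }
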
00:18:30.542 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:18:30.542 20:16:25 event.cpu_locks -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:18:30.542 20:16:25 event.cpu_locks -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:18:30.542 20:16:25 event.cpu_locks -- common/autotest_common.sh@1681 -- # lcov --version 00:18:30.542 20:16:25 event.cpu_locks -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:18:30.542 20:16:25 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:30.542 20:16:25 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:30.543 20:16:25 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:30.543 20:16:25 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:18:30.543 20:16:25 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:18:30.543 20:16:25 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:18:30.543 20:16:25 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:18:30.543 20:16:25 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:18:30.543 20:16:25 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:18:30.543 20:16:25 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:18:30.543 20:16:25 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:30.543 20:16:25 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:18:30.543 20:16:25 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:18:30.543 20:16:25 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:30.543 20:16:25 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:30.543 20:16:25 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:18:30.543 20:16:25 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:18:30.543 20:16:25 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:30.543 20:16:25 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:18:30.543 20:16:25 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:18:30.543 20:16:25 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:18:30.543 20:16:25 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:18:30.543 20:16:25 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:30.543 20:16:25 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:18:30.543 20:16:25 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:18:30.543 20:16:25 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:30.543 20:16:25 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:30.543 20:16:25 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:18:30.543 20:16:25 event.cpu_locks -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:30.543 20:16:25 event.cpu_locks -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:18:30.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:30.543 --rc genhtml_branch_coverage=1 00:18:30.543 --rc genhtml_function_coverage=1 00:18:30.543 --rc genhtml_legend=1 00:18:30.543 --rc geninfo_all_blocks=1 00:18:30.543 --rc geninfo_unexecuted_blocks=1 00:18:30.543 00:18:30.543 ' 00:18:30.543 20:16:25 event.cpu_locks -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:18:30.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:30.543 --rc genhtml_branch_coverage=1 00:18:30.543 --rc genhtml_function_coverage=1 
00:18:30.543 --rc genhtml_legend=1 00:18:30.543 --rc geninfo_all_blocks=1 00:18:30.543 --rc geninfo_unexecuted_blocks=1 00:18:30.543 00:18:30.543 ' 00:18:30.543 20:16:25 event.cpu_locks -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:18:30.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:30.543 --rc genhtml_branch_coverage=1 00:18:30.543 --rc genhtml_function_coverage=1 00:18:30.543 --rc genhtml_legend=1 00:18:30.543 --rc geninfo_all_blocks=1 00:18:30.543 --rc geninfo_unexecuted_blocks=1 00:18:30.543 00:18:30.543 ' 00:18:30.543 20:16:25 event.cpu_locks -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:18:30.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:30.543 --rc genhtml_branch_coverage=1 00:18:30.543 --rc genhtml_function_coverage=1 00:18:30.543 --rc genhtml_legend=1 00:18:30.543 --rc geninfo_all_blocks=1 00:18:30.543 --rc geninfo_unexecuted_blocks=1 00:18:30.543 00:18:30.543 ' 00:18:30.543 20:16:25 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:18:30.543 20:16:25 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:18:30.543 20:16:25 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:18:30.543 20:16:25 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:18:30.543 20:16:25 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:18:30.543 20:16:25 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:30.543 20:16:25 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:18:30.543 ************************************ 00:18:30.543 START TEST default_locks 00:18:30.543 ************************************ 00:18:30.543 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:30.543 20:16:25 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:18:30.543 20:16:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=59266 00:18:30.543 20:16:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 59266 00:18:30.543 20:16:25 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 59266 ']' 00:18:30.543 20:16:25 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:30.543 20:16:25 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:30.543 20:16:25 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:30.543 20:16:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:18:30.543 20:16:25 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:30.543 20:16:25 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:18:30.801 [2024-10-01 20:16:25.763430] Starting SPDK v25.01-pre git sha1 0c2005fb5 / DPDK 24.03.0 initialization... 
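
The scripts/common.sh walk a few lines above is the suite's pure-bash version comparison: `lt 1.15 2` splits both version strings on `.`, `-`, and `:` and compares field by field, padding missing fields with zeros. Condensed to its core logic (the full helper also validates that each field is decimal):

    # Condensed cmp_versions from scripts/common.sh, as traced above.
    cmp_versions() {
        local op=$2 ver1 ver2 v d1 d2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$3"
        local len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))

        for (( v = 0; v < len; v++ )); do
            d1=${ver1[v]:-0} d2=${ver2[v]:-0}     # "2" compares like "2.0"
            (( d1 > d2 )) && { [[ $op == '>' ]]; return; }
            (( d1 < d2 )) && { [[ $op == '<' ]]; return; }
        done
        [[ $op == '>=' || $op == '<=' ]]          # all fields equal: only >=/<= hold
    }
    lt() { cmp_versions "$1" '<' "$2"; }

    lt 1.15 2 && echo "lcov 1.15 predates 2.x"
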
00:18:30.801 [2024-10-01 20:16:25.763542] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59266 ] 00:18:30.801 [2024-10-01 20:16:25.910656] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:31.058 [2024-10-01 20:16:26.092441] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:18:31.624 20:16:26 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:31.624 20:16:26 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:18:31.624 20:16:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 59266 00:18:31.624 20:16:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:18:31.624 20:16:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 59266 00:18:31.882 20:16:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 59266 00:18:31.882 20:16:27 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 59266 ']' 00:18:31.882 20:16:27 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 59266 00:18:31.882 20:16:27 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:18:31.882 20:16:27 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:31.882 20:16:27 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59266 00:18:32.140 killing process with pid 59266 00:18:32.140 20:16:27 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:32.140 20:16:27 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:32.140 20:16:27 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59266' 00:18:32.140 20:16:27 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 59266 00:18:32.140 20:16:27 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 59266 00:18:34.033 20:16:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 59266 00:18:34.033 20:16:28 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:18:34.033 20:16:28 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 59266 00:18:34.033 20:16:28 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:18:34.033 20:16:28 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:34.033 20:16:28 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:18:34.033 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
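
locks_exist, traced above for pid 59266, is the core assertion of this whole test file: lslocks lists the file locks a pid holds, and a target started without --disable-cpumask-locks should hold one lock file per claimed core. A sketch of the check, with the lock-file naming taken from the spdk_cpu_lock_000 pattern that appears later in this log:

    # Check that a running SPDK target holds its per-core lock file(s),
    # mirroring cpu_locks.sh's locks_exist. Usage: locks_exist <pid>
    locks_exist() {
        local pid=$1
        # Each claimed core corresponds to a locked file named
        # /var/tmp/spdk_cpu_lock_<core>; lslocks reports it per pid.
        lslocks -p "$pid" | grep -q spdk_cpu_lock
    }
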
00:18:34.033 ERROR: process (pid: 59266) is no longer running 00:18:34.033 20:16:28 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:34.033 20:16:28 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 59266 00:18:34.033 20:16:28 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 59266 ']' 00:18:34.033 20:16:28 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:34.033 20:16:28 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:34.033 20:16:28 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:34.033 20:16:28 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:34.033 20:16:28 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:18:34.033 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (59266) - No such process 00:18:34.033 20:16:28 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:34.033 20:16:28 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:18:34.033 20:16:28 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:18:34.033 20:16:28 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:34.033 20:16:28 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:34.033 20:16:28 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:34.033 20:16:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:18:34.033 20:16:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:18:34.033 20:16:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:18:34.033 20:16:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:18:34.033 00:18:34.033 real 0m3.073s 00:18:34.033 user 0m3.027s 00:18:34.033 sys 0m0.581s 00:18:34.033 20:16:28 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:34.033 20:16:28 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:18:34.033 ************************************ 00:18:34.033 END TEST default_locks 00:18:34.033 ************************************ 00:18:34.033 20:16:28 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:18:34.033 20:16:28 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:18:34.034 20:16:28 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:34.034 20:16:28 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:18:34.034 ************************************ 00:18:34.034 START TEST default_locks_via_rpc 00:18:34.034 ************************************ 00:18:34.034 20:16:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:18:34.034 20:16:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=59330 00:18:34.034 20:16:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 59330 00:18:34.034 20:16:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 59330 ']' 
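
The NOT wrapper that just ran (around waitforlisten for the already-killed pid 59266) turns an expected failure into a pass while still treating signal deaths, exit status above 128, as real errors. Its inversion logic, condensed (the full helper also whitelists specific signals):

    # Condensed NOT() from autotest_common.sh: succeed only when the
    # wrapped command fails cleanly (nonzero status, not signal-killed).
    NOT() {
        local es=0
        "$@" || es=$?
        (( es > 128 )) && return "$es"   # died from a signal: real failure
        (( !es == 0 ))                   # invert: clean failure becomes success
    }

    NOT false && echo "false failed, as required"
    NOT true  || echo "true succeeded, so NOT reports failure"
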
00:18:34.034 20:16:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:34.034 20:16:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:34.034 20:16:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:18:34.034 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:34.034 20:16:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:34.034 20:16:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:34.034 20:16:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:34.034 [2024-10-01 20:16:28.872251] Starting SPDK v25.01-pre git sha1 0c2005fb5 / DPDK 24.03.0 initialization... 00:18:34.034 [2024-10-01 20:16:28.872512] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59330 ] 00:18:34.034 [2024-10-01 20:16:29.023066] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:34.034 [2024-10-01 20:16:29.235878] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:18:34.966 20:16:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:34.966 20:16:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:18:34.966 20:16:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:18:34.966 20:16:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.966 20:16:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:34.966 20:16:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.966 20:16:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:18:34.966 20:16:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:18:34.966 20:16:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:18:34.966 20:16:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:18:34.966 20:16:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:18:34.966 20:16:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.966 20:16:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:34.966 20:16:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.966 20:16:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 59330 00:18:34.966 20:16:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 59330 00:18:34.966 20:16:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:18:35.531 20:16:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 59330 00:18:35.531 20:16:30 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 59330 ']' 00:18:35.531 20:16:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 59330 00:18:35.531 20:16:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:18:35.531 20:16:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:35.531 20:16:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59330 00:18:35.531 killing process with pid 59330 00:18:35.531 20:16:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:35.531 20:16:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:35.531 20:16:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59330' 00:18:35.531 20:16:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 59330 00:18:35.531 20:16:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 59330 00:18:37.428 00:18:37.428 real 0m3.725s 00:18:37.428 user 0m3.701s 00:18:37.428 sys 0m0.598s 00:18:37.428 20:16:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:37.428 ************************************ 00:18:37.428 END TEST default_locks_via_rpc 00:18:37.428 ************************************ 00:18:37.428 20:16:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:37.428 20:16:32 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:18:37.428 20:16:32 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:18:37.428 20:16:32 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:37.428 20:16:32 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:18:37.428 ************************************ 00:18:37.428 START TEST non_locking_app_on_locked_coremask 00:18:37.428 ************************************ 00:18:37.428 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:37.428 20:16:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:18:37.428 20:16:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=59399 00:18:37.428 20:16:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 59399 /var/tmp/spdk.sock 00:18:37.428 20:16:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 59399 ']' 00:18:37.428 20:16:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:18:37.428 20:16:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:37.428 20:16:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:37.428 20:16:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
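
default_locks_via_rpc, which wraps up above, exercises the same locks at runtime rather than at process start: framework_disable_cpumask_locks releases the per-core lock files on a live target and framework_enable_cpumask_locks re-acquires them, with lslocks confirming each state. A sketch of that toggle, assuming the default RPC socket and the paths from this run:

    #!/usr/bin/env bash
    # Toggle CPU core locks on a live SPDK target over RPC, as the
    # default_locks_via_rpc test does. Usage: ./toggle_locks.sh <pid>
    set -euo pipefail
    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    pid=$1

    "$rpc_py" framework_disable_cpumask_locks      # drop lock files, app keeps running
    if lslocks -p "$pid" | grep -q spdk_cpu_lock; then
        echo "locks still held after disable" >&2
        exit 1
    fi

    "$rpc_py" framework_enable_cpumask_locks       # take them again, no restart needed
    lslocks -p "$pid" | grep -q spdk_cpu_lock
    echo "cpumask locks re-acquired by pid $pid"
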
00:18:37.428 20:16:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:37.428 20:16:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:18:37.686 [2024-10-01 20:16:32.647471] Starting SPDK v25.01-pre git sha1 0c2005fb5 / DPDK 24.03.0 initialization... 00:18:37.686 [2024-10-01 20:16:32.647601] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59399 ] 00:18:37.686 [2024-10-01 20:16:32.797527] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:37.943 [2024-10-01 20:16:33.011912] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:18:38.875 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:18:38.875 20:16:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:38.875 20:16:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:18:38.875 20:16:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=59420 00:18:38.875 20:16:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 59420 /var/tmp/spdk2.sock 00:18:38.875 20:16:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 59420 ']' 00:18:38.875 20:16:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:18:38.875 20:16:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:38.875 20:16:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:18:38.875 20:16:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:38.875 20:16:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:18:38.875 20:16:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:18:38.875 [2024-10-01 20:16:33.876139] Starting SPDK v25.01-pre git sha1 0c2005fb5 / DPDK 24.03.0 initialization... 00:18:38.875 [2024-10-01 20:16:33.876474] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59420 ] 00:18:38.875 [2024-10-01 20:16:34.030897] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:18:38.875 [2024-10-01 20:16:34.030955] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:39.441 [2024-10-01 20:16:34.441484] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:18:41.339 20:16:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:41.339 20:16:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:18:41.339 20:16:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 59399 00:18:41.339 20:16:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:18:41.339 20:16:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59399 00:18:42.271 20:16:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 59399 00:18:42.271 20:16:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 59399 ']' 00:18:42.271 20:16:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 59399 00:18:42.271 20:16:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:18:42.271 20:16:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:42.271 20:16:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59399 00:18:42.271 killing process with pid 59399 00:18:42.271 20:16:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:42.271 20:16:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:42.271 20:16:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59399' 00:18:42.272 20:16:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 59399 00:18:42.272 20:16:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 59399 00:18:45.550 20:16:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 59420 00:18:45.550 20:16:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 59420 ']' 00:18:45.550 20:16:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 59420 00:18:45.550 20:16:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:18:45.550 20:16:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:45.550 20:16:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59420 00:18:45.550 20:16:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:45.550 20:16:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:45.550 20:16:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59420' 00:18:45.550 killing process with pid 59420 00:18:45.550 20:16:40 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 59420 00:18:45.550 20:16:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 59420 00:18:47.475 ************************************ 00:18:47.475 END TEST non_locking_app_on_locked_coremask 00:18:47.475 ************************************ 00:18:47.475 00:18:47.475 real 0m9.715s 00:18:47.475 user 0m10.021s 00:18:47.475 sys 0m1.218s 00:18:47.475 20:16:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:47.476 20:16:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:18:47.476 20:16:42 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:18:47.476 20:16:42 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:18:47.476 20:16:42 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:47.476 20:16:42 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:18:47.476 ************************************ 00:18:47.476 START TEST locking_app_on_unlocked_coremask 00:18:47.476 ************************************ 00:18:47.476 20:16:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:18:47.476 20:16:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=59546 00:18:47.476 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:47.476 20:16:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 59546 /var/tmp/spdk.sock 00:18:47.476 20:16:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 59546 ']' 00:18:47.476 20:16:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:47.476 20:16:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:47.476 20:16:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:47.476 20:16:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:47.476 20:16:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:18:47.476 20:16:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:18:47.476 [2024-10-01 20:16:42.390366] Starting SPDK v25.01-pre git sha1 0c2005fb5 / DPDK 24.03.0 initialization... 00:18:47.476 [2024-10-01 20:16:42.390490] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59546 ] 00:18:47.476 [2024-10-01 20:16:42.540032] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
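
The non_locking_app_on_locked_coremask run summarized above shows the supported way to share a core: the second target opts out with --disable-cpumask-locks and gets its own RPC socket, which is how pids 59399 and 59420 coexisted on core 0. A launch sketch with the binary path from this run; the suite's waitforlisten polling is reduced here to a sleep:

    #!/usr/bin/env bash
    # Two SPDK targets on the same core mask; the second skips the
    # per-core lock files so its startup does not abort.
    set -euo pipefail
    spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

    "$spdk_tgt" -m 0x1 &                           # claims /var/tmp/spdk_cpu_lock_000
    pid1=$!
    "$spdk_tgt" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
    pid2=$!                                        # same core, no lock, own socket

    sleep 3                                        # crude stand-in for waitforlisten
    kill "$pid1" "$pid2"
    wait || true                                   # reap both; SIGTERM exits expected
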
00:18:47.476 [2024-10-01 20:16:42.540074] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:47.734 [2024-10-01 20:16:42.700103] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:18:48.300 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:18:48.300 20:16:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:48.300 20:16:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:18:48.300 20:16:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=59562 00:18:48.300 20:16:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 59562 /var/tmp/spdk2.sock 00:18:48.300 20:16:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 59562 ']' 00:18:48.300 20:16:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:18:48.300 20:16:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:48.300 20:16:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:18:48.300 20:16:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:48.300 20:16:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:18:48.300 20:16:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:18:48.300 [2024-10-01 20:16:43.421520] Starting SPDK v25.01-pre git sha1 0c2005fb5 / DPDK 24.03.0 initialization... 
00:18:48.300 [2024-10-01 20:16:43.421643] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59562 ] 00:18:48.558 [2024-10-01 20:16:43.569296] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:48.816 [2024-10-01 20:16:43.934306] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:18:50.189 20:16:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:50.189 20:16:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:18:50.189 20:16:45 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 59562 00:18:50.189 20:16:45 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59562 00:18:50.189 20:16:45 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:18:50.757 20:16:45 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 59546 00:18:50.757 20:16:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 59546 ']' 00:18:50.757 20:16:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 59546 00:18:50.757 20:16:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:18:50.757 20:16:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:50.757 20:16:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59546 00:18:50.757 20:16:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:50.757 20:16:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:50.757 killing process with pid 59546 00:18:50.757 20:16:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59546' 00:18:50.757 20:16:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 59546 00:18:50.757 20:16:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 59546 00:18:54.948 20:16:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 59562 00:18:54.948 20:16:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 59562 ']' 00:18:54.948 20:16:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 59562 00:18:54.948 20:16:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:18:54.948 20:16:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:54.948 20:16:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59562 00:18:54.948 20:16:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:54.948 killing process with pid 59562 00:18:54.948 20:16:49 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:54.948 20:16:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59562' 00:18:54.948 20:16:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 59562 00:18:54.948 20:16:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 59562 00:18:55.881 00:18:55.881 real 0m8.651s 00:18:55.881 user 0m8.783s 00:18:55.881 sys 0m1.167s 00:18:55.881 20:16:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:55.882 20:16:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:18:55.882 ************************************ 00:18:55.882 END TEST locking_app_on_unlocked_coremask 00:18:55.882 ************************************ 00:18:55.882 20:16:51 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:18:55.882 20:16:51 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:18:55.882 20:16:51 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:55.882 20:16:51 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:18:55.882 ************************************ 00:18:55.882 START TEST locking_app_on_locked_coremask 00:18:55.882 ************************************ 00:18:55.882 20:16:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:18:55.882 20:16:51 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=59681 00:18:55.882 20:16:51 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 59681 /var/tmp/spdk.sock 00:18:55.882 20:16:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 59681 ']' 00:18:55.882 20:16:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:55.882 20:16:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:55.882 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:55.882 20:16:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:55.882 20:16:51 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:18:55.882 20:16:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:55.882 20:16:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:18:55.882 [2024-10-01 20:16:51.082230] Starting SPDK v25.01-pre git sha1 0c2005fb5 / DPDK 24.03.0 initialization... 
00:18:55.882 [2024-10-01 20:16:51.082400] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59681 ] 00:18:56.140 [2024-10-01 20:16:51.232168] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:56.398 [2024-10-01 20:16:51.415085] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:18:56.962 20:16:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:56.962 20:16:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:18:56.962 20:16:52 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=59697 00:18:56.962 20:16:52 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 59697 /var/tmp/spdk2.sock 00:18:56.962 20:16:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:18:56.962 20:16:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 59697 /var/tmp/spdk2.sock 00:18:56.962 20:16:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:18:56.962 20:16:52 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:18:56.962 20:16:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:56.962 20:16:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:18:56.962 20:16:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:56.962 20:16:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 59697 /var/tmp/spdk2.sock 00:18:56.962 20:16:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 59697 ']' 00:18:56.962 20:16:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:18:56.962 20:16:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:56.962 20:16:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:18:56.962 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:18:56.962 20:16:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:56.962 20:16:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:18:56.962 [2024-10-01 20:16:52.142039] Starting SPDK v25.01-pre git sha1 0c2005fb5 / DPDK 24.03.0 initialization... 
00:18:56.962 [2024-10-01 20:16:52.142159] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59697 ] 00:18:57.219 [2024-10-01 20:16:52.291131] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 59681 has claimed it. 00:18:57.219 [2024-10-01 20:16:52.291186] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:18:57.784 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (59697) - No such process 00:18:57.784 ERROR: process (pid: 59697) is no longer running 00:18:57.784 20:16:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:57.784 20:16:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:18:57.784 20:16:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:18:57.784 20:16:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:57.784 20:16:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:57.784 20:16:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:57.784 20:16:52 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 59681 00:18:57.785 20:16:52 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59681 00:18:57.785 20:16:52 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:18:58.043 20:16:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 59681 00:18:58.043 20:16:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 59681 ']' 00:18:58.043 20:16:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 59681 00:18:58.043 20:16:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:18:58.043 20:16:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:58.043 20:16:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59681 00:18:58.043 20:16:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:58.043 20:16:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:58.043 20:16:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59681' 00:18:58.043 killing process with pid 59681 00:18:58.043 20:16:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 59681 00:18:58.043 20:16:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 59681 00:18:59.940 00:18:59.940 real 0m3.844s 00:18:59.940 user 0m3.978s 00:18:59.940 sys 0m0.727s 00:18:59.940 20:16:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:59.940 20:16:54 event.cpu_locks.locking_app_on_locked_coremask 
-- common/autotest_common.sh@10 -- # set +x 00:18:59.940 ************************************ 00:18:59.940 END TEST locking_app_on_locked_coremask 00:18:59.940 ************************************ 00:18:59.940 20:16:54 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:18:59.940 20:16:54 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:18:59.940 20:16:54 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:59.940 20:16:54 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:18:59.940 ************************************ 00:18:59.940 START TEST locking_overlapped_coremask 00:18:59.940 ************************************ 00:18:59.940 20:16:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:18:59.940 20:16:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=59761 00:18:59.940 20:16:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 59761 /var/tmp/spdk.sock 00:18:59.940 20:16:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 59761 ']' 00:18:59.940 20:16:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:59.940 20:16:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:59.940 20:16:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:59.940 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:59.940 20:16:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:59.940 20:16:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:18:59.940 20:16:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:18:59.940 [2024-10-01 20:16:54.954664] Starting SPDK v25.01-pre git sha1 0c2005fb5 / DPDK 24.03.0 initialization... 
00:18:59.940 [2024-10-01 20:16:54.954770] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59761 ]
00:18:59.940 [2024-10-01 20:16:55.098685] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3
00:19:00.197 [2024-10-01 20:16:55.312746] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1
00:19:00.197 [2024-10-01 20:16:55.312819] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2
00:19:00.197 [2024-10-01 20:16:55.313031] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:19:01.129 20:16:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 ))
20:16:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0
20:16:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=59779
20:16:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 59779 /var/tmp/spdk2.sock
20:16:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0
20:16:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 59779 /var/tmp/spdk2.sock
20:16:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten
20:16:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock
20:16:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
20:16:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten
20:16:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
20:16:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 59779 /var/tmp/spdk2.sock
20:16:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 59779 ']'
20:16:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
20:16:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100
20:16:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
20:16:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable
20:16:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:19:01.129 [2024-10-01 20:16:56.185183] Starting SPDK v25.01-pre git sha1 0c2005fb5 / DPDK 24.03.0 initialization...
00:19:01.129 [2024-10-01 20:16:56.185303] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59779 ]
00:19:01.387 [2024-10-01 20:16:56.340009] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59761 has claimed it.
00:19:01.387 [2024-10-01 20:16:56.340070] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:19:01.644 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (59779) - No such process
00:19:01.644 ERROR: process (pid: 59779) is no longer running
00:19:01.644 20:16:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 ))
20:16:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1
20:16:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1
20:16:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 ))
20:16:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]]
20:16:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 ))
20:16:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks
20:16:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*)
20:16:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
20:16:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]]
20:16:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 59761
20:16:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 59761 ']'
20:16:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 59761
20:16:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname
20:16:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
20:16:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59761
20:16:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0
20:16:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
killing process with pid 59761
20:16:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59761'
20:16:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 59761
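check_remaining_locks, which just ran above, asserts that the lock files on disk are exactly what the first target's -m 0x7 mask (cores 0 through 2) should have produced. Stripped of xtrace noise, the comparison is a glob matched against a brace expansion:

    # Glob what is actually on disk, expand what the mask predicts,
    # and require an exact match.
    locks=(/var/tmp/spdk_cpu_lock_*)
    locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
    [[ ${locks[*]} == "${locks_expected[*]}" ]]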
20:16:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 59761
00:19:04.172
00:19:04.172 real 0m4.087s
00:19:04.172 user 0m10.839s
00:19:04.172 sys 0m0.515s
00:19:04.172 20:16:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable
00:19:04.172 20:16:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:19:04.172 ************************************
00:19:04.172 END TEST locking_overlapped_coremask
00:19:04.172 ************************************
00:19:04.172 20:16:59 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc
00:19:04.172 20:16:59 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:19:04.172 20:16:59 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable
00:19:04.172 20:16:59 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:19:04.172 ************************************
00:19:04.172 START TEST locking_overlapped_coremask_via_rpc
00:19:04.172 ************************************
00:19:04.172 20:16:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc
00:19:04.172 20:16:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=59843
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:19:04.172 20:16:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 59843 /var/tmp/spdk.sock
00:19:04.172 20:16:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 59843 ']'
00:19:04.172 20:16:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:19:04.172 20:16:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks
00:19:04.172 20:16:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100
00:19:04.172 20:16:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:19:04.172 20:16:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable
00:19:04.172 20:16:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:19:04.172 [2024-10-01 20:16:59.090635] Starting SPDK v25.01-pre git sha1 0c2005fb5 / DPDK 24.03.0 initialization...
00:19:04.172 [2024-10-01 20:16:59.090771] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59843 ]
00:19:04.172 [2024-10-01 20:16:59.242521] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:19:04.172 [2024-10-01 20:16:59.242578] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3
00:19:04.429 [2024-10-01 20:16:59.461364] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1
00:19:04.429 [2024-10-01 20:16:59.461702] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:19:04.429 [2024-10-01 20:16:59.461729] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2
00:19:05.359 20:17:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 ))
20:17:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0
20:17:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=59861
20:17:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 59861 /var/tmp/spdk2.sock
20:17:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks
20:17:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 59861 ']'
20:17:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock
20:17:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100
20:17:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
20:17:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable
20:17:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:19:05.359 [2024-10-01 20:17:00.396026] Starting SPDK v25.01-pre git sha1 0c2005fb5 / DPDK 24.03.0 initialization...
00:19:05.359 [2024-10-01 20:17:00.396480] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59861 ]
00:19:05.359 [2024-10-01 20:17:00.544063] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:19:05.359 [2024-10-01 20:17:00.544110] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3
00:19:05.924 [2024-10-01 20:17:00.872521] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3
00:19:05.924 [2024-10-01 20:17:00.875747] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2
00:19:05.924 [2024-10-01 20:17:00.875763] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4
00:19:07.295 20:17:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 ))
20:17:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0
20:17:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks
20:17:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
20:17:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
20:17:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
20:17:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
20:17:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0
20:17:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
20:17:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd
20:17:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
20:17:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd
20:17:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
20:17:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
20:17:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
20:17:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:19:07.296 [2024-10-01 20:17:02.200821] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59843 has claimed it.
00:19:07.296 request:
00:19:07.296 {
00:19:07.296 "method": "framework_enable_cpumask_locks",
00:19:07.296 "req_id": 1
00:19:07.296 }
00:19:07.296 Got JSON-RPC error response
00:19:07.296 response:
00:19:07.296 {
00:19:07.296 "code": -32603,
00:19:07.296 "message": "Failed to claim CPU core: 2"
00:19:07.296 }
00:19:07.296 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
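Both targets in this test were started with --disable-cpumask-locks, so no core locks exist at boot; the first target then claims cores 0-2 through the framework_enable_cpumask_locks RPC, and the same RPC against the second target (mask 0x1c, which overlaps on core 2) returns the -32603 error shown above. The rpc_cmd wrapper is doing the equivalent of:

    # First target, default socket: claims its cores and succeeds.
    scripts/rpc.py framework_enable_cpumask_locks
    # Second target: core 2 is already locked by pid 59843, so this fails.
    scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks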
00:19:07.296 20:17:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:19:07.296 20:17:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1
00:19:07.296 20:17:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:19:07.296 20:17:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:19:07.296 20:17:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:19:07.296 20:17:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 59843 /var/tmp/spdk.sock
00:19:07.296 20:17:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 59843 ']'
00:19:07.296 20:17:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:19:07.296 20:17:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100
00:19:07.296 20:17:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:19:07.296 20:17:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable
00:19:07.296 20:17:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:19:07.296 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:19:07.296 20:17:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:19:07.296 20:17:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0
00:19:07.296 20:17:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 59861 /var/tmp/spdk2.sock
00:19:07.296 20:17:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 59861 ']'
00:19:07.296 20:17:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock
00:19:07.296 20:17:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100
00:19:07.296 20:17:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:19:07.296 20:17:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable
00:19:07.296 20:17:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:19:07.553 ************************************
00:19:07.553 END TEST locking_overlapped_coremask_via_rpc
00:19:07.553 ************************************
00:19:07.553 20:17:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 ))
20:17:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0
20:17:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks
20:17:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*)
20:17:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
20:17:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]]
00:19:07.554
00:19:07.554 real 0m3.613s
00:19:07.554 user 0m1.100s
00:19:07.554 sys 0m0.126s
00:19:07.554 20:17:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:19:07.554 20:17:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:19:07.554 20:17:02 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup
00:19:07.554 20:17:02 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59843 ]]
00:19:07.554 20:17:02 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59843
00:19:07.554 20:17:02 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 59843 ']'
00:19:07.554 20:17:02 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 59843
00:19:07.554 20:17:02 event.cpu_locks -- common/autotest_common.sh@955 -- # uname
00:19:07.554 20:17:02 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:19:07.554 20:17:02 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59843
00:19:07.554 20:17:02 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:19:07.554 20:17:02 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:19:07.554 20:17:02 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59843'
killing process with pid 59843
00:19:07.554 20:17:02 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 59843
00:19:07.554 20:17:02 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 59843
00:19:09.449 20:17:04 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59861 ]]
20:17:04 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59861
20:17:04 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 59861 ']'
20:17:04 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 59861
20:17:04 event.cpu_locks -- common/autotest_common.sh@955 -- # uname
20:17:04 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
20:17:04 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59861
killing process with pid 59861
20:17:04 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2
20:17:04 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']'
20:17:04 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59861'
20:17:04 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 59861
20:17:04 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 59861
00:19:11.403 20:17:06 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f
20:17:06 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup
20:17:06 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59843 ]]
20:17:06 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59843
20:17:06 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 59843 ']'
20:17:06 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 59843
00:19:11.403 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (59843) - No such process
Process with pid 59843 is not found
20:17:06 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 59843 is not found'
20:17:06 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59861 ]]
Process with pid 59861 is not found
20:17:06 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59861
20:17:06 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 59861 ']'
20:17:06 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 59861
00:19:11.403 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (59861) - No such process
20:17:06 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 59861 is not found'
20:17:06 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f
00:19:11.403 ************************************
00:19:11.403 END TEST cpu_locks
00:19:11.403 ************************************
00:19:11.404
00:19:11.404 real 0m40.591s
00:19:11.404 user 1m7.575s
00:19:11.404 sys 0m5.876s
00:19:11.404 20:17:06 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable
00:19:11.404 20:17:06 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:19:11.404 ************************************
00:19:11.404 END TEST event
00:19:11.404 ************************************
00:19:11.404
00:19:11.404 real 1m7.496s
00:19:11.404 user 2m0.394s
00:19:11.404 sys 0m9.082s
00:19:11.404 20:17:06 event -- common/autotest_common.sh@1126 -- # xtrace_disable
00:19:11.404 20:17:06 event -- common/autotest_common.sh@10 -- # set +x
00:19:11.404 20:17:06 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh
00:19:11.404 20:17:06 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:19:11.404 20:17:06 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:19:11.404 20:17:06 -- common/autotest_common.sh@10 -- # set +x
00:19:11.404 ************************************
00:19:11.404 START TEST thread
00:19:11.404 ************************************
00:19:11.404 20:17:06 thread -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh
00:19:11.404 * Looking for test storage...
00:19:11.404 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread
00:19:11.404 20:17:06 thread -- common/autotest_common.sh@1680 -- # [[ y == y ]]
00:19:11.404 20:17:06 thread -- common/autotest_common.sh@1681 -- # lcov --version
00:19:11.404 20:17:06 thread -- common/autotest_common.sh@1681 -- # awk '{print $NF}'
00:19:11.404 20:17:06 thread -- common/autotest_common.sh@1681 -- # lt 1.15 2
00:19:11.404 20:17:06 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:19:11.404 20:17:06 thread -- scripts/common.sh@333 -- # local ver1 ver1_l
00:19:11.404 20:17:06 thread -- scripts/common.sh@334 -- # local ver2 ver2_l
00:19:11.404 20:17:06 thread -- scripts/common.sh@336 -- # IFS=.-:
00:19:11.404 20:17:06 thread -- scripts/common.sh@336 -- # read -ra ver1
00:19:11.404 20:17:06 thread -- scripts/common.sh@337 -- # IFS=.-:
00:19:11.404 20:17:06 thread -- scripts/common.sh@337 -- # read -ra ver2
00:19:11.404 20:17:06 thread -- scripts/common.sh@338 -- # local 'op=<'
00:19:11.404 20:17:06 thread -- scripts/common.sh@340 -- # ver1_l=2
00:19:11.404 20:17:06 thread -- scripts/common.sh@341 -- # ver2_l=1
00:19:11.404 20:17:06 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:19:11.404 20:17:06 thread -- scripts/common.sh@344 -- # case "$op" in
00:19:11.404 20:17:06 thread -- scripts/common.sh@345 -- # : 1
00:19:11.404 20:17:06 thread -- scripts/common.sh@364 -- # (( v = 0 ))
00:19:11.404 20:17:06 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:19:11.404 20:17:06 thread -- scripts/common.sh@365 -- # decimal 1
00:19:11.404 20:17:06 thread -- scripts/common.sh@353 -- # local d=1
00:19:11.404 20:17:06 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:19:11.404 20:17:06 thread -- scripts/common.sh@355 -- # echo 1
00:19:11.404 20:17:06 thread -- scripts/common.sh@365 -- # ver1[v]=1
00:19:11.404 20:17:06 thread -- scripts/common.sh@366 -- # decimal 2
00:19:11.404 20:17:06 thread -- scripts/common.sh@353 -- # local d=2
00:19:11.404 20:17:06 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:19:11.404 20:17:06 thread -- scripts/common.sh@355 -- # echo 2
00:19:11.404 20:17:06 thread -- scripts/common.sh@366 -- # ver2[v]=2
00:19:11.404 20:17:06 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:19:11.404 20:17:06 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:19:11.404 20:17:06 thread -- scripts/common.sh@368 -- # return 0
00:19:11.404 20:17:06 thread -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:19:11.404 20:17:06 thread -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS=
00:19:11.404 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:19:11.404 --rc genhtml_branch_coverage=1
00:19:11.404 --rc genhtml_function_coverage=1
00:19:11.404 --rc genhtml_legend=1
00:19:11.404 --rc geninfo_all_blocks=1
00:19:11.404 --rc geninfo_unexecuted_blocks=1
00:19:11.404
00:19:11.404 '
00:19:11.404 20:17:06 thread -- common/autotest_common.sh@1694 -- # LCOV_OPTS='
00:19:11.404 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:19:11.404 --rc genhtml_branch_coverage=1
00:19:11.404 --rc genhtml_function_coverage=1
00:19:11.404 --rc genhtml_legend=1
00:19:11.404 --rc geninfo_all_blocks=1
00:19:11.404 --rc geninfo_unexecuted_blocks=1
00:19:11.404
00:19:11.404 '
00:19:11.404 20:17:06 thread -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov
00:19:11.404 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:19:11.404 --rc genhtml_branch_coverage=1
00:19:11.404 --rc genhtml_function_coverage=1
00:19:11.404 --rc genhtml_legend=1
00:19:11.404 --rc geninfo_all_blocks=1
00:19:11.404 --rc geninfo_unexecuted_blocks=1
00:19:11.404
00:19:11.404 '
00:19:11.404 20:17:06 thread -- common/autotest_common.sh@1695 -- # LCOV='lcov
00:19:11.404 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:19:11.404 --rc genhtml_branch_coverage=1
00:19:11.404 --rc genhtml_function_coverage=1
00:19:11.404 --rc genhtml_legend=1
00:19:11.404 --rc geninfo_all_blocks=1
00:19:11.404 --rc geninfo_unexecuted_blocks=1
00:19:11.404
00:19:11.404 '
00:19:11.404 20:17:06 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1
00:19:11.404 20:17:06 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']'
00:19:11.404 20:17:06 thread -- common/autotest_common.sh@1107 -- # xtrace_disable
00:19:11.404 20:17:06 thread -- common/autotest_common.sh@10 -- # set +x
00:19:11.404 ************************************
00:19:11.404 START TEST thread_poller_perf
00:19:11.404 ************************************
00:19:11.404 20:17:06 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1
00:19:11.404 [2024-10-01 20:17:06.347988] Starting SPDK v25.01-pre git sha1 0c2005fb5 / DPDK 24.03.0 initialization...
00:19:11.404 [2024-10-01 20:17:06.348211] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60032 ]
00:19:11.404 [2024-10-01 20:17:06.497232] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:11.662 [2024-10-01 20:17:06.686161] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:19:11.662 Running 1000 pollers for 1 seconds with 1 microseconds period.
00:19:13.040 ======================================
00:19:13.040 busy:2616594416 (cyc)
00:19:13.040 total_run_count: 301000
00:19:13.040 tsc_hz: 2600000000 (cyc)
00:19:13.040 ======================================
00:19:13.040 poller_cost: 8693 (cyc), 3343 (nsec)
00:19:13.040
00:19:13.040 real 0m1.648s
00:19:13.040 user 0m1.467s
00:19:13.040 sys 0m0.072s
00:19:13.040 20:17:07 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable
00:19:13.040 20:17:07 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x
00:19:13.040 ************************************
00:19:13.040 END TEST thread_poller_perf
00:19:13.040 ************************************
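poller_cost in the table above is plain division: busy cycles over iterations, then cycles to nanoseconds via the TSC rate. For this run, 2616594416 cyc / 301000 runs = 8693 cyc per poll, and 8693 cyc / 2.6 GHz ≈ 3343 ns; the zero-period run that follows is computed the same way. The same arithmetic as shell:

    # Reproduce the reported poller_cost from the raw counters above.
    busy=2616594416 runs=301000 tsc_hz=2600000000
    echo "$((busy / runs)) cyc"                          # 8693
    echo "$((busy * 1000000000 / tsc_hz / runs)) nsec"   # 3343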
00:19:13.040 20:17:07 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1
00:19:13.040 20:17:07 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']'
00:19:13.040 20:17:07 thread -- common/autotest_common.sh@1107 -- # xtrace_disable
00:19:13.040 20:17:07 thread -- common/autotest_common.sh@10 -- # set +x
00:19:13.040 ************************************
00:19:13.040 START TEST thread_poller_perf
00:19:13.040 ************************************
00:19:13.040 20:17:08 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1
00:19:13.040 [2024-10-01 20:17:08.034872] Starting SPDK v25.01-pre git sha1 0c2005fb5 / DPDK 24.03.0 initialization...
00:19:13.040 [2024-10-01 20:17:08.035106] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60068 ]
00:19:13.040 [2024-10-01 20:17:08.185390] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:13.299 [2024-10-01 20:17:08.375492] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:19:13.299 Running 1000 pollers for 1 seconds with 0 microseconds period.
00:19:14.674 ======================================
00:19:14.674 busy:2603191086 (cyc)
00:19:14.674 total_run_count: 3953000
00:19:14.674 tsc_hz: 2600000000 (cyc)
00:19:14.674 ======================================
00:19:14.674 poller_cost: 658 (cyc), 253 (nsec)
00:19:14.674
00:19:14.674 real 0m1.641s
00:19:14.674 user 0m1.447s
00:19:14.674 sys 0m0.085s
00:19:14.674 20:17:09 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable
00:19:14.674 ************************************
00:19:14.674 END TEST thread_poller_perf
00:19:14.674 ************************************
00:19:14.674 20:17:09 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x
00:19:14.674 20:17:09 thread -- thread/thread.sh@17 -- # [[ y != \y ]]
00:19:14.674 ************************************
00:19:14.674 END TEST thread
00:19:14.674 ************************************
00:19:14.674
00:19:14.674 real 0m3.523s
00:19:14.674 user 0m3.021s
00:19:14.674 sys 0m0.266s
00:19:14.674 20:17:09 thread -- common/autotest_common.sh@1126 -- # xtrace_disable
00:19:14.674 20:17:09 thread -- common/autotest_common.sh@10 -- # set +x
00:19:14.674 20:17:09 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]]
00:19:14.674 20:17:09 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh
00:19:14.674 20:17:09 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:19:14.674 20:17:09 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:19:14.674 20:17:09 -- common/autotest_common.sh@10 -- # set +x
00:19:14.674 ************************************
00:19:14.674 START TEST app_cmdline
00:19:14.674 ************************************
00:19:14.674 20:17:09 app_cmdline -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh
00:19:14.674 * Looking for test storage...
00:19:14.674 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app
00:19:14.674 20:17:09 app_cmdline -- common/autotest_common.sh@1680 -- # [[ y == y ]]
00:19:14.674 20:17:09 app_cmdline -- common/autotest_common.sh@1681 -- # awk '{print $NF}'
00:19:14.674 20:17:09 app_cmdline -- common/autotest_common.sh@1681 -- # lcov --version
00:19:14.674 20:17:09 app_cmdline -- common/autotest_common.sh@1681 -- # lt 1.15 2
00:19:14.674 20:17:09 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:19:14.674 20:17:09 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l
00:19:14.674 20:17:09 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l
00:19:14.674 20:17:09 app_cmdline -- scripts/common.sh@336 -- # IFS=.-:
00:19:14.674 20:17:09 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1
00:19:14.674 20:17:09 app_cmdline -- scripts/common.sh@337 -- # IFS=.-:
00:19:14.674 20:17:09 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2
00:19:14.674 20:17:09 app_cmdline -- scripts/common.sh@338 -- # local 'op=<'
00:19:14.674 20:17:09 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2
00:19:14.674 20:17:09 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1
00:19:14.674 20:17:09 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:19:14.674 20:17:09 app_cmdline -- scripts/common.sh@344 -- # case "$op" in
00:19:14.674 20:17:09 app_cmdline -- scripts/common.sh@345 -- # : 1
00:19:14.674 20:17:09 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 ))
00:19:14.674 20:17:09 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:19:14.940 20:17:09 app_cmdline -- scripts/common.sh@365 -- # decimal 1
00:19:14.940 20:17:09 app_cmdline -- scripts/common.sh@353 -- # local d=1
00:19:14.940 20:17:09 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:19:14.940 20:17:09 app_cmdline -- scripts/common.sh@355 -- # echo 1
00:19:14.940 20:17:09 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1
00:19:14.940 20:17:09 app_cmdline -- scripts/common.sh@366 -- # decimal 2
00:19:14.940 20:17:09 app_cmdline -- scripts/common.sh@353 -- # local d=2
00:19:14.940 20:17:09 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:19:14.940 20:17:09 app_cmdline -- scripts/common.sh@355 -- # echo 2
00:19:14.940 20:17:09 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2
00:19:14.940 20:17:09 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:19:14.940 20:17:09 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:19:14.940 20:17:09 app_cmdline -- scripts/common.sh@368 -- # return 0
00:19:14.940 20:17:09 app_cmdline -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:19:14.940 20:17:09 app_cmdline -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS=
00:19:14.940 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:19:14.940 --rc genhtml_branch_coverage=1
00:19:14.940 --rc genhtml_function_coverage=1
00:19:14.940 --rc genhtml_legend=1
00:19:14.940 --rc geninfo_all_blocks=1
00:19:14.940 --rc geninfo_unexecuted_blocks=1
00:19:14.940
00:19:14.940 '
00:19:14.940 20:17:09 app_cmdline -- common/autotest_common.sh@1694 -- # LCOV_OPTS='
00:19:14.940 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:19:14.940 --rc genhtml_branch_coverage=1
00:19:14.940 --rc genhtml_function_coverage=1
00:19:14.940 --rc genhtml_legend=1
00:19:14.940 --rc geninfo_all_blocks=1
00:19:14.940 --rc geninfo_unexecuted_blocks=1
00:19:14.940
00:19:14.940 '
00:19:14.940 20:17:09 app_cmdline -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov
00:19:14.940 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:19:14.940 --rc genhtml_branch_coverage=1
00:19:14.940 --rc genhtml_function_coverage=1
00:19:14.941 --rc genhtml_legend=1
00:19:14.941 --rc geninfo_all_blocks=1
00:19:14.941 --rc geninfo_unexecuted_blocks=1
00:19:14.941
00:19:14.941 '
00:19:14.941 20:17:09 app_cmdline -- common/autotest_common.sh@1695 -- # LCOV='lcov
00:19:14.941 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:19:14.941 --rc genhtml_branch_coverage=1
00:19:14.941 --rc genhtml_function_coverage=1
00:19:14.941 --rc genhtml_legend=1
00:19:14.941 --rc geninfo_all_blocks=1
00:19:14.941 --rc geninfo_unexecuted_blocks=1
00:19:14.941
00:19:14.941 '
00:19:14.941 20:17:09 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT
00:19:14.941 20:17:09 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=60152
00:19:14.941 20:17:09 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 60152
00:19:14.941 20:17:09 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 60152 ']'
00:19:14.941 20:17:09 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:19:14.941 20:17:09 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100
00:19:14.941 20:17:09 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:19:14.941 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:19:14.941 20:17:09 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable
00:19:14.941 20:17:09 app_cmdline -- common/autotest_common.sh@10 -- # set +x
00:19:14.941 20:17:09 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods
00:19:14.941 [2024-10-01 20:17:09.974060] Starting SPDK v25.01-pre git sha1 0c2005fb5 / DPDK 24.03.0 initialization...
00:19:14.941 [2024-10-01 20:17:09.974338] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60152 ]
00:19:14.941 [2024-10-01 20:17:10.120626] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:15.198 [2024-10-01 20:17:10.314339] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:19:16.190 20:17:11 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 ))
20:17:11 app_cmdline -- common/autotest_common.sh@864 -- # return 0
20:17:11 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version
00:19:16.190 {
00:19:16.190 "version": "SPDK v25.01-pre git sha1 0c2005fb5",
00:19:16.190 "fields": {
00:19:16.190 "major": 25,
00:19:16.190 "minor": 1,
00:19:16.190 "patch": 0,
00:19:16.190 "suffix": "-pre",
00:19:16.190 "commit": "0c2005fb5"
00:19:16.190 }
00:19:16.190 }
00:19:16.190 20:17:11 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=()
20:17:11 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods")
20:17:11 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version")
20:17:11 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort))
20:17:11 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods
20:17:11 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]'
20:17:11 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable
20:17:11 app_cmdline -- app/cmdline.sh@26 -- # sort
20:17:11 app_cmdline -- common/autotest_common.sh@10 -- # set +x
20:17:11 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
20:17:11 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 ))
20:17:11 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]]
20:17:11 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats
20:17:11 app_cmdline -- common/autotest_common.sh@650 -- # local es=0
20:17:11 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats
20:17:11 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
20:17:11 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
20:17:11 app_cmdline -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py
20:17:11 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
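The version object printed above is the reply to spdk_get_version, one of the two methods this target is allowed to serve. The same payload can be pulled out directly, with jq used the same way the test uses it for rpc_get_methods (socket defaulting to /var/tmp/spdk.sock):

    scripts/rpc.py spdk_get_version | jq -r .version   # SPDK v25.01-pre git sha1 0c2005fb5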
00:19:16.191 20:17:11 app_cmdline -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:19:16.191 20:17:11 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:19:16.191 20:17:11 app_cmdline -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:19:16.191 20:17:11 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]]
00:19:16.191 20:17:11 app_cmdline -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats
00:19:16.448 request:
00:19:16.448 {
00:19:16.448 "method": "env_dpdk_get_mem_stats",
00:19:16.448 "req_id": 1
00:19:16.448 }
00:19:16.448 Got JSON-RPC error response
00:19:16.448 response:
00:19:16.448 {
00:19:16.448 "code": -32601,
00:19:16.448 "message": "Method not found"
00:19:16.448 }
00:19:16.448 20:17:11 app_cmdline -- common/autotest_common.sh@653 -- # es=1
00:19:16.448 20:17:11 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:19:16.448 20:17:11 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:19:16.448 20:17:11 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:19:16.448 20:17:11 app_cmdline -- app/cmdline.sh@1 -- # killprocess 60152
00:19:16.448 20:17:11 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 60152 ']'
00:19:16.448 20:17:11 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 60152
00:19:16.448 20:17:11 app_cmdline -- common/autotest_common.sh@955 -- # uname
00:19:16.448 20:17:11 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:19:16.448 20:17:11 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60152
killing process with pid 60152
00:19:16.448 20:17:11 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:19:16.448 20:17:11 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:19:16.448 20:17:11 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60152'
00:19:16.448 20:17:11 app_cmdline -- common/autotest_common.sh@969 -- # kill 60152
00:19:16.448 20:17:11 app_cmdline -- common/autotest_common.sh@974 -- # wait 60152
00:19:18.348 ************************************
00:19:18.348 END TEST app_cmdline
00:19:18.348 ************************************
00:19:18.348
00:19:18.348 real 0m3.755s
00:19:18.348 user 0m3.990s
00:19:18.348 sys 0m0.498s
00:19:18.349 20:17:13 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable
00:19:18.349 20:17:13 app_cmdline -- common/autotest_common.sh@10 -- # set +x
00:19:18.349 20:17:13 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh
00:19:18.349 20:17:13 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:19:18.349 20:17:13 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:19:18.349 20:17:13 -- common/autotest_common.sh@10 -- # set +x
00:19:18.349 ************************************
00:19:18.349 START TEST version
00:19:18.349 ************************************
00:19:18.349 20:17:13 version -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh
00:19:18.607 * Looking for test storage...
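The env_dpdk_get_mem_stats failure in the app_cmdline test above is the negative half of that test: the target was launched with --rpcs-allowed spdk_get_version,rpc_get_methods, so any method outside the whitelist is rejected with -32601 before dispatch. The NOT wrapper simply expects a non-zero exit from:

    scripts/rpc.py env_dpdk_get_mem_stats   # JSON-RPC error -32601: Method not found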
00:19:18.607 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app
00:19:18.607 20:17:13 version -- common/autotest_common.sh@1680 -- # [[ y == y ]]
00:19:18.607 20:17:13 version -- common/autotest_common.sh@1681 -- # lcov --version
00:19:18.607 20:17:13 version -- common/autotest_common.sh@1681 -- # awk '{print $NF}'
00:19:18.607 20:17:13 version -- common/autotest_common.sh@1681 -- # lt 1.15 2
00:19:18.607 20:17:13 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:19:18.607 20:17:13 version -- scripts/common.sh@333 -- # local ver1 ver1_l
00:19:18.607 20:17:13 version -- scripts/common.sh@334 -- # local ver2 ver2_l
00:19:18.607 20:17:13 version -- scripts/common.sh@336 -- # IFS=.-:
00:19:18.607 20:17:13 version -- scripts/common.sh@336 -- # read -ra ver1
00:19:18.607 20:17:13 version -- scripts/common.sh@337 -- # IFS=.-:
00:19:18.607 20:17:13 version -- scripts/common.sh@337 -- # read -ra ver2
00:19:18.607 20:17:13 version -- scripts/common.sh@338 -- # local 'op=<'
00:19:18.607 20:17:13 version -- scripts/common.sh@340 -- # ver1_l=2
00:19:18.607 20:17:13 version -- scripts/common.sh@341 -- # ver2_l=1
00:19:18.607 20:17:13 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:19:18.607 20:17:13 version -- scripts/common.sh@344 -- # case "$op" in
00:19:18.607 20:17:13 version -- scripts/common.sh@345 -- # : 1
00:19:18.607 20:17:13 version -- scripts/common.sh@364 -- # (( v = 0 ))
00:19:18.607 20:17:13 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:19:18.607 20:17:13 version -- scripts/common.sh@365 -- # decimal 1
00:19:18.607 20:17:13 version -- scripts/common.sh@353 -- # local d=1
00:19:18.607 20:17:13 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:19:18.607 20:17:13 version -- scripts/common.sh@355 -- # echo 1
00:19:18.607 20:17:13 version -- scripts/common.sh@365 -- # ver1[v]=1
00:19:18.607 20:17:13 version -- scripts/common.sh@366 -- # decimal 2
00:19:18.607 20:17:13 version -- scripts/common.sh@353 -- # local d=2
00:19:18.607 20:17:13 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:19:18.607 20:17:13 version -- scripts/common.sh@355 -- # echo 2
00:19:18.607 20:17:13 version -- scripts/common.sh@366 -- # ver2[v]=2
00:19:18.607 20:17:13 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:19:18.607 20:17:13 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:19:18.607 20:17:13 version -- scripts/common.sh@368 -- # return 0
00:19:18.607 20:17:13 version -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:19:18.607 20:17:13 version -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS=
00:19:18.607 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:19:18.607 --rc genhtml_branch_coverage=1
00:19:18.607 --rc genhtml_function_coverage=1
00:19:18.607 --rc genhtml_legend=1
00:19:18.607 --rc geninfo_all_blocks=1
00:19:18.607 --rc geninfo_unexecuted_blocks=1
00:19:18.607
00:19:18.607 '
00:19:18.607 20:17:13 version -- common/autotest_common.sh@1694 -- # LCOV_OPTS='
00:19:18.607 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:19:18.607 --rc genhtml_branch_coverage=1
00:19:18.607 --rc genhtml_function_coverage=1
00:19:18.607 --rc genhtml_legend=1
00:19:18.607 --rc geninfo_all_blocks=1
00:19:18.607 --rc geninfo_unexecuted_blocks=1
00:19:18.607
00:19:18.607 '
00:19:18.607 20:17:13 version -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov
00:19:18.607 --rc lcov_branch_coverage=1 --rc
lcov_function_coverage=1
00:19:18.607 --rc genhtml_branch_coverage=1
00:19:18.607 --rc genhtml_function_coverage=1
00:19:18.607 --rc genhtml_legend=1
00:19:18.607 --rc geninfo_all_blocks=1
00:19:18.607 --rc geninfo_unexecuted_blocks=1
00:19:18.607
00:19:18.607 '
00:19:18.607 20:17:13 version -- common/autotest_common.sh@1695 -- # LCOV='lcov
00:19:18.607 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:19:18.607 --rc genhtml_branch_coverage=1
00:19:18.607 --rc genhtml_function_coverage=1
00:19:18.607 --rc genhtml_legend=1
00:19:18.607 --rc geninfo_all_blocks=1
00:19:18.607 --rc geninfo_unexecuted_blocks=1
00:19:18.607
00:19:18.607 '
00:19:18.607 20:17:13 version -- app/version.sh@17 -- # get_header_version major
00:19:18.607 20:17:13 version -- app/version.sh@14 -- # cut -f2
00:19:18.607 20:17:13 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h
00:19:18.607 20:17:13 version -- app/version.sh@14 -- # tr -d '"'
00:19:18.607 20:17:13 version -- app/version.sh@17 -- # major=25
00:19:18.607 20:17:13 version -- app/version.sh@18 -- # get_header_version minor
00:19:18.607 20:17:13 version -- app/version.sh@14 -- # tr -d '"'
00:19:18.607 20:17:13 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h
00:19:18.607 20:17:13 version -- app/version.sh@14 -- # cut -f2
00:19:18.607 20:17:13 version -- app/version.sh@18 -- # minor=1
00:19:18.607 20:17:13 version -- app/version.sh@19 -- # get_header_version patch
00:19:18.607 20:17:13 version -- app/version.sh@14 -- # cut -f2
00:19:18.608 20:17:13 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h
00:19:18.608 20:17:13 version -- app/version.sh@14 -- # tr -d '"'
00:19:18.608 20:17:13 version -- app/version.sh@19 -- # patch=0
00:19:18.608 20:17:13 version -- app/version.sh@20 -- # get_header_version suffix
00:19:18.608 20:17:13 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h
00:19:18.608 20:17:13 version -- app/version.sh@14 -- # cut -f2
00:19:18.608 20:17:13 version -- app/version.sh@14 -- # tr -d '"'
00:19:18.608 20:17:13 version -- app/version.sh@20 -- # suffix=-pre
00:19:18.608 20:17:13 version -- app/version.sh@22 -- # version=25.1
00:19:18.608 20:17:13 version -- app/version.sh@25 -- # (( patch != 0 ))
00:19:18.608 20:17:13 version -- app/version.sh@28 -- # version=25.1rc0
00:19:18.608 20:17:13 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python
00:19:18.608 20:17:13 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)'
00:19:18.608 20:17:13 version -- app/version.sh@30 -- # py_version=25.1rc0
00:19:18.608 20:17:13 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]]
00:19:18.608 ************************************
00:19:18.608 END TEST version
00:19:18.608 ************************************
00:19:18.608
00:19:18.608 real 0m0.206s
00:19:18.608 user 0m0.134s
00:19:18.608 sys 0m0.096s
00:19:18.608 20:17:13 version -- common/autotest_common.sh@1126 -- # xtrace_disable
00:19:18.608 20:17:13 version -- common/autotest_common.sh@10 -- # set +x
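get_header_version, exercised above for major, minor, patch, and suffix, assembles the version string by scraping include/spdk/version.h; each component is the same grep/cut/tr pipeline over a #define line. A condensed sketch of what version.sh does (the real helper derives the uppercase field name from its argument):

    # Grep the #define, take the value field, strip the quotes.
    get_header_version() {
        grep -E "^#define SPDK_VERSION_${1}[[:space:]]+" include/spdk/version.h |
            cut -f2 | tr -d '"'
    }
    major=$(get_header_version MAJOR)   # 25
    minor=$(get_header_version MINOR)   # 1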
20:17:13 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']'
20:17:13 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]]
20:17:13 -- spdk/autotest.sh@194 -- # uname -s
20:17:13 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]]
20:17:13 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]]
20:17:13 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]]
20:17:13 -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']'
20:17:13 -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme
20:17:13 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
20:17:13 -- common/autotest_common.sh@1107 -- # xtrace_disable
20:17:13 -- common/autotest_common.sh@10 -- # set +x
00:19:18.608 ************************************
00:19:18.608 START TEST blockdev_nvme
00:19:18.608 ************************************
00:19:18.608 20:17:13 blockdev_nvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme
00:19:18.867 * Looking for test storage...
00:19:18.867 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev
00:19:18.867 20:17:13 blockdev_nvme -- common/autotest_common.sh@1680 -- # [[ y == y ]]
00:19:18.867 20:17:13 blockdev_nvme -- common/autotest_common.sh@1681 -- # awk '{print $NF}'
00:19:18.867 20:17:13 blockdev_nvme -- common/autotest_common.sh@1681 -- # lcov --version
00:19:18.867 20:17:13 blockdev_nvme -- common/autotest_common.sh@1681 -- # lt 1.15 2
00:19:18.867 20:17:13 blockdev_nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:19:18.867 20:17:13 blockdev_nvme -- scripts/common.sh@333 -- # local ver1 ver1_l
00:19:18.867 20:17:13 blockdev_nvme -- scripts/common.sh@334 -- # local ver2 ver2_l
00:19:18.867 20:17:13 blockdev_nvme -- scripts/common.sh@336 -- # IFS=.-:
00:19:18.867 20:17:13 blockdev_nvme -- scripts/common.sh@336 -- # read -ra ver1
00:19:18.867 20:17:13 blockdev_nvme -- scripts/common.sh@337 -- # IFS=.-:
00:19:18.867 20:17:13 blockdev_nvme -- scripts/common.sh@337 -- # read -ra ver2
00:19:18.867 20:17:13 blockdev_nvme -- scripts/common.sh@338 -- # local 'op=<'
00:19:18.867 20:17:13 blockdev_nvme -- scripts/common.sh@340 -- # ver1_l=2
00:19:18.867 20:17:13 blockdev_nvme -- scripts/common.sh@341 -- # ver2_l=1
00:19:18.867 20:17:13 blockdev_nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:19:18.867 20:17:13 blockdev_nvme -- scripts/common.sh@344 -- # case "$op" in
00:19:18.867 20:17:13 blockdev_nvme -- scripts/common.sh@345 -- # : 1
00:19:18.867 20:17:13 blockdev_nvme -- scripts/common.sh@364 -- # (( v = 0 ))
00:19:18.867 20:17:13 blockdev_nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ?
ver1_l : ver2_l) )) 00:19:18.867 20:17:13 blockdev_nvme -- scripts/common.sh@365 -- # decimal 1 00:19:18.867 20:17:13 blockdev_nvme -- scripts/common.sh@353 -- # local d=1 00:19:18.867 20:17:13 blockdev_nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:18.867 20:17:13 blockdev_nvme -- scripts/common.sh@355 -- # echo 1 00:19:18.867 20:17:13 blockdev_nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:19:18.867 20:17:13 blockdev_nvme -- scripts/common.sh@366 -- # decimal 2 00:19:18.867 20:17:13 blockdev_nvme -- scripts/common.sh@353 -- # local d=2 00:19:18.867 20:17:13 blockdev_nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:18.867 20:17:13 blockdev_nvme -- scripts/common.sh@355 -- # echo 2 00:19:18.867 20:17:13 blockdev_nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:19:18.867 20:17:13 blockdev_nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:18.867 20:17:13 blockdev_nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:18.867 20:17:13 blockdev_nvme -- scripts/common.sh@368 -- # return 0 00:19:18.867 20:17:13 blockdev_nvme -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:18.867 20:17:13 blockdev_nvme -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:19:18.867 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:18.867 --rc genhtml_branch_coverage=1 00:19:18.867 --rc genhtml_function_coverage=1 00:19:18.867 --rc genhtml_legend=1 00:19:18.867 --rc geninfo_all_blocks=1 00:19:18.867 --rc geninfo_unexecuted_blocks=1 00:19:18.867 00:19:18.867 ' 00:19:18.867 20:17:13 blockdev_nvme -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:19:18.867 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:18.867 --rc genhtml_branch_coverage=1 00:19:18.867 --rc genhtml_function_coverage=1 00:19:18.867 --rc genhtml_legend=1 00:19:18.867 --rc geninfo_all_blocks=1 00:19:18.867 --rc geninfo_unexecuted_blocks=1 00:19:18.867 00:19:18.867 ' 00:19:18.867 20:17:13 blockdev_nvme -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:19:18.867 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:18.867 --rc genhtml_branch_coverage=1 00:19:18.867 --rc genhtml_function_coverage=1 00:19:18.867 --rc genhtml_legend=1 00:19:18.867 --rc geninfo_all_blocks=1 00:19:18.867 --rc geninfo_unexecuted_blocks=1 00:19:18.867 00:19:18.867 ' 00:19:18.867 20:17:13 blockdev_nvme -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:19:18.867 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:18.867 --rc genhtml_branch_coverage=1 00:19:18.867 --rc genhtml_function_coverage=1 00:19:18.867 --rc genhtml_legend=1 00:19:18.867 --rc geninfo_all_blocks=1 00:19:18.867 --rc geninfo_unexecuted_blocks=1 00:19:18.867 00:19:18.867 ' 00:19:18.867 20:17:13 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:19:18.867 20:17:13 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:19:18.867 20:17:13 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:19:18.867 20:17:13 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:19:18.867 20:17:13 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:19:18.867 20:17:13 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:19:18.867 20:17:13 blockdev_nvme -- bdev/blockdev.sh@17 -- # export 
RPC_PIPE_TIMEOUT=30 00:19:18.867 20:17:13 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:19:18.867 20:17:13 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:19:18.867 20:17:13 blockdev_nvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:19:18.867 20:17:13 blockdev_nvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:19:18.867 20:17:13 blockdev_nvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:19:18.867 20:17:13 blockdev_nvme -- bdev/blockdev.sh@673 -- # uname -s 00:19:18.867 20:17:13 blockdev_nvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:19:18.867 20:17:13 blockdev_nvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:19:18.867 20:17:13 blockdev_nvme -- bdev/blockdev.sh@681 -- # test_type=nvme 00:19:18.867 20:17:13 blockdev_nvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:19:18.867 20:17:13 blockdev_nvme -- bdev/blockdev.sh@683 -- # dek= 00:19:18.867 20:17:13 blockdev_nvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:19:18.867 20:17:13 blockdev_nvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:19:18.867 20:17:13 blockdev_nvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:19:18.867 20:17:13 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == bdev ]] 00:19:18.867 20:17:13 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == crypto_* ]] 00:19:18.867 20:17:13 blockdev_nvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:19:18.867 20:17:13 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=60335 00:19:18.867 20:17:13 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:19:18.867 20:17:13 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 60335 00:19:18.867 20:17:13 blockdev_nvme -- common/autotest_common.sh@831 -- # '[' -z 60335 ']' 00:19:18.867 20:17:13 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:19:18.867 20:17:13 blockdev_nvme -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:18.867 20:17:13 blockdev_nvme -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:18.867 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:18.867 20:17:13 blockdev_nvme -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:18.867 20:17:13 blockdev_nvme -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:18.867 20:17:13 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:19:18.867 [2024-10-01 20:17:14.023330] Starting SPDK v25.01-pre git sha1 0c2005fb5 / DPDK 24.03.0 initialization... 
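For reference, the version suite traced above reduces to a small shell idiom. A minimal sketch, assuming the stock include/spdk/version.h layout in which each #define line is tab-separated and string values are double-quoted:

    get_header_version() {
        # e.g. MAJOR -> 25, MINOR -> 1, PATCH -> 0, SUFFIX -> -pre
        grep -E "^#define SPDK_VERSION_${1}[[:space:]]+" \
            /home/vagrant/spdk_repo/spdk/include/spdk/version.h | cut -f2 | tr -d '"'
    }
    version="$(get_header_version MAJOR).$(get_header_version MINOR)"
    # patch is 0 in this run, so no .patch component is appended; the -pre
    # suffix is reported as rc0, and the resulting 25.1rc0 must match
    # python3 -c 'import spdk; print(spdk.__version__)'.
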
00:19:18.867 [2024-10-01 20:17:14.023596] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60335 ] 00:19:19.125 [2024-10-01 20:17:14.175821] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:19.383 [2024-10-01 20:17:14.365297] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:19:20.317 20:17:15 blockdev_nvme -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:20.317 20:17:15 blockdev_nvme -- common/autotest_common.sh@864 -- # return 0 00:19:20.317 20:17:15 blockdev_nvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:19:20.317 20:17:15 blockdev_nvme -- bdev/blockdev.sh@698 -- # setup_nvme_conf 00:19:20.317 20:17:15 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:19:20.317 20:17:15 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:19:20.317 20:17:15 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:19:20.317 20:17:15 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:19:20.317 20:17:15 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.317 20:17:15 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:19:20.317 20:17:15 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.317 20:17:15 blockdev_nvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:19:20.317 20:17:15 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.317 20:17:15 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:19:20.317 20:17:15 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.317 20:17:15 blockdev_nvme -- bdev/blockdev.sh@739 -- # cat 00:19:20.317 20:17:15 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:19:20.317 20:17:15 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.317 20:17:15 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:19:20.317 20:17:15 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.317 20:17:15 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:19:20.317 20:17:15 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.317 20:17:15 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:19:20.575 20:17:15 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.575 20:17:15 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:19:20.575 20:17:15 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.575 20:17:15 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:19:20.575 20:17:15 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.575 20:17:15 blockdev_nvme -- 
bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:19:20.575 20:17:15 blockdev_nvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:19:20.575 20:17:15 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.575 20:17:15 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:19:20.575 20:17:15 blockdev_nvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:19:20.575 20:17:15 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.575 20:17:15 blockdev_nvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:19:20.575 20:17:15 blockdev_nvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:19:20.576 20:17:15 blockdev_nvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "392de506-197a-4157-ba35-c43228f320a8"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "392de506-197a-4157-ba35-c43228f320a8",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "f32d778f-3627-4841-b4bd-6c2ddf975d2a"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "f32d778f-3627-4841-b4bd-6c2ddf975d2a",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": 
"nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "9a852ad4-28fe-4b21-aecf-5ab4cce63c4f"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "9a852ad4-28fe-4b21-aecf-5ab4cce63c4f",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "8f94fd3f-8bc5-4b61-ad8c-525c0d7b5b61"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "8f94fd3f-8bc5-4b61-ad8c-525c0d7b5b61",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "f8fe903f-feb2-4d6d-b940-97b81fe1dc68"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 
1048576,' ' "uuid": "f8fe903f-feb2-4d6d-b940-97b81fe1dc68",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "2b097fcd-cb52-47ad-a778-db2c3ccd62a5"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "2b097fcd-cb52-47ad-a778-db2c3ccd62a5",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:19:20.576 20:17:15 blockdev_nvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:19:20.576 20:17:15 blockdev_nvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:19:20.576 20:17:15 blockdev_nvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:19:20.576 20:17:15 blockdev_nvme -- bdev/blockdev.sh@753 -- # killprocess 60335 00:19:20.576 20:17:15 blockdev_nvme -- common/autotest_common.sh@950 -- # '[' -z 60335 ']' 00:19:20.576 20:17:15 blockdev_nvme -- common/autotest_common.sh@954 -- # kill -0 60335 00:19:20.576 20:17:15 blockdev_nvme -- common/autotest_common.sh@955 -- # uname 00:19:20.576 20:17:15 
blockdev_nvme -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:20.576 20:17:15 blockdev_nvme -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60335 00:19:20.576 killing process with pid 60335 00:19:20.576 20:17:15 blockdev_nvme -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:20.576 20:17:15 blockdev_nvme -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:20.576 20:17:15 blockdev_nvme -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60335' 00:19:20.576 20:17:15 blockdev_nvme -- common/autotest_common.sh@969 -- # kill 60335 00:19:20.576 20:17:15 blockdev_nvme -- common/autotest_common.sh@974 -- # wait 60335 00:19:23.103 20:17:17 blockdev_nvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:19:23.103 20:17:17 blockdev_nvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:19:23.103 20:17:17 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:19:23.103 20:17:17 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:23.103 20:17:17 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:19:23.103 ************************************ 00:19:23.103 START TEST bdev_hello_world 00:19:23.103 ************************************ 00:19:23.103 20:17:17 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:19:23.103 [2024-10-01 20:17:17.784588] Starting SPDK v25.01-pre git sha1 0c2005fb5 / DPDK 24.03.0 initialization... 00:19:23.103 [2024-10-01 20:17:17.784958] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60430 ] 00:19:23.103 [2024-10-01 20:17:17.951300] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:23.103 [2024-10-01 20:17:18.146362] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:19:24.035 [2024-10-01 20:17:18.885646] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:19:24.035 [2024-10-01 20:17:18.885711] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:19:24.035 [2024-10-01 20:17:18.885732] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:19:24.035 [2024-10-01 20:17:18.888324] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:19:24.035 [2024-10-01 20:17:18.888763] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:19:24.035 [2024-10-01 20:17:18.888789] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:19:24.035 [2024-10-01 20:17:18.888930] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
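The bdev_hello_world pass above is a single run of the hello_bdev example binary against the generated NVMe config; reproduced by hand, with the same paths as this run, it is:

    # Writes "Hello World!" through the bdev layer and reads it back from Nvme0n1
    /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1
    # Expected NOTICE sequence, as logged above: open the bdev, open an io
    # channel, write, then read, ending in "Read string from bdev : Hello World!"
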
00:19:24.035 00:19:24.035 [2024-10-01 20:17:18.888952] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:19:25.407 00:19:25.408 real 0m2.519s 00:19:25.408 user 0m2.136s 00:19:25.408 sys 0m0.254s 00:19:25.408 20:17:20 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:25.408 20:17:20 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:19:25.408 ************************************ 00:19:25.408 END TEST bdev_hello_world 00:19:25.408 ************************************ 00:19:25.408 20:17:20 blockdev_nvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:19:25.408 20:17:20 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:19:25.408 20:17:20 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:25.408 20:17:20 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:19:25.408 ************************************ 00:19:25.408 START TEST bdev_bounds 00:19:25.408 ************************************ 00:19:25.408 20:17:20 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:19:25.408 20:17:20 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=60477 00:19:25.408 20:17:20 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:19:25.408 Process bdevio pid: 60477 00:19:25.408 20:17:20 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 60477' 00:19:25.408 20:17:20 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 60477 00:19:25.408 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:25.408 20:17:20 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 60477 ']' 00:19:25.408 20:17:20 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:19:25.408 20:17:20 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:25.408 20:17:20 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:25.408 20:17:20 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:25.408 20:17:20 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:25.408 20:17:20 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:19:25.408 [2024-10-01 20:17:20.320354] Starting SPDK v25.01-pre git sha1 0c2005fb5 / DPDK 24.03.0 initialization... 
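The bdev_bounds test starting here drives two commands, both visible in the surrounding trace; in sketch form, under the assumption that -w makes bdevio wait for an RPC trigger (which matches the waitforlisten/perform_tests sequence logged in this run):

    # Boundary-condition I/O server over the six NVMe bdevs from bdev.json
    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' &
    # Once /var/tmp/spdk.sock is listening, start the CUnit suites over RPC
    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests
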
00:19:25.408 [2024-10-01 20:17:20.320630] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60477 ] 00:19:25.408 [2024-10-01 20:17:20.469667] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:25.666 [2024-10-01 20:17:20.664254] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:19:25.666 [2024-10-01 20:17:20.664617] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:19:25.666 [2024-10-01 20:17:20.664670] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:19:26.637 20:17:21 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:26.637 20:17:21 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:19:26.637 20:17:21 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:19:26.637 I/O targets: 00:19:26.637 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:19:26.637 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:19:26.637 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:19:26.637 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:19:26.637 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:19:26.637 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:19:26.637 00:19:26.637 00:19:26.637 CUnit - A unit testing framework for C - Version 2.1-3 00:19:26.637 http://cunit.sourceforge.net/ 00:19:26.637 00:19:26.637 00:19:26.637 Suite: bdevio tests on: Nvme3n1 00:19:26.637 Test: blockdev write read block ...passed 00:19:26.637 Test: blockdev write zeroes read block ...passed 00:19:26.637 Test: blockdev write zeroes read no split ...passed 00:19:26.637 Test: blockdev write zeroes read split ...passed 00:19:26.637 Test: blockdev write zeroes read split partial ...passed 00:19:26.637 Test: blockdev reset ...[2024-10-01 20:17:21.630311] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0] resetting controller 00:19:26.637 [2024-10-01 20:17:21.633189] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:19:26.637 passed 00:19:26.637 Test: blockdev write read 8 blocks ...passed 00:19:26.637 Test: blockdev write read size > 128k ...passed 00:19:26.637 Test: blockdev write read invalid size ...passed 00:19:26.637 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:26.637 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:26.637 Test: blockdev write read max offset ...passed 00:19:26.637 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:26.637 Test: blockdev writev readv 8 blocks ...passed 00:19:26.637 Test: blockdev writev readv 30 x 1block ...passed 00:19:26.637 Test: blockdev writev readv block ...passed 00:19:26.637 Test: blockdev writev readv size > 128k ...passed 00:19:26.637 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:26.637 Test: blockdev comparev and writev ...[2024-10-01 20:17:21.640141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c2a0a000 len:0x1000 00:19:26.637 [2024-10-01 20:17:21.640521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:19:26.637 passed 00:19:26.637 Test: blockdev nvme passthru rw ...passed 00:19:26.637 Test: blockdev nvme passthru vendor specific ...[2024-10-01 20:17:21.641574] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 Ppassed 00:19:26.637 Test: blockdev nvme admin passthru ...RP2 0x0 00:19:26.637 [2024-10-01 20:17:21.641842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:19:26.637 passed 00:19:26.637 Test: blockdev copy ...passed 00:19:26.637 Suite: bdevio tests on: Nvme2n3 00:19:26.637 Test: blockdev write read block ...passed 00:19:26.637 Test: blockdev write zeroes read block ...passed 00:19:26.637 Test: blockdev write zeroes read no split ...passed 00:19:26.638 Test: blockdev write zeroes read split ...passed 00:19:26.638 Test: blockdev write zeroes read split partial ...passed 00:19:26.638 Test: blockdev reset ...[2024-10-01 20:17:21.697937] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:19:26.638 [2024-10-01 20:17:21.701055] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:19:26.638 passed 00:19:26.638 Test: blockdev write read 8 blocks ...passed 00:19:26.638 Test: blockdev write read size > 128k ...passed 00:19:26.638 Test: blockdev write read invalid size ...passed 00:19:26.638 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:26.638 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:26.638 Test: blockdev write read max offset ...passed 00:19:26.638 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:26.638 Test: blockdev writev readv 8 blocks ...passed 00:19:26.638 Test: blockdev writev readv 30 x 1block ...passed 00:19:26.638 Test: blockdev writev readv block ...passed 00:19:26.638 Test: blockdev writev readv size > 128k ...passed 00:19:26.638 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:26.638 Test: blockdev comparev and writev ...[2024-10-01 20:17:21.708133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x292204000 len:0x1000 00:19:26.638 [2024-10-01 20:17:21.708188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:19:26.638 passed 00:19:26.638 Test: blockdev nvme passthru rw ...passed 00:19:26.638 Test: blockdev nvme passthru vendor specific ...[2024-10-01 20:17:21.708710] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:19:26.638 [2024-10-01 20:17:21.708735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:19:26.638 passed 00:19:26.638 Test: blockdev nvme admin passthru ...passed 00:19:26.638 Test: blockdev copy ...passed 00:19:26.638 Suite: bdevio tests on: Nvme2n2 00:19:26.638 Test: blockdev write read block ...passed 00:19:26.638 Test: blockdev write zeroes read block ...passed 00:19:26.638 Test: blockdev write zeroes read no split ...passed 00:19:26.638 Test: blockdev write zeroes read split ...passed 00:19:26.638 Test: blockdev write zeroes read split partial ...passed 00:19:26.638 Test: blockdev reset ...[2024-10-01 20:17:21.766727] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:19:26.638 [2024-10-01 20:17:21.769796] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:19:26.638 passed 00:19:26.638 Test: blockdev write read 8 blocks ...passed 00:19:26.638 Test: blockdev write read size > 128k ...passed 00:19:26.638 Test: blockdev write read invalid size ...passed 00:19:26.638 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:26.638 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:26.638 Test: blockdev write read max offset ...passed 00:19:26.638 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:26.638 Test: blockdev writev readv 8 blocks ...passed 00:19:26.638 Test: blockdev writev readv 30 x 1block ...passed 00:19:26.638 Test: blockdev writev readv block ...passed 00:19:26.638 Test: blockdev writev readv size > 128k ...passed 00:19:26.638 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:26.638 Test: blockdev comparev and writev ...[2024-10-01 20:17:21.776351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2cf83a000 len:0x1000 00:19:26.638 [2024-10-01 20:17:21.776423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:19:26.638 passed 00:19:26.638 Test: blockdev nvme passthru rw ...passed 00:19:26.638 Test: blockdev nvme passthru vendor specific ...[2024-10-01 20:17:21.777099] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1passed 00:19:26.638 Test: blockdev nvme admin passthru ... cid:190 PRP1 0x0 PRP2 0x0 00:19:26.638 [2024-10-01 20:17:21.777268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:19:26.638 passed 00:19:26.638 Test: blockdev copy ...passed 00:19:26.638 Suite: bdevio tests on: Nvme2n1 00:19:26.638 Test: blockdev write read block ...passed 00:19:26.638 Test: blockdev write zeroes read block ...passed 00:19:26.638 Test: blockdev write zeroes read no split ...passed 00:19:26.638 Test: blockdev write zeroes read split ...passed 00:19:26.933 Test: blockdev write zeroes read split partial ...passed 00:19:26.933 Test: blockdev reset ...[2024-10-01 20:17:21.839672] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:19:26.933 [2024-10-01 20:17:21.843605] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:19:26.933 passed 00:19:26.933 Test: blockdev write read 8 blocks ...passed 00:19:26.933 Test: blockdev write read size > 128k ...passed 00:19:26.933 Test: blockdev write read invalid size ...passed 00:19:26.933 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:26.933 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:26.933 Test: blockdev write read max offset ...passed 00:19:26.933 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:26.933 Test: blockdev writev readv 8 blocks ...passed 00:19:26.934 Test: blockdev writev readv 30 x 1block ...passed 00:19:26.934 Test: blockdev writev readv block ...passed 00:19:26.934 Test: blockdev writev readv size > 128k ...passed 00:19:26.934 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:26.934 Test: blockdev comparev and writev ...[2024-10-01 20:17:21.850773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2cf834000 len:0x1000 00:19:26.934 [2024-10-01 20:17:21.850979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:19:26.934 passed 00:19:26.934 Test: blockdev nvme passthru rw ...passed 00:19:26.934 Test: blockdev nvme passthru vendor specific ...passed 00:19:26.934 Test: blockdev nvme admin passthru ...[2024-10-01 20:17:21.852404] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:19:26.934 [2024-10-01 20:17:21.852451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:19:26.934 passed 00:19:26.934 Test: blockdev copy ...passed 00:19:26.934 Suite: bdevio tests on: Nvme1n1 00:19:26.934 Test: blockdev write read block ...passed 00:19:26.934 Test: blockdev write zeroes read block ...passed 00:19:26.934 Test: blockdev write zeroes read no split ...passed 00:19:26.934 Test: blockdev write zeroes read split ...passed 00:19:26.934 Test: blockdev write zeroes read split partial ...passed 00:19:26.934 Test: blockdev reset ...[2024-10-01 20:17:21.908532] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller 00:19:26.934 passed 00:19:26.934 Test: blockdev write read 8 blocks ...[2024-10-01 20:17:21.911251] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:19:26.934 passed 00:19:26.934 Test: blockdev write read size > 128k ...passed 00:19:26.934 Test: blockdev write read invalid size ...passed 00:19:26.934 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:26.934 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:26.934 Test: blockdev write read max offset ...passed 00:19:26.934 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:26.934 Test: blockdev writev readv 8 blocks ...passed 00:19:26.934 Test: blockdev writev readv 30 x 1block ...passed 00:19:26.934 Test: blockdev writev readv block ...passed 00:19:26.934 Test: blockdev writev readv size > 128k ...passed 00:19:26.934 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:26.934 Test: blockdev comparev and writev ...[2024-10-01 20:17:21.917242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2cf830000 len:0x1000 00:19:26.934 [2024-10-01 20:17:21.917300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:19:26.934 passed 00:19:26.934 Test: blockdev nvme passthru rw ...passed 00:19:26.934 Test: blockdev nvme passthru vendor specific ...passed 00:19:26.934 Test: blockdev nvme admin passthru ...[2024-10-01 20:17:21.917812] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:19:26.934 [2024-10-01 20:17:21.917843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:19:26.934 passed 00:19:26.934 Test: blockdev copy ...passed 00:19:26.934 Suite: bdevio tests on: Nvme0n1 00:19:26.934 Test: blockdev write read block ...passed 00:19:26.934 Test: blockdev write zeroes read block ...passed 00:19:26.934 Test: blockdev write zeroes read no split ...passed 00:19:26.934 Test: blockdev write zeroes read split ...passed 00:19:26.934 Test: blockdev write zeroes read split partial ...passed 00:19:26.934 Test: blockdev reset ...[2024-10-01 20:17:21.963510] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:19:26.934 passed 00:19:26.934 Test: blockdev write read 8 blocks ...[2024-10-01 20:17:21.966271] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:19:26.934 passed 00:19:26.934 Test: blockdev write read size > 128k ...passed 00:19:26.934 Test: blockdev write read invalid size ...passed 00:19:26.934 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:26.934 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:26.934 Test: blockdev write read max offset ...passed 00:19:26.934 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:26.934 Test: blockdev writev readv 8 blocks ...passed 00:19:26.934 Test: blockdev writev readv 30 x 1block ...passed 00:19:26.934 Test: blockdev writev readv block ...passed 00:19:26.934 Test: blockdev writev readv size > 128k ...passed 00:19:26.934 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:26.934 Test: blockdev comparev and writev ...passed 00:19:26.934 Test: blockdev nvme passthru rw ...[2024-10-01 20:17:21.971130] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:19:26.934 separate metadata which is not supported yet. 
00:19:26.934 passed 00:19:26.934 Test: blockdev nvme passthru vendor specific ...passed 00:19:26.934 Test: blockdev nvme admin passthru ...[2024-10-01 20:17:21.971501] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:19:26.934 [2024-10-01 20:17:21.971547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:19:26.934 passed 00:19:26.934 Test: blockdev copy ...passed 00:19:26.934 00:19:26.934 Run Summary: Type Total Ran Passed Failed Inactive 00:19:26.934 suites 6 6 n/a 0 0 00:19:26.934 tests 138 138 138 0 0 00:19:26.934 asserts 893 893 893 0 n/a 00:19:26.934 00:19:26.934 Elapsed time = 1.028 seconds 00:19:26.934 0 00:19:26.934 20:17:21 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 60477 00:19:26.934 20:17:21 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 60477 ']' 00:19:26.934 20:17:21 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 60477 00:19:26.934 20:17:21 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:19:26.934 20:17:21 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:26.934 20:17:21 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60477 00:19:26.934 20:17:22 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:26.934 20:17:22 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:26.934 20:17:22 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60477' 00:19:26.934 killing process with pid 60477 00:19:26.934 20:17:22 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@969 -- # kill 60477 00:19:26.934 20:17:22 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@974 -- # wait 60477 00:19:28.832 20:17:23 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:19:28.832 00:19:28.832 real 0m3.599s 00:19:28.832 user 0m9.494s 00:19:28.832 sys 0m0.384s 00:19:28.832 20:17:23 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:28.833 20:17:23 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:19:28.833 ************************************ 00:19:28.833 END TEST bdev_bounds 00:19:28.833 ************************************ 00:19:28.833 20:17:23 blockdev_nvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:19:28.833 20:17:23 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:19:28.833 20:17:23 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:28.833 20:17:23 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:19:28.833 ************************************ 00:19:28.833 START TEST bdev_nbd 00:19:28.833 ************************************ 00:19:28.833 20:17:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:19:28.833 20:17:23 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:19:28.833 20:17:23 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:19:28.833 20:17:23 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:19:28.833 20:17:23 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:19:28.833 20:17:23 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:19:28.833 20:17:23 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:19:28.833 20:17:23 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:19:28.833 20:17:23 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:19:28.833 20:17:23 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:19:28.833 20:17:23 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:19:28.833 20:17:23 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:19:28.833 20:17:23 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:19:28.833 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:19:28.833 20:17:23 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:19:28.833 20:17:23 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:19:28.833 20:17:23 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:19:28.833 20:17:23 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=60548 00:19:28.833 20:17:23 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:19:28.833 20:17:23 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 60548 /var/tmp/spdk-nbd.sock 00:19:28.833 20:17:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 60548 ']' 00:19:28.833 20:17:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:19:28.833 20:17:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:28.833 20:17:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:19:28.833 20:17:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:28.833 20:17:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:19:28.833 20:17:23 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:19:28.833 [2024-10-01 20:17:23.963031] Starting SPDK v25.01-pre git sha1 0c2005fb5 / DPDK 24.03.0 initialization... 
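The nbd_function_test flow that follows exports each bdev as a kernel /dev/nbdX node and verifies one 4 KiB direct read per disk; the per-disk steps traced below are, in sketch form, with the socket and paths as in this run:

    # Export a bdev via the NBD server inside bdev_svc (pid 60548 above)
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock \
        nbd_start_disk Nvme0n1                    # prints the node, e.g. /dev/nbd0
    grep -q -w nbd0 /proc/partitions              # retried until the kernel sees it
    dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest \
        bs=4096 count=1 iflag=direct              # produces the "1+0 records" lines
    stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest   # must report 4096
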
00:19:28.833 [2024-10-01 20:17:23.963158] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:29.092 [2024-10-01 20:17:24.115291] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:29.349 [2024-10-01 20:17:24.310600] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:19:29.913 20:17:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:29.913 20:17:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:19:29.913 20:17:25 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:19:29.913 20:17:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:29.913 20:17:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:19:29.913 20:17:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:19:29.913 20:17:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:19:29.913 20:17:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:29.913 20:17:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:19:29.913 20:17:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:19:29.913 20:17:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:19:29.913 20:17:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:19:29.914 20:17:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:19:29.914 20:17:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:19:29.914 20:17:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:19:30.171 20:17:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:19:30.171 20:17:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:19:30.171 20:17:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:19:30.171 20:17:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:19:30.171 20:17:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:19:30.171 20:17:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:19:30.171 20:17:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:19:30.171 20:17:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:19:30.171 20:17:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:19:30.171 20:17:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:19:30.171 20:17:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:19:30.171 20:17:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:30.171 1+0 records in 
00:19:30.171 1+0 records out 00:19:30.171 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000406996 s, 10.1 MB/s 00:19:30.171 20:17:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:30.171 20:17:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:19:30.171 20:17:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:30.171 20:17:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:19:30.171 20:17:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:19:30.171 20:17:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:19:30.171 20:17:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:19:30.171 20:17:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:19:30.429 20:17:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:19:30.429 20:17:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:19:30.429 20:17:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:19:30.429 20:17:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:19:30.429 20:17:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:19:30.429 20:17:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:19:30.429 20:17:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:19:30.429 20:17:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:19:30.429 20:17:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:19:30.429 20:17:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:19:30.429 20:17:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:19:30.429 20:17:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:30.429 1+0 records in 00:19:30.429 1+0 records out 00:19:30.429 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000504382 s, 8.1 MB/s 00:19:30.429 20:17:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:30.429 20:17:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:19:30.429 20:17:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:30.429 20:17:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:19:30.429 20:17:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:19:30.429 20:17:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:19:30.429 20:17:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:19:30.429 20:17:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:19:30.686 20:17:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:19:30.686 20:17:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:19:30.686 20:17:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # 
waitfornbd nbd2 00:19:30.686 20:17:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd2 00:19:30.686 20:17:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:19:30.686 20:17:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:19:30.686 20:17:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:19:30.686 20:17:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd2 /proc/partitions 00:19:30.686 20:17:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:19:30.686 20:17:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:19:30.686 20:17:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:19:30.686 20:17:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:30.686 1+0 records in 00:19:30.686 1+0 records out 00:19:30.686 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000633592 s, 6.5 MB/s 00:19:30.686 20:17:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:30.686 20:17:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:19:30.686 20:17:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:30.686 20:17:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:19:30.686 20:17:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:19:30.686 20:17:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:19:30.686 20:17:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:19:30.686 20:17:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:19:30.944 20:17:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:19:30.944 20:17:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:19:30.944 20:17:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:19:30.944 20:17:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd3 00:19:30.944 20:17:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:19:30.944 20:17:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:19:30.944 20:17:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:19:30.944 20:17:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd3 /proc/partitions 00:19:30.944 20:17:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:19:30.944 20:17:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:19:30.944 20:17:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:19:30.944 20:17:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:30.944 1+0 records in 00:19:30.944 1+0 records out 00:19:30.944 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000432267 s, 9.5 MB/s 00:19:30.944 20:17:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:30.945 20:17:25 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:19:30.945 20:17:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:30.945 20:17:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:19:30.945 20:17:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:19:30.945 20:17:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:19:30.945 20:17:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:19:30.945 20:17:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:19:31.203 20:17:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:19:31.203 20:17:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:19:31.203 20:17:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:19:31.203 20:17:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd4 00:19:31.203 20:17:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:19:31.203 20:17:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:19:31.203 20:17:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:19:31.203 20:17:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd4 /proc/partitions 00:19:31.203 20:17:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:19:31.203 20:17:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:19:31.203 20:17:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:19:31.203 20:17:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:31.203 1+0 records in 00:19:31.203 1+0 records out 00:19:31.203 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000397001 s, 10.3 MB/s 00:19:31.203 20:17:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:31.203 20:17:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:19:31.203 20:17:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:31.203 20:17:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:19:31.203 20:17:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:19:31.203 20:17:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:19:31.203 20:17:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:19:31.203 20:17:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:19:31.462 20:17:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:19:31.462 20:17:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:19:31.462 20:17:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:19:31.462 20:17:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd5 00:19:31.462 20:17:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:19:31.462 20:17:26 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@871 -- # (( i = 1 )) 00:19:31.462 20:17:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:19:31.462 20:17:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd5 /proc/partitions 00:19:31.462 20:17:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:19:31.462 20:17:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:19:31.462 20:17:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:19:31.462 20:17:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:31.462 1+0 records in 00:19:31.462 1+0 records out 00:19:31.462 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000503984 s, 8.1 MB/s 00:19:31.462 20:17:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:31.462 20:17:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:19:31.462 20:17:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:31.462 20:17:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:19:31.462 20:17:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:19:31.462 20:17:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:19:31.462 20:17:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:19:31.462 20:17:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:19:31.462 20:17:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:19:31.462 { 00:19:31.462 "nbd_device": "/dev/nbd0", 00:19:31.462 "bdev_name": "Nvme0n1" 00:19:31.462 }, 00:19:31.462 { 00:19:31.462 "nbd_device": "/dev/nbd1", 00:19:31.462 "bdev_name": "Nvme1n1" 00:19:31.462 }, 00:19:31.462 { 00:19:31.462 "nbd_device": "/dev/nbd2", 00:19:31.462 "bdev_name": "Nvme2n1" 00:19:31.462 }, 00:19:31.462 { 00:19:31.462 "nbd_device": "/dev/nbd3", 00:19:31.462 "bdev_name": "Nvme2n2" 00:19:31.463 }, 00:19:31.463 { 00:19:31.463 "nbd_device": "/dev/nbd4", 00:19:31.463 "bdev_name": "Nvme2n3" 00:19:31.463 }, 00:19:31.463 { 00:19:31.463 "nbd_device": "/dev/nbd5", 00:19:31.463 "bdev_name": "Nvme3n1" 00:19:31.463 } 00:19:31.463 ]' 00:19:31.463 20:17:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:19:31.463 20:17:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:19:31.463 { 00:19:31.463 "nbd_device": "/dev/nbd0", 00:19:31.463 "bdev_name": "Nvme0n1" 00:19:31.463 }, 00:19:31.463 { 00:19:31.463 "nbd_device": "/dev/nbd1", 00:19:31.463 "bdev_name": "Nvme1n1" 00:19:31.463 }, 00:19:31.463 { 00:19:31.463 "nbd_device": "/dev/nbd2", 00:19:31.463 "bdev_name": "Nvme2n1" 00:19:31.463 }, 00:19:31.463 { 00:19:31.463 "nbd_device": "/dev/nbd3", 00:19:31.463 "bdev_name": "Nvme2n2" 00:19:31.463 }, 00:19:31.463 { 00:19:31.463 "nbd_device": "/dev/nbd4", 00:19:31.463 "bdev_name": "Nvme2n3" 00:19:31.463 }, 00:19:31.463 { 00:19:31.463 "nbd_device": "/dev/nbd5", 00:19:31.463 "bdev_name": "Nvme3n1" 00:19:31.463 } 00:19:31.463 ]' 00:19:31.463 20:17:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:19:31.721 20:17:26 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:19:31.721 20:17:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:31.721 20:17:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:19:31.721 20:17:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:31.721 20:17:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:19:31.721 20:17:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:31.721 20:17:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:19:31.978 20:17:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:31.978 20:17:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:31.978 20:17:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:31.978 20:17:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:31.978 20:17:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:31.978 20:17:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:31.978 20:17:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:31.978 20:17:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:31.978 20:17:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:31.978 20:17:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:19:31.978 20:17:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:19:31.978 20:17:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:19:31.978 20:17:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:19:31.978 20:17:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:31.978 20:17:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:31.978 20:17:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:19:31.978 20:17:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:31.978 20:17:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:31.978 20:17:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:31.978 20:17:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:19:32.235 20:17:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:19:32.235 20:17:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:19:32.235 20:17:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:19:32.235 20:17:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:32.235 20:17:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:32.235 20:17:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:19:32.235 20:17:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:32.235 20:17:27 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:19:32.235 20:17:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:32.235 20:17:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:19:32.492 20:17:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:19:32.492 20:17:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:19:32.492 20:17:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:19:32.492 20:17:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:32.492 20:17:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:32.492 20:17:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:19:32.492 20:17:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:32.492 20:17:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:32.492 20:17:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:32.492 20:17:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:19:32.750 20:17:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:19:32.750 20:17:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:19:32.750 20:17:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:19:32.750 20:17:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:32.750 20:17:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:32.750 20:17:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:19:32.750 20:17:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:32.750 20:17:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:32.750 20:17:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:32.750 20:17:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:19:33.008 20:17:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:19:33.008 20:17:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:19:33.008 20:17:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:19:33.008 20:17:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:33.008 20:17:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:33.008 20:17:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:19:33.008 20:17:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:33.008 20:17:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:33.008 20:17:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:19:33.008 20:17:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:33.008 20:17:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:19:33.265 20:17:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:19:33.265 20:17:28 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:19:33.265 20:17:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:19:33.265 20:17:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:19:33.265 20:17:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:19:33.266 20:17:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:19:33.266 20:17:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:19:33.266 20:17:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:19:33.266 20:17:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:19:33.266 20:17:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:19:33.266 20:17:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:19:33.266 20:17:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:19:33.266 20:17:28 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:19:33.266 20:17:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:33.266 20:17:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:19:33.266 20:17:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:19:33.266 20:17:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:19:33.266 20:17:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:19:33.266 20:17:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:19:33.266 20:17:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:33.266 20:17:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:19:33.266 20:17:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:33.266 20:17:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:19:33.266 20:17:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:33.266 20:17:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:19:33.266 20:17:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:33.266 20:17:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:19:33.266 20:17:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:19:33.524 /dev/nbd0 00:19:33.524 20:17:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:33.524 20:17:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:33.524 20:17:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:19:33.524 20:17:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:19:33.524 20:17:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:19:33.524 
20:17:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:19:33.524 20:17:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:19:33.524 20:17:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:19:33.524 20:17:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:19:33.524 20:17:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:19:33.524 20:17:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:33.524 1+0 records in 00:19:33.524 1+0 records out 00:19:33.524 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000336296 s, 12.2 MB/s 00:19:33.524 20:17:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:33.524 20:17:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:19:33.524 20:17:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:33.524 20:17:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:19:33.524 20:17:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:19:33.524 20:17:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:33.524 20:17:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:19:33.524 20:17:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:19:33.790 /dev/nbd1 00:19:33.790 20:17:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:19:33.790 20:17:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:19:33.790 20:17:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:19:33.790 20:17:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:19:33.790 20:17:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:19:33.790 20:17:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:19:33.790 20:17:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:19:33.790 20:17:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:19:33.790 20:17:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:19:33.790 20:17:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:19:33.790 20:17:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:33.790 1+0 records in 00:19:33.790 1+0 records out 00:19:33.790 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000275771 s, 14.9 MB/s 00:19:33.790 20:17:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:33.790 20:17:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:19:33.790 20:17:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:33.790 20:17:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:19:33.790 20:17:28 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@889 -- # return 0 00:19:33.790 20:17:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:33.790 20:17:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:19:33.790 20:17:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:19:34.087 /dev/nbd10 00:19:34.087 20:17:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:19:34.087 20:17:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:19:34.087 20:17:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd10 00:19:34.087 20:17:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:19:34.087 20:17:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:19:34.087 20:17:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:19:34.087 20:17:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd10 /proc/partitions 00:19:34.087 20:17:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:19:34.087 20:17:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:19:34.087 20:17:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:19:34.087 20:17:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:34.087 1+0 records in 00:19:34.087 1+0 records out 00:19:34.087 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000368789 s, 11.1 MB/s 00:19:34.087 20:17:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:34.087 20:17:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:19:34.087 20:17:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:34.087 20:17:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:19:34.087 20:17:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:19:34.087 20:17:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:34.087 20:17:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:19:34.087 20:17:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:19:34.087 /dev/nbd11 00:19:34.087 20:17:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:19:34.087 20:17:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:19:34.087 20:17:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd11 00:19:34.087 20:17:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:19:34.087 20:17:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:19:34.087 20:17:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:19:34.087 20:17:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd11 /proc/partitions 00:19:34.087 20:17:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:19:34.087 20:17:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:19:34.087 20:17:29 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:19:34.087 20:17:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:34.087 1+0 records in 00:19:34.087 1+0 records out 00:19:34.087 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000409692 s, 10.0 MB/s 00:19:34.087 20:17:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:34.087 20:17:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:19:34.087 20:17:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:34.087 20:17:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:19:34.087 20:17:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:19:34.087 20:17:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:34.087 20:17:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:19:34.087 20:17:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:19:34.344 /dev/nbd12 00:19:34.603 20:17:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:19:34.603 20:17:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:19:34.603 20:17:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd12 00:19:34.603 20:17:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:19:34.603 20:17:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:19:34.603 20:17:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:19:34.603 20:17:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd12 /proc/partitions 00:19:34.603 20:17:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:19:34.603 20:17:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:19:34.603 20:17:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:19:34.603 20:17:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:34.603 1+0 records in 00:19:34.603 1+0 records out 00:19:34.603 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000493458 s, 8.3 MB/s 00:19:34.603 20:17:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:34.603 20:17:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:19:34.603 20:17:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:34.603 20:17:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:19:34.603 20:17:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:19:34.603 20:17:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:34.603 20:17:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:19:34.603 20:17:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:19:34.603 /dev/nbd13 
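Every nbd_start_disk above is followed by the same gate: waitfornbd must see the device in /proc/partitions and then prove it readable before the test proceeds. Condensed into plain shell, the check amounts to the sketch below; the 20-try bound, the grep against /proc/partitions, the single 4 KiB direct read, and the stat/rm cleanup are all taken from the traces, while the function body itself is a paraphrase (the scratch-file path and the sleep between retries are illustrative assumptions, not the harness source).

    # Sketch of the waitfornbd readiness gate exercised by the traces above.
    waitfornbd() {
        local nbd_name=$1 i size
        # Poll until the kernel lists the device in /proc/partitions.
        for (( i = 1; i <= 20; i++ )); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1   # retry interval is an assumption, not from the trace
        done
        # Prove the device is actually readable: one 4 KiB O_DIRECT read.
        dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct || return 1
        size=$(stat -c %s /tmp/nbdtest)
        rm -f /tmp/nbdtest
        [ "$size" != 0 ]   # a zero-byte copy means the read silently failed
    }

The read matters because the /dev node can appear before the SPDK backend is fully wired up, so mere existence of the device is not a sufficient readiness signal.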
00:19:34.603 20:17:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:19:34.603 20:17:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:19:34.603 20:17:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd13 00:19:34.603 20:17:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:19:34.603 20:17:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:19:34.603 20:17:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:19:34.603 20:17:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd13 /proc/partitions 00:19:34.603 20:17:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:19:34.603 20:17:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:19:34.603 20:17:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:19:34.603 20:17:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:34.860 1+0 records in 00:19:34.860 1+0 records out 00:19:34.860 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000623553 s, 6.6 MB/s 00:19:34.860 20:17:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:34.860 20:17:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:19:34.860 20:17:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:34.860 20:17:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:19:34.860 20:17:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:19:34.860 20:17:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:34.860 20:17:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:19:34.860 20:17:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:19:34.860 20:17:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:34.860 20:17:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:19:34.860 20:17:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:19:34.860 { 00:19:34.860 "nbd_device": "/dev/nbd0", 00:19:34.860 "bdev_name": "Nvme0n1" 00:19:34.860 }, 00:19:34.860 { 00:19:34.860 "nbd_device": "/dev/nbd1", 00:19:34.860 "bdev_name": "Nvme1n1" 00:19:34.860 }, 00:19:34.860 { 00:19:34.860 "nbd_device": "/dev/nbd10", 00:19:34.860 "bdev_name": "Nvme2n1" 00:19:34.860 }, 00:19:34.860 { 00:19:34.860 "nbd_device": "/dev/nbd11", 00:19:34.860 "bdev_name": "Nvme2n2" 00:19:34.860 }, 00:19:34.860 { 00:19:34.860 "nbd_device": "/dev/nbd12", 00:19:34.860 "bdev_name": "Nvme2n3" 00:19:34.860 }, 00:19:34.860 { 00:19:34.860 "nbd_device": "/dev/nbd13", 00:19:34.860 "bdev_name": "Nvme3n1" 00:19:34.860 } 00:19:34.860 ]' 00:19:34.860 20:17:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:19:34.860 20:17:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:19:34.860 { 00:19:34.860 "nbd_device": "/dev/nbd0", 00:19:34.860 "bdev_name": "Nvme0n1" 00:19:34.860 }, 00:19:34.860 { 00:19:34.860 "nbd_device": "/dev/nbd1", 00:19:34.860 "bdev_name": "Nvme1n1" 00:19:34.860 
}, 00:19:34.860 { 00:19:34.860 "nbd_device": "/dev/nbd10", 00:19:34.860 "bdev_name": "Nvme2n1" 00:19:34.860 }, 00:19:34.860 { 00:19:34.860 "nbd_device": "/dev/nbd11", 00:19:34.860 "bdev_name": "Nvme2n2" 00:19:34.860 }, 00:19:34.860 { 00:19:34.860 "nbd_device": "/dev/nbd12", 00:19:34.860 "bdev_name": "Nvme2n3" 00:19:34.860 }, 00:19:34.860 { 00:19:34.860 "nbd_device": "/dev/nbd13", 00:19:34.860 "bdev_name": "Nvme3n1" 00:19:34.860 } 00:19:34.860 ]' 00:19:34.861 20:17:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:19:34.861 /dev/nbd1 00:19:34.861 /dev/nbd10 00:19:34.861 /dev/nbd11 00:19:34.861 /dev/nbd12 00:19:34.861 /dev/nbd13' 00:19:34.861 20:17:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:19:34.861 /dev/nbd1 00:19:34.861 /dev/nbd10 00:19:34.861 /dev/nbd11 00:19:34.861 /dev/nbd12 00:19:34.861 /dev/nbd13' 00:19:34.861 20:17:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:19:34.861 20:17:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:19:34.861 20:17:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:19:34.861 20:17:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:19:34.861 20:17:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:19:34.861 20:17:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:19:34.861 20:17:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:19:34.861 20:17:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:19:34.861 20:17:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:19:34.861 20:17:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:19:34.861 20:17:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:19:34.861 20:17:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:19:35.118 256+0 records in 00:19:35.118 256+0 records out 00:19:35.118 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00990376 s, 106 MB/s 00:19:35.118 20:17:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:19:35.118 20:17:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:19:35.118 256+0 records in 00:19:35.118 256+0 records out 00:19:35.118 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0638965 s, 16.4 MB/s 00:19:35.118 20:17:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:19:35.118 20:17:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:19:35.118 256+0 records in 00:19:35.118 256+0 records out 00:19:35.118 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0685108 s, 15.3 MB/s 00:19:35.118 20:17:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:19:35.118 20:17:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:19:35.118 256+0 records in 00:19:35.118 256+0 records out 00:19:35.118 
1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0742326 s, 14.1 MB/s 00:19:35.118 20:17:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:19:35.118 20:17:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:19:35.376 256+0 records in 00:19:35.376 256+0 records out 00:19:35.376 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0678553 s, 15.5 MB/s 00:19:35.376 20:17:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:19:35.376 20:17:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:19:35.376 256+0 records in 00:19:35.376 256+0 records out 00:19:35.376 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0671452 s, 15.6 MB/s 00:19:35.376 20:17:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:19:35.376 20:17:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:19:35.376 256+0 records in 00:19:35.376 256+0 records out 00:19:35.376 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0677989 s, 15.5 MB/s 00:19:35.376 20:17:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:19:35.376 20:17:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:19:35.376 20:17:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:19:35.376 20:17:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:19:35.376 20:17:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:19:35.376 20:17:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:19:35.376 20:17:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:19:35.376 20:17:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:19:35.376 20:17:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:19:35.376 20:17:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:19:35.377 20:17:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:19:35.377 20:17:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:19:35.377 20:17:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:19:35.377 20:17:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:19:35.377 20:17:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:19:35.377 20:17:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:19:35.377 20:17:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:19:35.377 20:17:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:19:35.377 20:17:30 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:19:35.377 20:17:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:19:35.377 20:17:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:19:35.377 20:17:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:35.377 20:17:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:19:35.377 20:17:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:35.377 20:17:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:19:35.377 20:17:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:35.377 20:17:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:19:35.634 20:17:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:35.634 20:17:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:35.634 20:17:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:35.634 20:17:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:35.634 20:17:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:35.634 20:17:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:35.634 20:17:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:35.634 20:17:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:35.634 20:17:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:35.634 20:17:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:19:35.891 20:17:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:19:35.891 20:17:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:19:35.891 20:17:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:19:35.891 20:17:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:35.891 20:17:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:35.891 20:17:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:19:35.891 20:17:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:35.891 20:17:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:35.891 20:17:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:35.891 20:17:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:19:36.148 20:17:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:19:36.148 20:17:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:19:36.148 20:17:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:19:36.148 20:17:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:36.148 
20:17:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:36.148 20:17:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:19:36.148 20:17:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:36.148 20:17:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:36.148 20:17:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:36.148 20:17:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:19:36.405 20:17:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:19:36.405 20:17:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:19:36.405 20:17:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:19:36.405 20:17:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:36.405 20:17:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:36.405 20:17:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:19:36.405 20:17:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:36.405 20:17:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:36.405 20:17:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:36.405 20:17:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:19:36.663 20:17:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:19:36.663 20:17:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:19:36.663 20:17:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:19:36.663 20:17:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:36.663 20:17:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:36.663 20:17:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:19:36.663 20:17:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:36.663 20:17:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:36.663 20:17:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:36.663 20:17:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:19:36.921 20:17:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:19:36.921 20:17:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:19:36.921 20:17:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:19:36.921 20:17:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:36.921 20:17:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:36.921 20:17:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:19:36.921 20:17:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:36.921 20:17:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:36.921 20:17:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:19:36.921 20:17:31 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:36.921 20:17:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:19:37.194 20:17:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:19:37.194 20:17:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:19:37.194 20:17:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:19:37.194 20:17:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:19:37.194 20:17:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:19:37.194 20:17:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:19:37.194 20:17:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:19:37.194 20:17:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:19:37.194 20:17:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:19:37.194 20:17:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:19:37.194 20:17:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:19:37.194 20:17:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:19:37.194 20:17:32 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:19:37.195 20:17:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:37.195 20:17:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:19:37.195 20:17:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:19:37.453 malloc_lvol_verify 00:19:37.453 20:17:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:19:37.712 15e55e01-ec1b-4909-9ff5-81b3d08147c1 00:19:37.712 20:17:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:19:37.712 dc84d810-0743-4b76-99c3-afeee55b5cf8 00:19:37.712 20:17:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:19:38.064 /dev/nbd0 00:19:38.064 20:17:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:19:38.064 20:17:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:19:38.064 20:17:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:19:38.064 20:17:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:19:38.064 20:17:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:19:38.064 mke2fs 1.47.0 (5-Feb-2023) 00:19:38.064 Discarding device blocks: 0/4096 done 00:19:38.064 Creating filesystem with 4096 1k blocks and 1024 inodes 00:19:38.064 00:19:38.064 Allocating group tables: 0/1 done 00:19:38.064 Writing inode tables: 0/1 done 00:19:38.064 Creating journal (1024 blocks): done 00:19:38.064 Writing superblocks and filesystem accounting information: 0/1 done 00:19:38.064 00:19:38.064 20:17:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 
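Stripped of the xtrace noise, the lvol round-trip traced in this stretch is a short RPC script. Every rpc.py subcommand below appears verbatim in the log; only the readiness check and error handling are paraphrased, and the $rpc shorthand is mine.

    # Condensed replay of nbd_with_lvol_verify as traced above.
    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"

    # 16 MiB malloc bdev with 512-byte blocks -> lvstore "lvs" -> 4 MiB lvol.
    $rpc bdev_malloc_create -b malloc_lvol_verify 16 512
    $rpc bdev_lvol_create_lvstore malloc_lvol_verify lvs
    $rpc bdev_lvol_create lvol 4 -l lvs

    # Expose the lvol as a kernel block device and wait for a non-zero size.
    $rpc nbd_start_disk lvs/lvol /dev/nbd0
    [[ -e /sys/block/nbd0/size && $(cat /sys/block/nbd0/size) -ne 0 ]]

    # Formatting the device end to end is the actual verification step.
    mkfs.ext4 /dev/nbd0
    $rpc nbd_stop_disk /dev/nbd0

Running mkfs.ext4 is a deliberately heavyweight check: it writes superblocks, group tables, inode tables, and a journal across the whole device, so a broken lvol-to-nbd path fails loudly here rather than in a later test.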
00:19:38.064 20:17:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:38.064 20:17:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:38.064 20:17:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:38.064 20:17:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:19:38.064 20:17:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:38.064 20:17:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:19:38.322 20:17:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:38.322 20:17:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:38.322 20:17:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:38.322 20:17:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:38.322 20:17:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:38.322 20:17:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:38.322 20:17:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:38.322 20:17:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:38.322 20:17:33 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 60548 00:19:38.322 20:17:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 60548 ']' 00:19:38.322 20:17:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 60548 00:19:38.322 20:17:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:19:38.322 20:17:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:38.322 20:17:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60548 00:19:38.322 20:17:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:38.322 20:17:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:38.322 killing process with pid 60548 00:19:38.322 20:17:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60548' 00:19:38.322 20:17:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@969 -- # kill 60548 00:19:38.322 20:17:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@974 -- # wait 60548 00:19:39.696 20:17:34 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:19:39.696 00:19:39.696 real 0m10.792s 00:19:39.696 user 0m15.226s 00:19:39.696 sys 0m3.075s 00:19:39.696 20:17:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:39.696 20:17:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:19:39.696 ************************************ 00:19:39.696 END TEST bdev_nbd 00:19:39.696 ************************************ 00:19:39.696 20:17:34 blockdev_nvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:19:39.696 20:17:34 blockdev_nvme -- bdev/blockdev.sh@763 -- # '[' nvme = nvme ']' 00:19:39.696 skipping fio tests on NVMe due to multi-ns failures. 00:19:39.696 20:17:34 blockdev_nvme -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
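Teardown is equally scripted: killprocess refuses to signal anything it cannot positively identify, which is why the trace above resolves the process name (reactor_0) before sending the signal. In outline, with the ps/kill/wait calls mirroring the trace and the surrounding control flow paraphrased:

    # Paraphrase of the killprocess helper traced above; pid 60548 was the
    # NBD test's SPDK app, running as reactor_0.
    killprocess() {
        local pid=$1 process_name
        [ -n "$pid" ] || return 1
        kill -0 "$pid" || return 1   # bail out if the pid is already gone
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        [ "$process_name" = sudo ] && return 1   # never signal a sudo wrapper
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"   # reap the child so the next test starts clean
    }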
00:19:39.696 20:17:34 blockdev_nvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT
00:19:39.696 20:17:34 blockdev_nvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
00:19:39.696 20:17:34 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']'
00:19:39.696 20:17:34 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable
00:19:39.696 20:17:34 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:19:39.696 ************************************
00:19:39.696 START TEST bdev_verify
00:19:39.696 ************************************
00:19:39.696 20:17:34 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
00:19:39.696 [2024-10-01 20:17:34.796725] Starting SPDK v25.01-pre git sha1 0c2005fb5 / DPDK 24.03.0 initialization...
00:19:39.954 [2024-10-01 20:17:34.796859] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60927 ]
00:19:39.954 [2024-10-01 20:17:34.947618] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2
00:19:39.954 [2024-10-01 20:17:35.151097] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1
00:19:39.954 [2024-10-01 20:17:35.151104] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:19:40.888 Running I/O for 5 seconds...
00:19:45.993 22272.00 IOPS, 87.00 MiB/s 23424.00 IOPS, 91.50 MiB/s 23317.33 IOPS, 91.08 MiB/s 23648.00 IOPS, 92.38 MiB/s 23564.80 IOPS, 92.05 MiB/s
00:19:45.993 Latency(us)
00:19:45.993 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:45.993 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:19:45.993 Verification LBA range: start 0x0 length 0xbd0bd
00:19:45.993 Nvme0n1 : 5.06 1949.23 7.61 0.00 0.00 65458.46 12703.90 72190.42
00:19:45.993 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:19:45.993 Verification LBA range: start 0xbd0bd length 0xbd0bd
00:19:45.993 Nvme0n1 : 5.04 1929.87 7.54 0.00 0.00 66141.34 11544.42 71787.13
00:19:45.993 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:19:45.993 Verification LBA range: start 0x0 length 0xa0000
00:19:45.993 Nvme1n1 : 5.06 1948.02 7.61 0.00 0.00 65340.17 14619.57 65334.35
00:19:45.993 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:19:45.993 Verification LBA range: start 0xa0000 length 0xa0000
00:19:45.993 Nvme1n1 : 5.04 1929.33 7.54 0.00 0.00 66042.96 14216.27 63317.86
00:19:45.993 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:19:45.993 Verification LBA range: start 0x0 length 0x80000
00:19:45.993 Nvme2n1 : 5.06 1947.48 7.61 0.00 0.00 65220.03 14115.45 60898.07
00:19:45.993 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:19:45.993 Verification LBA range: start 0x80000 length 0x80000
00:19:45.993 Nvme2n1 : 5.04 1928.81 7.53 0.00 0.00 65921.11 13611.32 59284.87
00:19:45.993 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:19:45.993 Verification LBA range: start 0x0 length 0x80000
00:19:45.993 Nvme2n2 : 5.06 1946.95 7.61 0.00 0.00 65098.06 13510.50 62107.96
00:19:45.993 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:19:45.993 Verification LBA range: start 0x80000 length 0x80000
00:19:45.993 Nvme2n2 : 5.06 1934.12 7.56 0.00 0.00 65560.31 5167.26 58074.98
00:19:45.993 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:19:45.993 Verification LBA range: start 0x0 length 0x80000
00:19:45.993 Nvme2n3 : 5.08 1953.56 7.63 0.00 0.00 64746.78 4965.61 65334.35
00:19:45.993 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:19:45.993 Verification LBA range: start 0x80000 length 0x80000
00:19:45.993 Nvme2n3 : 5.07 1942.12 7.59 0.00 0.00 65255.03 8318.03 62914.56
00:19:45.993 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:19:45.993 Verification LBA range: start 0x0 length 0x20000
00:19:45.993 Nvme3n1 : 5.09 1961.36 7.66 0.00 0.00 64438.46 7561.85 67754.14
00:19:45.993 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:19:45.993 Verification LBA range: start 0x20000 length 0x20000
00:19:45.993 Nvme3n1 : 5.08 1941.59 7.58 0.00 0.00 65138.95 8670.92 66947.54
00:19:45.993 ===================================================================================================================
00:19:45.993 Total : 23312.44 91.06 0.00 0.00 65359.84 4965.61 72190.42
00:19:48.518
00:19:48.518 real 0m8.577s
00:19:48.518 user 0m15.924s
00:19:48.518 sys 0m0.304s
00:19:48.518 20:17:43 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable
00:19:48.518 20:17:43 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x
00:19:48.518 ************************************
00:19:48.518 END TEST bdev_verify
00:19:48.518 ************************************
00:19:48.518 20:17:43 blockdev_nvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:19:48.518 20:17:43 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']'
00:19:48.518 20:17:43 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable
00:19:48.518 20:17:43 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:19:48.518 ************************************
00:19:48.518 START TEST bdev_verify_big_io
00:19:48.518 ************************************
00:19:48.518 20:17:43 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:19:48.518 [2024-10-01 20:17:43.415036] Starting SPDK v25.01-pre git sha1 0c2005fb5 / DPDK 24.03.0 initialization...
00:19:48.518 [2024-10-01 20:17:43.415158] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61046 ]
00:19:48.518 [2024-10-01 20:17:43.561593] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2
00:19:48.518 [2024-10-01 20:17:43.720900] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1
00:19:48.518 [2024-10-01 20:17:43.720990] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:19:49.452 Running I/O for 5 seconds...
00:19:48.518 20:17:43 blockdev_nvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:19:48.518 20:17:43 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']'
00:19:48.518 20:17:43 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable
00:19:48.518 20:17:43 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:19:48.518 ************************************
00:19:48.518 START TEST bdev_verify_big_io
00:19:48.518 ************************************
00:19:48.518 20:17:43 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:19:48.518 [2024-10-01 20:17:43.415036] Starting SPDK v25.01-pre git sha1 0c2005fb5 / DPDK 24.03.0 initialization...
00:19:48.518 [2024-10-01 20:17:43.415158] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61046 ]
00:19:48.518 [2024-10-01 20:17:43.561593] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2
00:19:48.518 [2024-10-01 20:17:43.720900] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1
00:19:48.518 [2024-10-01 20:17:43.720990] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:19:49.452 Running I/O for 5 seconds...
00:19:55.383 1079.00 IOPS, 67.44 MiB/s 1698.00 IOPS, 106.12 MiB/s 1463.00 IOPS, 91.44 MiB/s 1927.75 IOPS, 120.48 MiB/s
00:19:55.383 Latency(us)
00:19:55.383 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:55.383 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:19:55.383 Verification LBA range: start 0x0 length 0xbd0b
00:19:55.383 Nvme0n1 : 5.66 130.20 8.14 0.00 0.00 949003.89 21878.94 935652.43
00:19:55.383 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:19:55.383 Verification LBA range: start 0xbd0b length 0xbd0b
00:19:55.383 Nvme0n1 : 5.54 132.77 8.30 0.00 0.00 928850.33 10989.88 1019538.51
00:19:55.383 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:19:55.383 Verification LBA range: start 0x0 length 0xa000
00:19:55.383 Nvme1n1 : 5.66 131.23 8.20 0.00 0.00 918495.01 38918.30 884030.23
00:19:55.383 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:19:55.383 Verification LBA range: start 0xa000 length 0xa000
00:19:55.383 Nvme1n1 : 5.66 126.15 7.88 0.00 0.00 947243.59 98001.53 1626099.40
00:19:55.383 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:19:55.383 Verification LBA range: start 0x0 length 0x8000
00:19:55.383 Nvme2n1 : 5.66 131.03 8.19 0.00 0.00 894253.19 38918.30 909841.33
00:19:55.383 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:19:55.383 Verification LBA range: start 0x8000 length 0x8000
00:19:55.384 Nvme2n1 : 5.74 130.28 8.14 0.00 0.00 891618.22 77836.60 1664816.05
00:19:55.384 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:19:55.384 Verification LBA range: start 0x0 length 0x8000
00:19:55.384 Nvme2n2 : 5.66 135.57 8.47 0.00 0.00 848692.78 74610.22 929199.66
00:19:55.384 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:19:55.384 Verification LBA range: start 0x8000 length 0x8000
00:19:55.384 Nvme2n2 : 5.78 135.55 8.47 0.00 0.00 834746.02 39523.25 1690627.15
00:19:55.384 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:19:55.384 Verification LBA range: start 0x0 length 0x8000
00:19:55.384 Nvme2n3 : 5.77 150.51 9.41 0.00 0.00 749913.16 7612.26 948557.98
00:19:55.384 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:19:55.384 Verification LBA range: start 0x8000 length 0x8000
00:19:55.384 Nvme2n3 : 5.82 145.43 9.09 0.00 0.00 755560.98 17644.31 1742249.35
00:19:55.384 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:19:55.384 Verification LBA range: start 0x0 length 0x2000
00:19:55.384 Nvme3n1 : 5.77 155.31 9.71 0.00 0.00 706510.28 1764.43 967916.31
00:19:55.384 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:19:55.384 Verification LBA range: start 0x2000 length 0x2000
00:19:55.384 Nvme3n1 : 5.91 199.40 12.46 0.00 0.00 539189.54 409.60 1290555.08
===================================================================================================================
00:19:55.384 Total : 1703.41 106.46 0.00 0.00 813168.32 409.60 1742249.35
00:19:58.667
00:19:58.667 real 0m9.943s
00:19:58.667 user 0m18.699s
00:19:58.667 sys 0m0.318s
00:19:58.667 20:17:53 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable
00:19:58.667 20:17:53 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x
00:19:58.667 ************************************
00:19:58.667 END TEST bdev_verify_big_io
00:19:58.667 ************************************
00:19:58.667 20:17:53 blockdev_nvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:19:58.667 20:17:53 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']'
00:19:58.667 20:17:53 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable
00:19:58.667 20:17:53 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:19:58.667 ************************************
00:19:58.667 START TEST bdev_write_zeroes
00:19:58.667 ************************************
00:19:58.667 20:17:53 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:19:58.667 [2024-10-01 20:17:53.397505] Starting SPDK v25.01-pre git sha1 0c2005fb5 / DPDK 24.03.0 initialization...
00:19:58.667 [2024-10-01 20:17:53.397633] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61167 ]
00:19:58.667 [2024-10-01 20:17:53.540007] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:58.667 [2024-10-01 20:17:53.727748] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:19:59.620 Running I/O for 1 seconds...
00:20:00.576 64896.00 IOPS, 253.50 MiB/s
00:20:00.576
00:20:00.576 Latency(us)
00:20:00.576 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:00.576 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:20:00.576 Nvme0n1 : 1.02 10777.64 42.10 0.00 0.00 11848.55 7309.78 23492.14
00:20:00.576 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:20:00.576 Nvme1n1 : 1.02 10764.81 42.05 0.00 0.00 11848.22 8469.27 22988.01
00:20:00.576 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:20:00.576 Nvme2n1 : 1.02 10752.25 42.00 0.00 0.00 11836.51 7662.67 22282.24
00:20:00.576 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:20:00.576 Nvme2n2 : 1.02 10739.67 41.95 0.00 0.00 11829.43 7612.26 21778.12
00:20:00.576 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:20:00.576 Nvme2n3 : 1.03 10727.16 41.90 0.00 0.00 11825.03 7158.55 21677.29
00:20:00.576 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:20:00.576 Nvme3n1 : 1.03 10714.40 41.85 0.00 0.00 11820.60 7057.72 23693.78
===================================================================================================================
00:20:00.576 Total : 64475.93 251.86 0.00 0.00 11834.72 7057.72 23693.78
00:20:01.950
00:20:01.950 real 0m3.475s
00:20:01.950 user 0m3.081s
00:20:01.950 sys 0m0.262s
00:20:01.950 20:17:56 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable
00:20:01.950 20:17:56 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:20:01.950 ************************************
00:20:01.950 END TEST bdev_write_zeroes
00:20:01.950 ************************************
00:20:01.950 20:17:56 blockdev_nvme -- bdev/blockdev.sh@781 -- # run_test
bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:01.950 20:17:56 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:20:01.950 20:17:56 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:01.950 20:17:56 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:20:01.950 ************************************ 00:20:01.950 START TEST bdev_json_nonenclosed 00:20:01.950 ************************************ 00:20:01.950 20:17:56 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:01.950 [2024-10-01 20:17:56.913137] Starting SPDK v25.01-pre git sha1 0c2005fb5 / DPDK 24.03.0 initialization... 00:20:01.950 [2024-10-01 20:17:56.913266] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61220 ] 00:20:01.950 [2024-10-01 20:17:57.062815] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:02.207 [2024-10-01 20:17:57.251547] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:20:02.207 [2024-10-01 20:17:57.251636] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:20:02.207 [2024-10-01 20:17:57.251653] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:20:02.207 [2024-10-01 20:17:57.251662] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:20:02.470 00:20:02.470 real 0m0.697s 00:20:02.470 user 0m0.490s 00:20:02.470 sys 0m0.101s 00:20:02.470 20:17:57 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:02.470 20:17:57 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:20:02.470 ************************************ 00:20:02.470 END TEST bdev_json_nonenclosed 00:20:02.470 ************************************ 00:20:02.470 20:17:57 blockdev_nvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:02.470 20:17:57 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:20:02.470 20:17:57 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:02.470 20:17:57 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:20:02.470 ************************************ 00:20:02.470 START TEST bdev_json_nonarray 00:20:02.470 ************************************ 00:20:02.470 20:17:57 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:02.470 [2024-10-01 20:17:57.647271] Starting SPDK v25.01-pre git sha1 0c2005fb5 / DPDK 24.03.0 initialization... 
00:20:02.470 [2024-10-01 20:17:57.647396] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61251 ] 00:20:02.732 [2024-10-01 20:17:57.798725] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:02.990 [2024-10-01 20:17:57.987426] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:20:02.990 [2024-10-01 20:17:57.987520] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:20:02.990 [2024-10-01 20:17:57.987537] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:20:02.990 [2024-10-01 20:17:57.987546] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:20:03.251 00:20:03.251 real 0m0.692s 00:20:03.251 user 0m0.493s 00:20:03.251 sys 0m0.094s 00:20:03.251 20:17:58 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:03.251 20:17:58 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:20:03.251 ************************************ 00:20:03.251 END TEST bdev_json_nonarray 00:20:03.251 ************************************ 00:20:03.251 20:17:58 blockdev_nvme -- bdev/blockdev.sh@786 -- # [[ nvme == bdev ]] 00:20:03.251 20:17:58 blockdev_nvme -- bdev/blockdev.sh@793 -- # [[ nvme == gpt ]] 00:20:03.251 20:17:58 blockdev_nvme -- bdev/blockdev.sh@797 -- # [[ nvme == crypto_sw ]] 00:20:03.251 20:17:58 blockdev_nvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:20:03.251 20:17:58 blockdev_nvme -- bdev/blockdev.sh@810 -- # cleanup 00:20:03.251 20:17:58 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:20:03.251 20:17:58 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:20:03.251 20:17:58 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:20:03.251 20:17:58 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:20:03.251 20:17:58 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:20:03.251 20:17:58 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:20:03.251 00:20:03.251 real 0m44.517s 00:20:03.251 user 1m9.465s 00:20:03.251 sys 0m5.572s 00:20:03.251 20:17:58 blockdev_nvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:03.251 ************************************ 00:20:03.251 END TEST blockdev_nvme 00:20:03.251 20:17:58 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:20:03.251 ************************************ 00:20:03.251 20:17:58 -- spdk/autotest.sh@209 -- # uname -s 00:20:03.251 20:17:58 -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]] 00:20:03.251 20:17:58 -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:20:03.251 20:17:58 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:20:03.251 20:17:58 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:03.251 20:17:58 -- common/autotest_common.sh@10 -- # set +x 00:20:03.251 ************************************ 00:20:03.251 START TEST blockdev_nvme_gpt 00:20:03.251 ************************************ 00:20:03.251 20:17:58 blockdev_nvme_gpt -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:20:03.251 * Looking for test storage... 
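Both json_config checks above pass by failing: bdevperf is pointed at nonenclosed.json and then nonarray.json, json_config rejects each one ("not enclosed in {}", "'subsystems' should be an array"), and the non-zero spdk_app_stop is exactly what run_test asserts. For contrast, a minimal well-formed SPDK config has roughly this shape; the method and params shown are the ones this job itself uses for its bdev.json:

    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": { "trtype": "PCIe", "name": "Nvme0", "traddr": "0000:00:10.0" }
            }
          ]
        }
      ]
    }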
00:20:03.251 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:20:03.251 20:17:58 blockdev_nvme_gpt -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:20:03.251 20:17:58 blockdev_nvme_gpt -- common/autotest_common.sh@1681 -- # lcov --version 00:20:03.251 20:17:58 blockdev_nvme_gpt -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:20:03.510 20:17:58 blockdev_nvme_gpt -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:20:03.510 20:17:58 blockdev_nvme_gpt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:03.510 20:17:58 blockdev_nvme_gpt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:03.510 20:17:58 blockdev_nvme_gpt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:03.510 20:17:58 blockdev_nvme_gpt -- scripts/common.sh@336 -- # IFS=.-: 00:20:03.510 20:17:58 blockdev_nvme_gpt -- scripts/common.sh@336 -- # read -ra ver1 00:20:03.510 20:17:58 blockdev_nvme_gpt -- scripts/common.sh@337 -- # IFS=.-: 00:20:03.510 20:17:58 blockdev_nvme_gpt -- scripts/common.sh@337 -- # read -ra ver2 00:20:03.510 20:17:58 blockdev_nvme_gpt -- scripts/common.sh@338 -- # local 'op=<' 00:20:03.510 20:17:58 blockdev_nvme_gpt -- scripts/common.sh@340 -- # ver1_l=2 00:20:03.510 20:17:58 blockdev_nvme_gpt -- scripts/common.sh@341 -- # ver2_l=1 00:20:03.510 20:17:58 blockdev_nvme_gpt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:03.510 20:17:58 blockdev_nvme_gpt -- scripts/common.sh@344 -- # case "$op" in 00:20:03.510 20:17:58 blockdev_nvme_gpt -- scripts/common.sh@345 -- # : 1 00:20:03.510 20:17:58 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:03.510 20:17:58 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:03.510 20:17:58 blockdev_nvme_gpt -- scripts/common.sh@365 -- # decimal 1 00:20:03.510 20:17:58 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=1 00:20:03.510 20:17:58 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:03.510 20:17:58 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 1 00:20:03.510 20:17:58 blockdev_nvme_gpt -- scripts/common.sh@365 -- # ver1[v]=1 00:20:03.510 20:17:58 blockdev_nvme_gpt -- scripts/common.sh@366 -- # decimal 2 00:20:03.510 20:17:58 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=2 00:20:03.510 20:17:58 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:03.510 20:17:58 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 2 00:20:03.510 20:17:58 blockdev_nvme_gpt -- scripts/common.sh@366 -- # ver2[v]=2 00:20:03.510 20:17:58 blockdev_nvme_gpt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:03.510 20:17:58 blockdev_nvme_gpt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:03.510 20:17:58 blockdev_nvme_gpt -- scripts/common.sh@368 -- # return 0 00:20:03.510 20:17:58 blockdev_nvme_gpt -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:03.510 20:17:58 blockdev_nvme_gpt -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:20:03.510 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:03.510 --rc genhtml_branch_coverage=1 00:20:03.510 --rc genhtml_function_coverage=1 00:20:03.510 --rc genhtml_legend=1 00:20:03.510 --rc geninfo_all_blocks=1 00:20:03.510 --rc geninfo_unexecuted_blocks=1 00:20:03.510 00:20:03.510 ' 00:20:03.510 20:17:58 blockdev_nvme_gpt -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:20:03.510 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:03.510 --rc 
genhtml_branch_coverage=1 00:20:03.510 --rc genhtml_function_coverage=1 00:20:03.510 --rc genhtml_legend=1 00:20:03.510 --rc geninfo_all_blocks=1 00:20:03.510 --rc geninfo_unexecuted_blocks=1 00:20:03.510 00:20:03.510 ' 00:20:03.510 20:17:58 blockdev_nvme_gpt -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:20:03.510 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:03.510 --rc genhtml_branch_coverage=1 00:20:03.510 --rc genhtml_function_coverage=1 00:20:03.510 --rc genhtml_legend=1 00:20:03.510 --rc geninfo_all_blocks=1 00:20:03.510 --rc geninfo_unexecuted_blocks=1 00:20:03.510 00:20:03.510 ' 00:20:03.510 20:17:58 blockdev_nvme_gpt -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:20:03.510 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:03.510 --rc genhtml_branch_coverage=1 00:20:03.510 --rc genhtml_function_coverage=1 00:20:03.510 --rc genhtml_legend=1 00:20:03.510 --rc geninfo_all_blocks=1 00:20:03.510 --rc geninfo_unexecuted_blocks=1 00:20:03.510 00:20:03.510 ' 00:20:03.510 20:17:58 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:20:03.510 20:17:58 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:20:03.510 20:17:58 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:20:03.510 20:17:58 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:20:03.510 20:17:58 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:20:03.510 20:17:58 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:20:03.511 20:17:58 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:20:03.511 20:17:58 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:20:03.511 20:17:58 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:20:03.511 20:17:58 blockdev_nvme_gpt -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:20:03.511 20:17:58 blockdev_nvme_gpt -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:20:03.511 20:17:58 blockdev_nvme_gpt -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:20:03.511 20:17:58 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # uname -s 00:20:03.511 20:17:58 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:20:03.511 20:17:58 blockdev_nvme_gpt -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:20:03.511 20:17:58 blockdev_nvme_gpt -- bdev/blockdev.sh@681 -- # test_type=gpt 00:20:03.511 20:17:58 blockdev_nvme_gpt -- bdev/blockdev.sh@682 -- # crypto_device= 00:20:03.511 20:17:58 blockdev_nvme_gpt -- bdev/blockdev.sh@683 -- # dek= 00:20:03.511 20:17:58 blockdev_nvme_gpt -- bdev/blockdev.sh@684 -- # env_ctx= 00:20:03.511 20:17:58 blockdev_nvme_gpt -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:20:03.511 20:17:58 blockdev_nvme_gpt -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:20:03.511 20:17:58 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == bdev ]] 00:20:03.511 20:17:58 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == crypto_* ]] 00:20:03.511 20:17:58 blockdev_nvme_gpt -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:20:03.511 20:17:58 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=61335 00:20:03.511 20:17:58 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:20:03.511 20:17:58 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 61335 
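From here the gpt suite drives a persistent spdk_tgt rather than one-shot example binaries: start_spdk_tgt launches the target (pid 61335 below) and waitforlisten blocks until the JSON-RPC server answers on /var/tmp/spdk.sock, after which every rpc_cmd in this log is a round trip over that socket. A rough hand-rolled equivalent of the wait loop, assuming the default socket path, would be:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt &
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1   # poll until the RPC server accepts connections
    done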
00:20:03.511 20:17:58 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:20:03.511 20:17:58 blockdev_nvme_gpt -- common/autotest_common.sh@831 -- # '[' -z 61335 ']' 00:20:03.511 20:17:58 blockdev_nvme_gpt -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:03.511 20:17:58 blockdev_nvme_gpt -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:03.511 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:03.511 20:17:58 blockdev_nvme_gpt -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:03.511 20:17:58 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:03.511 20:17:58 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:20:03.511 [2024-10-01 20:17:58.599826] Starting SPDK v25.01-pre git sha1 0c2005fb5 / DPDK 24.03.0 initialization... 00:20:03.511 [2024-10-01 20:17:58.599958] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61335 ] 00:20:03.770 [2024-10-01 20:17:58.750755] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:03.770 [2024-10-01 20:17:58.945827] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:20:04.701 20:17:59 blockdev_nvme_gpt -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:04.701 20:17:59 blockdev_nvme_gpt -- common/autotest_common.sh@864 -- # return 0 00:20:04.701 20:17:59 blockdev_nvme_gpt -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:20:04.701 20:17:59 blockdev_nvme_gpt -- bdev/blockdev.sh@701 -- # setup_gpt_conf 00:20:04.701 20:17:59 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:20:04.958 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:05.224 Waiting for block devices as requested 00:20:05.224 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:20:05.224 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:20:05.224 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:20:05.224 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:20:10.506 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:20:10.506 20:18:05 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:20:10.506 20:18:05 blockdev_nvme_gpt -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:20:10.506 20:18:05 blockdev_nvme_gpt -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:20:10.506 20:18:05 blockdev_nvme_gpt -- common/autotest_common.sh@1656 -- # local nvme bdf 00:20:10.506 20:18:05 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:20:10.506 20:18:05 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:20:10.506 20:18:05 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:20:10.506 20:18:05 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:20:10.506 20:18:05 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:20:10.506 20:18:05 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:20:10.506 20:18:05 
blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:20:10.506 20:18:05 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:20:10.506 20:18:05 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:20:10.506 20:18:05 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:20:10.506 20:18:05 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:20:10.506 20:18:05 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n1 00:20:10.506 20:18:05 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:20:10.506 20:18:05 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:20:10.506 20:18:05 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:20:10.506 20:18:05 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:20:10.506 20:18:05 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n2 00:20:10.506 20:18:05 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme2n2 00:20:10.506 20:18:05 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:20:10.506 20:18:05 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:20:10.506 20:18:05 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:20:10.506 20:18:05 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n3 00:20:10.506 20:18:05 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme2n3 00:20:10.506 20:18:05 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:20:10.506 20:18:05 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:20:10.506 20:18:05 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:20:10.506 20:18:05 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3c3n1 00:20:10.506 20:18:05 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme3c3n1 00:20:10.506 20:18:05 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:20:10.506 20:18:05 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:20:10.506 20:18:05 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:20:10.506 20:18:05 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n1 00:20:10.506 20:18:05 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme3n1 00:20:10.506 20:18:05 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:20:10.506 20:18:05 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:20:10.506 20:18:05 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1') 00:20:10.506 20:18:05 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:20:10.506 20:18:05 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:20:10.506 20:18:05 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 00:20:10.506 20:18:05 
blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:20:10.506 20:18:05 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:20:10.506 20:18:05 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:20:10.506 20:18:05 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:20:10.506 BYT; 00:20:10.506 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:20:10.506 20:18:05 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:20:10.506 BYT; 00:20:10.506 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:20:10.506 20:18:05 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:20:10.506 20:18:05 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:20:10.506 20:18:05 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 00:20:10.506 20:18:05 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:20:10.506 20:18:05 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:20:10.506 20:18:05 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:20:11.072 20:18:05 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:20:11.072 20:18:05 blockdev_nvme_gpt -- scripts/common.sh@411 -- # local spdk_guid 00:20:11.072 20:18:05 blockdev_nvme_gpt -- scripts/common.sh@413 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:20:11.072 20:18:05 blockdev_nvme_gpt -- scripts/common.sh@415 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:20:11.072 20:18:05 blockdev_nvme_gpt -- scripts/common.sh@416 -- # IFS='()' 00:20:11.072 20:18:05 blockdev_nvme_gpt -- scripts/common.sh@416 -- # read -r _ spdk_guid _ 00:20:11.072 20:18:05 blockdev_nvme_gpt -- scripts/common.sh@416 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:20:11.072 20:18:05 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:20:11.072 20:18:05 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:20:11.072 20:18:05 blockdev_nvme_gpt -- scripts/common.sh@419 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:20:11.072 20:18:05 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:20:11.072 20:18:05 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:20:11.072 20:18:05 blockdev_nvme_gpt -- scripts/common.sh@423 -- # local spdk_guid 00:20:11.072 20:18:05 blockdev_nvme_gpt -- scripts/common.sh@425 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:20:11.072 20:18:05 blockdev_nvme_gpt -- scripts/common.sh@427 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:20:11.072 20:18:05 blockdev_nvme_gpt -- scripts/common.sh@428 -- # IFS='()' 00:20:11.072 20:18:05 blockdev_nvme_gpt -- scripts/common.sh@428 -- # read -r _ spdk_guid _ 00:20:11.072 20:18:05 blockdev_nvme_gpt -- scripts/common.sh@428 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:20:11.072 20:18:05 blockdev_nvme_gpt -- scripts/common.sh@429 -- # 
spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:20:11.072 20:18:05 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:20:11.072 20:18:05 blockdev_nvme_gpt -- scripts/common.sh@431 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:20:11.072 20:18:05 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:20:11.072 20:18:05 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:20:12.970 The operation has completed successfully. 00:20:12.970 20:18:08 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:20:14.866 The operation has completed successfully. 00:20:14.866 20:18:09 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:20:15.123 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:15.689 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:20:15.689 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:20:15.689 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:20:15.689 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:20:15.689 20:18:10 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:20:15.689 20:18:10 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.689 20:18:10 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:20:15.689 [] 00:20:15.689 20:18:10 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.689 20:18:10 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:20:15.689 20:18:10 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:20:15.689 20:18:10 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:20:15.689 20:18:10 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:20:15.689 20:18:10 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:20:15.689 20:18:10 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.689 20:18:10 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:20:15.947 20:18:11 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.947 20:18:11 blockdev_nvme_gpt -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:20:15.947 20:18:11 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.947 20:18:11 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:20:15.947 20:18:11 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.947 20:18:11 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # cat 00:20:15.947 20:18:11 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:20:15.947 20:18:11 
blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.947 20:18:11 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:20:15.947 20:18:11 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.947 20:18:11 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:20:15.947 20:18:11 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.947 20:18:11 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:20:15.947 20:18:11 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.947 20:18:11 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:20:15.947 20:18:11 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.947 20:18:11 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:20:15.947 20:18:11 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.207 20:18:11 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:20:16.207 20:18:11 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:20:16.208 20:18:11 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:20:16.208 20:18:11 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.208 20:18:11 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:20:16.208 20:18:11 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.208 20:18:11 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:20:16.208 20:18:11 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # jq -r .name 00:20:16.208 20:18:11 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "1ded34a5-caa2-4031-ae4f-7996995ca4d3"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "1ded34a5-caa2-4031-ae4f-7996995ca4d3",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' 
"num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme1n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "ff98f1e7-c401-4c1c-8d67-77b1f0eec839"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "ff98f1e7-c401-4c1c-8d67-77b1f0eec839",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' 
"nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "454973b3-3083-4890-bbe6-cd41edf81fc5"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "454973b3-3083-4890-bbe6-cd41edf81fc5",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "f2941e4e-e929-4c28-b62c-bc8534365df5"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "f2941e4e-e929-4c28-b62c-bc8534365df5",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "53aedd79-f37b-4d95-a774-79eb68ca089f"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "53aedd79-f37b-4d95-a774-79eb68ca089f",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:20:16.208 20:18:11 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:20:16.208 20:18:11 blockdev_nvme_gpt -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:20:16.209 20:18:11 blockdev_nvme_gpt -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:20:16.209 20:18:11 blockdev_nvme_gpt -- bdev/blockdev.sh@753 -- # killprocess 61335 00:20:16.209 20:18:11 blockdev_nvme_gpt -- common/autotest_common.sh@950 -- # '[' -z 61335 ']' 00:20:16.209 20:18:11 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # kill -0 61335 00:20:16.209 20:18:11 blockdev_nvme_gpt -- common/autotest_common.sh@955 -- # uname 00:20:16.209 20:18:11 blockdev_nvme_gpt -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:16.209 20:18:11 blockdev_nvme_gpt -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61335 00:20:16.209 20:18:11 blockdev_nvme_gpt -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:16.209 20:18:11 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:16.209 20:18:11 blockdev_nvme_gpt -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61335' 00:20:16.209 killing process with pid 61335 00:20:16.209 20:18:11 blockdev_nvme_gpt -- common/autotest_common.sh@969 -- # kill 61335 00:20:16.209 20:18:11 blockdev_nvme_gpt -- common/autotest_common.sh@974 -- # wait 61335 00:20:18.106 20:18:12 blockdev_nvme_gpt -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:20:18.106 20:18:12 blockdev_nvme_gpt -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:20:18.106 20:18:12 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:20:18.107 20:18:12 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:18.107 20:18:12 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:20:18.107 ************************************ 00:20:18.107 START TEST bdev_hello_world 00:20:18.107 ************************************ 00:20:18.107 20:18:12 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:20:18.107 
[2024-10-01 20:18:12.992968] Starting SPDK v25.01-pre git sha1 0c2005fb5 / DPDK 24.03.0 initialization... 00:20:18.107 [2024-10-01 20:18:12.993099] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61981 ] 00:20:18.107 [2024-10-01 20:18:13.138818] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:18.364 [2024-10-01 20:18:13.369784] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:20:18.929 [2024-10-01 20:18:14.025511] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:20:18.929 [2024-10-01 20:18:14.025560] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:20:18.929 [2024-10-01 20:18:14.025579] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:20:18.929 [2024-10-01 20:18:14.027676] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:20:18.929 [2024-10-01 20:18:14.028141] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:20:18.929 [2024-10-01 20:18:14.028165] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:20:18.929 [2024-10-01 20:18:14.028325] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:20:18.929 00:20:18.929 [2024-10-01 20:18:14.028355] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:20:19.878 00:20:19.878 real 0m2.094s 00:20:19.878 user 0m1.744s 00:20:19.878 sys 0m0.234s 00:20:19.878 20:18:15 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:19.878 20:18:15 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:20:19.878 ************************************ 00:20:19.878 END TEST bdev_hello_world 00:20:19.878 ************************************ 00:20:19.878 20:18:15 blockdev_nvme_gpt -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:20:19.878 20:18:15 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:20:19.878 20:18:15 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:19.878 20:18:15 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:20:19.878 ************************************ 00:20:19.878 START TEST bdev_bounds 00:20:19.878 ************************************ 00:20:19.878 20:18:15 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:20:19.878 Process bdevio pid: 62024 00:20:19.878 20:18:15 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=62024 00:20:19.878 20:18:15 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:20:19.878 20:18:15 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 62024' 00:20:19.878 20:18:15 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 62024 00:20:19.878 20:18:15 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 62024 ']' 00:20:19.878 20:18:15 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:19.878 20:18:15 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:20:19.878 20:18:15 
blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:19.878 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:19.878 20:18:15 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:19.878 20:18:15 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:19.878 20:18:15 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:20:20.135 [2024-10-01 20:18:15.129552] Starting SPDK v25.01-pre git sha1 0c2005fb5 / DPDK 24.03.0 initialization... 00:20:20.135 [2024-10-01 20:18:15.129675] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62024 ] 00:20:20.135 [2024-10-01 20:18:15.279600] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:20.393 [2024-10-01 20:18:15.472240] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:20:20.393 [2024-10-01 20:18:15.472423] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:20:20.393 [2024-10-01 20:18:15.472840] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:20:21.326 20:18:16 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:21.326 20:18:16 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:20:21.326 20:18:16 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:20:21.326 I/O targets: 00:20:21.326 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:20:21.326 Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:20:21.326 Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:20:21.326 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:20:21.326 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:20:21.326 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:20:21.327 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:20:21.327 00:20:21.327 00:20:21.327 CUnit - A unit testing framework for C - Version 2.1-3 00:20:21.327 http://cunit.sourceforge.net/ 00:20:21.327 00:20:21.327 00:20:21.327 Suite: bdevio tests on: Nvme3n1 00:20:21.327 Test: blockdev write read block ...passed 00:20:21.327 Test: blockdev write zeroes read block ...passed 00:20:21.327 Test: blockdev write zeroes read no split ...passed 00:20:21.327 Test: blockdev write zeroes read split ...passed 00:20:21.327 Test: blockdev write zeroes read split partial ...passed 00:20:21.327 Test: blockdev reset ...[2024-10-01 20:18:16.390150] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0] resetting controller 00:20:21.327 passed 00:20:21.327 Test: blockdev write read 8 blocks ...[2024-10-01 20:18:16.393403] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:20:21.327 passed 00:20:21.327 Test: blockdev write read size > 128k ...passed 00:20:21.327 Test: blockdev write read invalid size ...passed 00:20:21.327 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:20:21.327 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:20:21.327 Test: blockdev write read max offset ...passed 00:20:21.327 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:20:21.327 Test: blockdev writev readv 8 blocks ...passed 00:20:21.327 Test: blockdev writev readv 30 x 1block ...passed 00:20:21.327 Test: blockdev writev readv block ...passed 00:20:21.327 Test: blockdev writev readv size > 128k ...passed 00:20:21.327 Test: blockdev writev readv size > 128k in two iovs ...passed 00:20:21.327 Test: blockdev comparev and writev ...[2024-10-01 20:18:16.399177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x293606000 len:0x1000 00:20:21.327 [2024-10-01 20:18:16.399229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:20:21.327 passed 00:20:21.327 Test: blockdev nvme passthru rw ...passed 00:20:21.327 Test: blockdev nvme passthru vendor specific ...passed 00:20:21.327 Test: blockdev nvme admin passthru ...[2024-10-01 20:18:16.399780] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:20:21.327 [2024-10-01 20:18:16.399807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:20:21.327 passed 00:20:21.327 Test: blockdev copy ...passed 00:20:21.327 Suite: bdevio tests on: Nvme2n3 00:20:21.327 Test: blockdev write read block ...passed 00:20:21.327 Test: blockdev write zeroes read block ...passed 00:20:21.327 Test: blockdev write zeroes read no split ...passed 00:20:21.327 Test: blockdev write zeroes read split ...passed 00:20:21.327 Test: blockdev write zeroes read split partial ...passed 00:20:21.327 Test: blockdev reset ...[2024-10-01 20:18:16.449031] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:20:21.327 passed 00:20:21.327 Test: blockdev write read 8 blocks ...[2024-10-01 20:18:16.452276] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:20:21.327 passed
00:20:21.327 Test: blockdev write read size > 128k ...passed
00:20:21.327 Test: blockdev write read invalid size ...passed
00:20:21.327 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:20:21.327 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:20:21.327 Test: blockdev write read max offset ...passed
00:20:21.327 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:20:21.327 Test: blockdev writev readv 8 blocks ...passed
00:20:21.327 Test: blockdev writev readv 30 x 1block ...passed
00:20:21.327 Test: blockdev writev readv block ...passed
00:20:21.327 Test: blockdev writev readv size > 128k ...passed
00:20:21.327 Test: blockdev writev readv size > 128k in two iovs ...passed
00:20:21.327 Test: blockdev comparev and writev ...[2024-10-01 20:18:16.458148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d163c000 len:0x1000
00:20:21.327 [2024-10-01 20:18:16.458202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1
00:20:21.327 passed
00:20:21.327 Test: blockdev nvme passthru rw ...passed
00:20:21.327 Test: blockdev nvme passthru vendor specific ...passed
00:20:21.327 Test: blockdev nvme admin passthru ...[2024-10-01 20:18:16.458790] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0
00:20:21.327 [2024-10-01 20:18:16.458816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1
00:20:21.327 passed
00:20:21.327 Test: blockdev copy ...passed
00:20:21.327 Suite: bdevio tests on: Nvme2n2
00:20:21.327 Test: blockdev write read block ...passed
00:20:21.327 Test: blockdev write zeroes read block ...passed
00:20:21.327 Test: blockdev write zeroes read no split ...passed
00:20:21.327 Test: blockdev write zeroes read split ...passed
00:20:21.327 Test: blockdev write zeroes read split partial ...passed
00:20:21.327 Test: blockdev reset ...[2024-10-01 20:18:16.507900] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller
00:20:21.327 passed
00:20:21.327 Test: blockdev write read 8 blocks ...[2024-10-01 20:18:16.511070] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:20:21.327 passed
00:20:21.327 Test: blockdev write read size > 128k ...passed
00:20:21.327 Test: blockdev write read invalid size ...passed
00:20:21.327 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:20:21.327 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:20:21.327 Test: blockdev write read max offset ...passed
00:20:21.327 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:20:21.327 Test: blockdev writev readv 8 blocks ...passed
00:20:21.327 Test: blockdev writev readv 30 x 1block ...passed
00:20:21.327 Test: blockdev writev readv block ...passed
00:20:21.327 Test: blockdev writev readv size > 128k ...passed
00:20:21.327 Test: blockdev writev readv size > 128k in two iovs ...passed
00:20:21.327 Test: blockdev comparev and writev ...[2024-10-01 20:18:16.516678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d1636000 len:0x1000
00:20:21.327 [2024-10-01 20:18:16.516736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1
00:20:21.327 passed
00:20:21.327 Test: blockdev nvme passthru rw ...passed
00:20:21.327 Test: blockdev nvme passthru vendor specific ...passed
00:20:21.327 Test: blockdev nvme admin passthru ...[2024-10-01 20:18:16.517361] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0
00:20:21.327 [2024-10-01 20:18:16.517388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1
00:20:21.327 passed
00:20:21.327 Test: blockdev copy ...passed
00:20:21.327 Suite: bdevio tests on: Nvme2n1
00:20:21.327 Test: blockdev write read block ...passed
00:20:21.328 Test: blockdev write zeroes read block ...passed
00:20:21.328 Test: blockdev write zeroes read no split ...passed
00:20:21.586 Test: blockdev write zeroes read split ...passed
00:20:21.586 Test: blockdev write zeroes read split partial ...passed
00:20:21.586 Test: blockdev reset ...[2024-10-01 20:18:16.564400] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller
00:20:21.586 passed
00:20:21.586 Test: blockdev write read 8 blocks ...[2024-10-01 20:18:16.567943] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:20:21.586 passed
00:20:21.586 Test: blockdev write read size > 128k ...passed
00:20:21.586 Test: blockdev write read invalid size ...passed
00:20:21.586 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:20:21.586 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:20:21.586 Test: blockdev write read max offset ...passed
00:20:21.586 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:20:21.586 Test: blockdev writev readv 8 blocks ...passed
00:20:21.586 Test: blockdev writev readv 30 x 1block ...passed
00:20:21.586 Test: blockdev writev readv block ...passed
00:20:21.586 Test: blockdev writev readv size > 128k ...passed
00:20:21.586 Test: blockdev writev readv size > 128k in two iovs ...passed
00:20:21.586 Test: blockdev comparev and writev ...[2024-10-01 20:18:16.573670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d1632000 len:0x1000
00:20:21.586 [2024-10-01 20:18:16.573729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1
00:20:21.586 passed
00:20:21.586 Test: blockdev nvme passthru rw ...passed
00:20:21.586 Test: blockdev nvme passthru vendor specific ...passed
00:20:21.586 Test: blockdev nvme admin passthru ...[2024-10-01 20:18:16.574393] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0
00:20:21.586 [2024-10-01 20:18:16.574418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1
00:20:21.586 passed
00:20:21.586 Test: blockdev copy ...passed
00:20:21.586 Suite: bdevio tests on: Nvme1n1p2
00:20:21.586 Test: blockdev write read block ...passed
00:20:21.586 Test: blockdev write zeroes read block ...passed
00:20:21.586 Test: blockdev write zeroes read no split ...passed
00:20:21.586 Test: blockdev write zeroes read split ...passed
00:20:21.586 Test: blockdev write zeroes read split partial ...passed
00:20:21.586 Test: blockdev reset ...[2024-10-01 20:18:16.621570] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller
00:20:21.586 passed
00:20:21.586 Test: blockdev write read 8 blocks ...[2024-10-01 20:18:16.624412] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:20:21.586 passed
00:20:21.586 Test: blockdev write read size > 128k ...passed
00:20:21.586 Test: blockdev write read invalid size ...passed
00:20:21.586 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:20:21.586 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:20:21.586 Test: blockdev write read max offset ...passed
00:20:21.586 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:20:21.586 Test: blockdev writev readv 8 blocks ...passed
00:20:21.586 Test: blockdev writev readv 30 x 1block ...passed
00:20:21.586 Test: blockdev writev readv block ...passed
00:20:21.586 Test: blockdev writev readv size > 128k ...passed
00:20:21.586 Test: blockdev writev readv size > 128k in two iovs ...passed
00:20:21.586 Test: blockdev comparev and writev ...[2024-10-01 20:18:16.630633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x2d162e000 len:0x1000
00:20:21.586 [2024-10-01 20:18:16.630677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1
00:20:21.586 passed
00:20:21.586 Test: blockdev nvme passthru rw ...passed
00:20:21.586 Test: blockdev nvme passthru vendor specific ...passed
00:20:21.586 Test: blockdev nvme admin passthru ...passed
00:20:21.586 Test: blockdev copy ...passed
00:20:21.586 Suite: bdevio tests on: Nvme1n1p1
00:20:21.586 Test: blockdev write read block ...passed
00:20:21.586 Test: blockdev write zeroes read block ...passed
00:20:21.586 Test: blockdev write zeroes read no split ...passed
00:20:21.586 Test: blockdev write zeroes read split ...passed
00:20:21.586 Test: blockdev write zeroes read split partial ...passed
00:20:21.586 Test: blockdev reset ...[2024-10-01 20:18:16.677218] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller
00:20:21.586 passed
00:20:21.586 Test: blockdev write read 8 blocks ...[2024-10-01 20:18:16.680144] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:20:21.586 passed
00:20:21.586 Test: blockdev write read size > 128k ...passed
00:20:21.586 Test: blockdev write read invalid size ...passed
00:20:21.586 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:20:21.586 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:20:21.586 Test: blockdev write read max offset ...passed
00:20:21.586 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:20:21.586 Test: blockdev writev readv 8 blocks ...passed
00:20:21.586 Test: blockdev writev readv 30 x 1block ...passed
00:20:21.586 Test: blockdev writev readv block ...passed
00:20:21.586 Test: blockdev writev readv size > 128k ...passed
00:20:21.586 Test: blockdev writev readv size > 128k in two iovs ...passed
00:20:21.586 Test: blockdev comparev and writev ...[2024-10-01 20:18:16.687470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x2cb40e000 len:0x1000
00:20:21.586 [2024-10-01 20:18:16.687517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1
00:20:21.586 passed
00:20:21.586 Test: blockdev nvme passthru rw ...passed
00:20:21.586 Test: blockdev nvme passthru vendor specific ...passed
00:20:21.586 Test: blockdev nvme admin passthru ...passed
00:20:21.586 Test: blockdev copy ...passed
00:20:21.586 Suite: bdevio tests on: Nvme0n1
00:20:21.586 Test: blockdev write read block ...passed
00:20:21.586 Test: blockdev write zeroes read block ...passed
00:20:21.586 Test: blockdev write zeroes read no split ...passed
00:20:21.586 Test: blockdev write zeroes read split ...passed
00:20:21.586 Test: blockdev write zeroes read split partial ...passed
00:20:21.586 Test: blockdev reset ...[2024-10-01 20:18:16.740045] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller
00:20:21.586 passed
00:20:21.586 Test: blockdev write read 8 blocks ...[2024-10-01 20:18:16.742766] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:20:21.586 passed
00:20:21.586 Test: blockdev write read size > 128k ...passed
00:20:21.586 Test: blockdev write read invalid size ...passed
00:20:21.586 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:20:21.586 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:20:21.586 Test: blockdev write read max offset ...passed
00:20:21.586 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:20:21.586 Test: blockdev writev readv 8 blocks ...passed
00:20:21.586 Test: blockdev writev readv 30 x 1block ...passed
00:20:21.586 Test: blockdev writev readv block ...passed
00:20:21.586 Test: blockdev writev readv size > 128k ...passed
00:20:21.586 Test: blockdev writev readv size > 128k in two iovs ...passed
00:20:21.586 Test: blockdev comparev and writev ...[2024-10-01 20:18:16.748011] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has
00:20:21.586 separate metadata which is not supported yet.
00:20:21.586 passed
00:20:21.586 Test: blockdev nvme passthru rw ...passed
00:20:21.586 Test: blockdev nvme passthru vendor specific ...[2024-10-01 20:18:16.748481] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0
00:20:21.586 [2024-10-01 20:18:16.748527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1
00:20:21.586 passed
00:20:21.586 Test: blockdev nvme admin passthru ...passed
00:20:21.586 Test: blockdev copy ...passed
00:20:21.586
00:20:21.586 Run Summary:    Type  Total    Ran Passed Failed Inactive
00:20:21.586               suites      7      7    n/a      0        0
00:20:21.587                tests    161    161    161      0        0
00:20:21.587              asserts   1025   1025   1025      0      n/a
00:20:21.587
00:20:21.587 Elapsed time =    1.134 seconds
00:20:21.587 0
00:20:21.587 20:18:16 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 62024
00:20:21.587 20:18:16 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 62024 ']'
00:20:21.587 20:18:16 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 62024
00:20:21.587 20:18:16 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@955 -- # uname
00:20:21.587 20:18:16 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:20:21.587 20:18:16 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62024
00:20:21.587 20:18:16 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:20:21.587 20:18:16 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:20:21.587 killing process with pid 62024
00:20:21.587 20:18:16 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62024'
00:20:21.587 20:18:16 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@969 -- # kill 62024
00:20:21.587 20:18:16 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@974 -- # wait 62024
00:20:22.981 20:18:17 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT
00:20:22.981
00:20:22.981 real    0m2.879s
00:20:22.981 user    0m7.379s
00:20:22.981 sys     0m0.375s
00:20:22.981 20:18:17 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable
00:20:22.981 20:18:17 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x
00:20:22.981 ************************************
00:20:22.981 END TEST bdev_bounds
00:20:22.981 ************************************
00:20:22.981 20:18:17 blockdev_nvme_gpt -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' ''
00:20:22.981 20:18:17 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']'
00:20:22.981 20:18:17 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable
00:20:22.981 20:18:17 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:20:22.981 ************************************
00:20:22.981 START TEST bdev_nbd
00:20:22.981 ************************************
00:20:22.981 20:18:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' ''
00:20:22.981 20:18:17 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s
00:20:22.981 20:18:17 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]]
00:20:22.981 20:18:17 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:20:22.981 20:18:17 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:20:22.981 20:18:17 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1')
00:20:22.981 20:18:17 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all
00:20:22.981 20:18:17 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7
00:20:22.981 20:18:17 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]]
00:20:22.981 20:18:17 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9')
00:20:22.981 20:18:17 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all
00:20:22.981 20:18:17 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7
00:20:22.981 20:18:17 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14')
00:20:22.981 20:18:17 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list
00:20:22.981 20:18:17 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1')
00:20:22.981 20:18:17 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list
00:20:22.981 20:18:17 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=62084
00:20:22.981 20:18:17 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json ''
00:20:22.981 20:18:17 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT
00:20:22.981 20:18:17 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 62084 /var/tmp/spdk-nbd.sock
00:20:22.981 20:18:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 62084 ']'
00:20:22.981 20:18:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:20:22.981 20:18:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100
00:20:22.981 20:18:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:20:22.981 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:20:22.981 20:18:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable
00:20:22.981 20:18:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x
00:20:23.240 [2024-10-01 20:18:18.049610] Starting SPDK v25.01-pre git sha1 0c2005fb5 / DPDK 24.03.0 initialization...
00:20:22.981 [2024-10-01 20:18:18.049737] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:20:23.240 [2024-10-01 20:18:18.198352] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:20:23.240 [2024-10-01 20:18:18.393599] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:20:24.182 20:18:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:20:24.182 20:18:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # return 0
00:20:24.182 20:18:19 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1'
00:20:24.182 20:18:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:20:24.182 20:18:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1')
00:20:24.182 20:18:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list
00:20:24.182 20:18:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1'
00:20:24.182 20:18:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:20:24.182 20:18:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1')
00:20:24.182 20:18:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list
00:20:24.182 20:18:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i
00:20:24.182 20:18:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device
00:20:24.182 20:18:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 ))
00:20:24.182 20:18:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 ))
00:20:24.182 20:18:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1
00:20:24.439 20:18:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0
00:20:24.439 20:18:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0
00:20:24.439 20:18:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0
00:20:24.439 20:18:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0
00:20:24.439 20:18:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i
00:20:24.439 20:18:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:20:24.439 20:18:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:20:24.439 20:18:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions
00:20:24.439 20:18:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break
00:20:24.439 20:18:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:20:24.439 20:18:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:20:24.439 20:18:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:20:24.439 1+0 records in
00:20:24.439 1+0 records out
00:20:24.439 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000350358 s, 11.7 MB/s
00:20:24.439 20:18:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:20:24.439 20:18:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096
00:20:24.439 20:18:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:20:24.439 20:18:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:20:24.439 20:18:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0
00:20:24.439 20:18:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:20:24.439 20:18:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 ))
00:20:24.439 20:18:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1
00:20:24.439 20:18:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1
00:20:24.697 20:18:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1
00:20:24.697 20:18:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1
00:20:24.697 20:18:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1
00:20:24.697 20:18:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i
00:20:24.697 20:18:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:20:24.697 20:18:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:20:24.697 20:18:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions
00:20:24.697 20:18:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break
00:20:24.697 20:18:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:20:24.697 20:18:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:20:24.697 20:18:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:20:24.697 1+0 records in
00:20:24.697 1+0 records out
00:20:24.697 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000471587 s, 8.7 MB/s
00:20:24.697 20:18:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:20:24.697 20:18:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096
00:20:24.697 20:18:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:20:24.697 20:18:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:20:24.697 20:18:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0
00:20:24.697 20:18:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:20:24.697 20:18:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 ))
00:20:24.697 20:18:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2
00:20:24.697 20:18:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2
00:20:24.697 20:18:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2
00:20:24.697 20:18:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2
00:20:24.697 20:18:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd2
00:20:24.697 20:18:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i
00:20:24.697 20:18:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:20:24.697 20:18:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:20:24.697 20:18:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd2 /proc/partitions
00:20:24.697 20:18:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break
00:20:24.697 20:18:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:20:24.697 20:18:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:20:24.697 20:18:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:20:24.697 1+0 records in
00:20:24.697 1+0 records out
00:20:24.697 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000460924 s, 8.9 MB/s
00:20:24.697 20:18:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:20:24.697 20:18:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096
00:20:24.697 20:18:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:20:24.697 20:18:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:20:24.697 20:18:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0
00:20:24.697 20:18:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:20:24.697 20:18:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 ))
00:20:24.697 20:18:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1
00:20:24.955 20:18:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3
00:20:24.955 20:18:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3
00:20:24.955 20:18:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3
00:20:24.955 20:18:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd3
00:20:24.955 20:18:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i
00:20:24.955 20:18:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:20:24.955 20:18:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:20:24.955 20:18:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd3 /proc/partitions
00:20:24.955 20:18:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break
00:20:24.955 20:18:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:20:24.955 20:18:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:20:24.955 20:18:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:20:24.955 1+0 records in
00:20:24.955 1+0 records out
00:20:24.955 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000548592 s, 7.5 MB/s
00:20:24.955 20:18:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:20:24.955 20:18:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096
00:20:24.955 20:18:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:20:24.955 20:18:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:20:24.955 20:18:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0
00:20:24.955 20:18:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:20:24.955 20:18:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 ))
00:20:24.955 20:18:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2
00:20:25.213 20:18:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4
00:20:25.213 20:18:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4
00:20:25.213 20:18:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4
00:20:25.213 20:18:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd4
00:20:25.213 20:18:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i
00:20:25.213 20:18:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:20:25.213 20:18:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:20:25.213 20:18:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd4 /proc/partitions
00:20:25.213 20:18:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break
00:20:25.213 20:18:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:20:25.213 20:18:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:20:25.213 20:18:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:20:25.213 1+0 records in
00:20:25.213 1+0 records out
00:20:25.213 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00046535 s, 8.8 MB/s
00:20:25.213 20:18:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:20:25.213 20:18:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096
00:20:25.213 20:18:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:20:25.213 20:18:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:20:25.213 20:18:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0
00:20:25.213 20:18:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:20:25.213 20:18:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 ))
00:20:25.213 20:18:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3
00:20:25.470 20:18:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5
00:20:25.470 20:18:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5
00:20:25.470 20:18:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5
00:20:25.470 20:18:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd5
00:20:25.470 20:18:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i
00:20:25.470 20:18:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:20:25.470 20:18:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:20:25.470 20:18:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd5 /proc/partitions
00:20:25.470 20:18:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break
00:20:25.470 20:18:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:20:25.470 20:18:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:20:25.470 20:18:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:20:25.470 1+0 records in
00:20:25.470 1+0 records out
00:20:25.470 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000504169 s, 8.1 MB/s
00:20:25.470 20:18:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:20:25.470 20:18:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096
00:20:25.470 20:18:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:20:25.470 20:18:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:20:25.470 20:18:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0
00:20:25.470 20:18:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:20:25.470 20:18:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 ))
00:20:25.470 20:18:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1
00:20:25.737 20:18:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6
00:20:25.737 20:18:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6
00:20:25.737 20:18:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6
00:20:25.737 20:18:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd6
00:20:25.737 20:18:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i
00:20:25.737 20:18:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:20:25.737 20:18:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:20:25.737 20:18:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd6 /proc/partitions
00:20:25.737 20:18:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break
00:20:25.737 20:18:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:20:25.737 20:18:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:20:25.737 20:18:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:20:25.737 1+0 records in
00:20:25.737 1+0 records out
00:20:25.737 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000447546 s, 9.2 MB/s
00:20:25.737 20:18:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:20:25.737 20:18:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096
00:20:25.737 20:18:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:20:25.737 20:18:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:20:25.737 20:18:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0
00:20:25.737 20:18:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:20:25.737 20:18:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 ))
00:20:25.737 20:18:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:20:26.005 20:18:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[
00:20:26.005 {
00:20:26.005 "nbd_device": "/dev/nbd0",
00:20:26.005 "bdev_name": "Nvme0n1"
00:20:26.005 },
00:20:26.005 {
00:20:26.005 "nbd_device": "/dev/nbd1",
00:20:26.005 "bdev_name": "Nvme1n1p1"
00:20:26.005 },
00:20:26.005 {
00:20:26.005 "nbd_device": "/dev/nbd2",
00:20:26.005 "bdev_name": "Nvme1n1p2"
00:20:26.005 },
00:20:26.005 {
00:20:26.005 "nbd_device": "/dev/nbd3",
00:20:26.005 "bdev_name": "Nvme2n1"
00:20:26.005 },
00:20:26.005 {
00:20:26.005 "nbd_device": "/dev/nbd4",
00:20:26.005 "bdev_name": "Nvme2n2"
00:20:26.005 },
00:20:26.005 {
00:20:26.005 "nbd_device": "/dev/nbd5",
00:20:26.005 "bdev_name": "Nvme2n3"
00:20:26.005 },
00:20:26.005 {
00:20:26.005 "nbd_device": "/dev/nbd6",
00:20:26.005 "bdev_name": "Nvme3n1"
00:20:26.005 }
00:20:26.005 ]'
00:20:26.005 20:18:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device'))
00:20:26.005 20:18:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[
00:20:26.005 {
00:20:26.005 "nbd_device": "/dev/nbd0",
00:20:26.005 "bdev_name": "Nvme0n1"
00:20:26.005 },
00:20:26.005 {
00:20:26.005 "nbd_device": "/dev/nbd1",
00:20:26.005 "bdev_name": "Nvme1n1p1"
00:20:26.005 },
00:20:26.005 {
00:20:26.005 "nbd_device": "/dev/nbd2",
00:20:26.005 "bdev_name": "Nvme1n1p2"
00:20:26.005 },
00:20:26.005 {
00:20:26.005 "nbd_device": "/dev/nbd3",
00:20:26.005 "bdev_name": "Nvme2n1"
00:20:26.005 },
00:20:26.005 {
00:20:26.005 "nbd_device": "/dev/nbd4",
00:20:26.005 "bdev_name": "Nvme2n2"
00:20:26.005 },
00:20:26.005 {
00:20:26.005 "nbd_device": "/dev/nbd5",
00:20:26.005 "bdev_name": "Nvme2n3"
00:20:26.005 },
00:20:26.005 {
00:20:26.005 "nbd_device": "/dev/nbd6",
00:20:26.005 "bdev_name": "Nvme3n1"
00:20:26.005 }
00:20:26.005 ]'
00:20:26.005 20:18:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device'
00:20:26.005 20:18:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6'
00:20:26.005 20:18:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:20:26.005 20:18:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6')
00:20:26.005 20:18:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list
00:20:26.005 20:18:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i
00:20:26.005 20:18:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:20:26.005 20:18:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:20:26.006 20:18:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:20:26.006 20:18:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:20:26.006 20:18:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:20:26.006 20:18:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:20:26.006 20:18:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:20:26.006 20:18:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:20:26.006 20:18:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:20:26.006 20:18:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:20:26.006 20:18:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:20:26.006 20:18:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:20:26.277 20:18:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:20:26.277 20:18:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:20:26.277 20:18:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:20:26.277 20:18:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:20:26.277 20:18:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:20:26.277 20:18:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:20:26.277 20:18:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:20:26.277 20:18:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:20:26.277 20:18:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:20:26.277 20:18:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2
00:20:26.535 20:18:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2
00:20:26.535 20:18:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2
00:20:26.535 20:18:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2
00:20:26.535 20:18:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:20:26.535 20:18:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:20:26.535 20:18:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions
00:20:26.535 20:18:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:20:26.535 20:18:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:20:26.535 20:18:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:20:26.535 20:18:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3
00:20:26.793 20:18:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3
00:20:26.794 20:18:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3
00:20:26.794 20:18:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3
00:20:26.794 20:18:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:20:26.794 20:18:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:20:26.794 20:18:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions
00:20:26.794 20:18:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:20:26.794 20:18:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:20:26.794 20:18:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:20:26.794 20:18:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4
00:20:26.794 20:18:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4
00:20:26.794 20:18:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4
00:20:26.794 20:18:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4
00:20:26.794 20:18:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:20:26.794 20:18:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:20:26.794 20:18:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions
00:20:26.794 20:18:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:20:26.794 20:18:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:20:26.794 20:18:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:20:26.794 20:18:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5
00:20:27.051 20:18:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5
00:20:27.051 20:18:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5
00:20:27.051 20:18:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5
00:20:27.051 20:18:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:20:27.051 20:18:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:20:27.051 20:18:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions
00:20:27.051 20:18:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:20:27.051 20:18:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:20:27.051 20:18:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:20:27.051 20:18:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6
00:20:27.308 20:18:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6
00:20:27.308 20:18:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6
00:20:27.308 20:18:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6
00:20:27.308 20:18:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:27.308 20:18:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:27.308 20:18:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:20:27.308 20:18:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:20:27.308 20:18:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:20:27.308 20:18:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:20:27.308 20:18:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:27.308 20:18:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:20:27.567 20:18:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:20:27.567 20:18:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:20:27.567 20:18:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:20:27.567 20:18:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:20:27.567 20:18:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:20:27.567 20:18:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:20:27.567 20:18:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:20:27.567 20:18:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:20:27.567 20:18:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:20:27.567 20:18:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:20:27.567 20:18:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:20:27.567 20:18:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:20:27.567 20:18:22 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:20:27.567 20:18:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:27.567 20:18:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:20:27.567 20:18:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:20:27.567 20:18:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:20:27.567 20:18:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:20:27.567 20:18:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:20:27.567 20:18:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:27.567 20:18:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:20:27.567 20:18:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:27.567 20:18:22 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:20:27.567 20:18:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:27.567 20:18:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:20:27.567 20:18:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:27.567 20:18:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:20:27.567 20:18:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:20:27.825 /dev/nbd0 00:20:27.825 20:18:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:27.825 20:18:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:27.825 20:18:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:20:27.825 20:18:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:20:27.825 20:18:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:20:27.825 20:18:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:20:27.825 20:18:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:20:27.825 20:18:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:20:27.825 20:18:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:20:27.825 20:18:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:20:27.825 20:18:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:27.825 1+0 records in 00:20:27.825 1+0 records out 00:20:27.825 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000513867 s, 8.0 MB/s 00:20:27.825 20:18:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:27.825 20:18:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:20:27.825 20:18:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:27.825 20:18:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:20:27.825 20:18:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:20:27.825 20:18:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:27.825 20:18:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:20:27.825 20:18:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1 00:20:28.085 /dev/nbd1 00:20:28.085 20:18:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:20:28.085 20:18:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:20:28.085 20:18:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:20:28.085 20:18:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:20:28.085 20:18:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:20:28.085 20:18:23 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:20:28.085 20:18:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:20:28.086 20:18:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:20:28.086 20:18:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:20:28.086 20:18:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:20:28.086 20:18:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:28.086 1+0 records in 00:20:28.086 1+0 records out 00:20:28.086 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000402672 s, 10.2 MB/s 00:20:28.086 20:18:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:28.086 20:18:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:20:28.086 20:18:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:28.086 20:18:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:20:28.086 20:18:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:20:28.086 20:18:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:28.086 20:18:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:20:28.086 20:18:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10 00:20:28.346 /dev/nbd10 00:20:28.346 20:18:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:20:28.346 20:18:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:20:28.346 20:18:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd10 00:20:28.346 20:18:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:20:28.346 20:18:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:20:28.346 20:18:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:20:28.346 20:18:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd10 /proc/partitions 00:20:28.346 20:18:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:20:28.346 20:18:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:20:28.346 20:18:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:20:28.346 20:18:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:28.346 1+0 records in 00:20:28.346 1+0 records out 00:20:28.346 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000491821 s, 8.3 MB/s 00:20:28.346 20:18:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:28.346 20:18:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:20:28.346 20:18:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:28.346 20:18:23 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:20:28.346 20:18:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:20:28.346 20:18:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:28.346 20:18:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:20:28.346 20:18:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:20:28.604 /dev/nbd11 00:20:28.604 20:18:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:20:28.604 20:18:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:20:28.604 20:18:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd11 00:20:28.604 20:18:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:20:28.604 20:18:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:20:28.604 20:18:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:20:28.604 20:18:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd11 /proc/partitions 00:20:28.604 20:18:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:20:28.604 20:18:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:20:28.604 20:18:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:20:28.604 20:18:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:28.604 1+0 records in 00:20:28.604 1+0 records out 00:20:28.604 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000478011 s, 8.6 MB/s 00:20:28.604 20:18:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:28.604 20:18:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:20:28.604 20:18:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:28.604 20:18:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:20:28.604 20:18:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:20:28.604 20:18:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:28.604 20:18:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:20:28.604 20:18:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:20:28.862 /dev/nbd12 00:20:28.862 20:18:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:20:28.862 20:18:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:20:28.862 20:18:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd12 00:20:28.862 20:18:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:20:28.862 20:18:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:20:28.862 20:18:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:20:28.862 20:18:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd12 /proc/partitions 
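The waitfornbd helper traced above (and again below for each remaining device) is the readiness gate behind every nbd_start_disk call: poll /proc/partitions until the kernel registers the device, then prove it actually serves I/O with one direct 4 KiB read. A minimal bash reconstruction from this trace follows; the sleep between retries is an assumption (the trace succeeds on the first pass, so no back-off is visible) and the scratch path is shortened.

waitfornbd() {
    local nbd_name=$1
    local i
    for ((i = 1; i <= 20; i++)); do
        # the device shows up in /proc/partitions once the kernel has attached it
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1    # assumed back-off; not visible in the trace
    done
    for ((i = 1; i <= 20; i++)); do
        # a single O_DIRECT 4 KiB read proves the device services requests
        dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct 2> /dev/null
        local size
        size=$(stat -c %s /tmp/nbdtest)
        rm -f /tmp/nbdtest
        [ "$size" != 0 ] && return 0
        sleep 0.1    # assumed back-off, as above
    done
    return 1
}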
00:20:28.862 20:18:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:20:28.862 20:18:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:20:28.862 20:18:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:20:28.862 20:18:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:28.862 1+0 records in 00:20:28.862 1+0 records out 00:20:28.862 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000427795 s, 9.6 MB/s 00:20:28.862 20:18:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:28.862 20:18:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:20:28.862 20:18:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:28.862 20:18:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:20:28.862 20:18:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:20:28.862 20:18:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:28.862 20:18:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:20:28.862 20:18:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:20:29.121 /dev/nbd13 00:20:29.121 20:18:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:20:29.121 20:18:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:20:29.121 20:18:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd13 00:20:29.121 20:18:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:20:29.121 20:18:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:20:29.121 20:18:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:20:29.121 20:18:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd13 /proc/partitions 00:20:29.121 20:18:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:20:29.121 20:18:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:20:29.121 20:18:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:20:29.121 20:18:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:29.121 1+0 records in 00:20:29.121 1+0 records out 00:20:29.121 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000458202 s, 8.9 MB/s 00:20:29.121 20:18:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:29.121 20:18:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:20:29.121 20:18:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:29.121 20:18:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:20:29.121 20:18:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:20:29.121 20:18:24 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:29.121 20:18:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:20:29.121 20:18:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:20:29.379 /dev/nbd14 00:20:29.379 20:18:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:20:29.379 20:18:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:20:29.379 20:18:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd14 00:20:29.379 20:18:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:20:29.379 20:18:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:20:29.379 20:18:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:20:29.379 20:18:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd14 /proc/partitions 00:20:29.379 20:18:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:20:29.379 20:18:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:20:29.379 20:18:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:20:29.379 20:18:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:29.379 1+0 records in 00:20:29.379 1+0 records out 00:20:29.379 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000495655 s, 8.3 MB/s 00:20:29.379 20:18:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:29.379 20:18:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:20:29.379 20:18:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:29.379 20:18:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:20:29.379 20:18:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:20:29.379 20:18:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:29.379 20:18:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:20:29.379 20:18:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:20:29.379 20:18:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:29.379 20:18:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:20:29.379 20:18:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:20:29.379 { 00:20:29.379 "nbd_device": "/dev/nbd0", 00:20:29.379 "bdev_name": "Nvme0n1" 00:20:29.379 }, 00:20:29.379 { 00:20:29.379 "nbd_device": "/dev/nbd1", 00:20:29.379 "bdev_name": "Nvme1n1p1" 00:20:29.379 }, 00:20:29.379 { 00:20:29.379 "nbd_device": "/dev/nbd10", 00:20:29.379 "bdev_name": "Nvme1n1p2" 00:20:29.379 }, 00:20:29.379 { 00:20:29.379 "nbd_device": "/dev/nbd11", 00:20:29.379 "bdev_name": "Nvme2n1" 00:20:29.379 }, 00:20:29.379 { 00:20:29.379 "nbd_device": "/dev/nbd12", 00:20:29.379 "bdev_name": "Nvme2n2" 00:20:29.379 }, 00:20:29.379 { 00:20:29.379 "nbd_device": "/dev/nbd13", 00:20:29.379 "bdev_name": "Nvme2n3" 
00:20:29.379 }, 00:20:29.379 { 00:20:29.379 "nbd_device": "/dev/nbd14", 00:20:29.379 "bdev_name": "Nvme3n1" 00:20:29.379 } 00:20:29.379 ]' 00:20:29.379 20:18:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:20:29.379 20:18:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:20:29.379 { 00:20:29.379 "nbd_device": "/dev/nbd0", 00:20:29.379 "bdev_name": "Nvme0n1" 00:20:29.379 }, 00:20:29.379 { 00:20:29.379 "nbd_device": "/dev/nbd1", 00:20:29.379 "bdev_name": "Nvme1n1p1" 00:20:29.379 }, 00:20:29.379 { 00:20:29.379 "nbd_device": "/dev/nbd10", 00:20:29.379 "bdev_name": "Nvme1n1p2" 00:20:29.379 }, 00:20:29.379 { 00:20:29.379 "nbd_device": "/dev/nbd11", 00:20:29.379 "bdev_name": "Nvme2n1" 00:20:29.379 }, 00:20:29.379 { 00:20:29.379 "nbd_device": "/dev/nbd12", 00:20:29.379 "bdev_name": "Nvme2n2" 00:20:29.379 }, 00:20:29.379 { 00:20:29.379 "nbd_device": "/dev/nbd13", 00:20:29.379 "bdev_name": "Nvme2n3" 00:20:29.379 }, 00:20:29.379 { 00:20:29.379 "nbd_device": "/dev/nbd14", 00:20:29.379 "bdev_name": "Nvme3n1" 00:20:29.379 } 00:20:29.379 ]' 00:20:29.637 20:18:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:20:29.637 /dev/nbd1 00:20:29.637 /dev/nbd10 00:20:29.637 /dev/nbd11 00:20:29.637 /dev/nbd12 00:20:29.637 /dev/nbd13 00:20:29.637 /dev/nbd14' 00:20:29.637 20:18:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:20:29.637 /dev/nbd1 00:20:29.637 /dev/nbd10 00:20:29.637 /dev/nbd11 00:20:29.637 /dev/nbd12 00:20:29.637 /dev/nbd13 00:20:29.637 /dev/nbd14' 00:20:29.637 20:18:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:20:29.637 20:18:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:20:29.637 20:18:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:20:29.637 20:18:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:20:29.637 20:18:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:20:29.637 20:18:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:20:29.637 20:18:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:20:29.637 20:18:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:20:29.637 20:18:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:20:29.637 20:18:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:20:29.637 20:18:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:20:29.637 20:18:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:20:29.637 256+0 records in 00:20:29.637 256+0 records out 00:20:29.637 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00913482 s, 115 MB/s 00:20:29.637 20:18:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:20:29.637 20:18:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:20:30.212 256+0 records in 00:20:30.212 256+0 records out 00:20:30.212 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.640928 s, 1.6 MB/s 00:20:30.212 20:18:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:20:30.212 20:18:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:20:30.212 256+0 records in 00:20:30.212 256+0 records out 00:20:30.212 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.077616 s, 13.5 MB/s 00:20:30.212 20:18:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:20:30.212 20:18:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:20:30.471 256+0 records in 00:20:30.471 256+0 records out 00:20:30.471 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0786313 s, 13.3 MB/s 00:20:30.471 20:18:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:20:30.471 20:18:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:20:30.471 256+0 records in 00:20:30.471 256+0 records out 00:20:30.471 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.077135 s, 13.6 MB/s 00:20:30.471 20:18:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:20:30.471 20:18:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:20:30.471 256+0 records in 00:20:30.471 256+0 records out 00:20:30.471 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0763304 s, 13.7 MB/s 00:20:30.471 20:18:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:20:30.471 20:18:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:20:30.729 256+0 records in 00:20:30.729 256+0 records out 00:20:30.729 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0798494 s, 13.1 MB/s 00:20:30.729 20:18:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:20:30.729 20:18:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:20:30.729 256+0 records in 00:20:30.729 256+0 records out 00:20:30.729 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0770706 s, 13.6 MB/s 00:20:30.730 20:18:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:20:30.730 20:18:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:20:30.730 20:18:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:20:30.730 20:18:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:20:30.730 20:18:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:20:30.730 20:18:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:20:30.730 20:18:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:20:30.730 20:18:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in 
"${nbd_list[@]}" 00:20:30.730 20:18:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:20:30.730 20:18:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:20:30.730 20:18:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:20:30.730 20:18:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:20:30.730 20:18:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:20:30.730 20:18:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:20:30.730 20:18:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:20:30.730 20:18:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:20:30.730 20:18:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:20:30.730 20:18:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:20:30.730 20:18:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:20:30.730 20:18:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:20:30.730 20:18:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:20:30.730 20:18:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:20:30.730 20:18:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:20:30.730 20:18:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:30.730 20:18:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:20:30.730 20:18:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:30.730 20:18:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:20:30.730 20:18:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:30.730 20:18:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:20:30.988 20:18:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:30.988 20:18:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:30.988 20:18:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:30.988 20:18:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:30.988 20:18:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:30.988 20:18:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:30.988 20:18:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:20:30.988 20:18:26 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:20:30.988 20:18:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:30.988 20:18:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:20:31.247 20:18:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:20:31.247 20:18:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:20:31.247 20:18:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:20:31.247 20:18:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:31.247 20:18:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:31.247 20:18:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:20:31.247 20:18:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:20:31.247 20:18:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:20:31.247 20:18:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:31.247 20:18:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:20:31.504 20:18:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:20:31.504 20:18:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:20:31.504 20:18:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:20:31.504 20:18:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:31.504 20:18:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:31.504 20:18:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:20:31.504 20:18:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:20:31.504 20:18:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:20:31.504 20:18:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:31.504 20:18:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:20:31.504 20:18:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:20:31.504 20:18:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:20:31.504 20:18:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:20:31.504 20:18:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:31.504 20:18:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:31.504 20:18:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:20:31.504 20:18:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:20:31.504 20:18:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:20:31.504 20:18:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:31.504 20:18:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:20:31.762 20:18:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd12 00:20:31.762 20:18:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:20:31.762 20:18:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:20:31.762 20:18:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:31.762 20:18:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:31.762 20:18:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:20:31.762 20:18:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:20:31.762 20:18:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:20:31.762 20:18:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:31.762 20:18:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:20:32.020 20:18:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:20:32.020 20:18:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:20:32.020 20:18:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:20:32.020 20:18:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:32.020 20:18:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:32.020 20:18:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:20:32.020 20:18:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:20:32.020 20:18:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:20:32.020 20:18:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:32.020 20:18:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:20:32.277 20:18:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:20:32.277 20:18:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:20:32.277 20:18:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:20:32.277 20:18:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:32.277 20:18:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:32.277 20:18:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:20:32.277 20:18:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:20:32.277 20:18:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:20:32.277 20:18:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:20:32.277 20:18:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:32.277 20:18:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:20:32.535 20:18:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:20:32.535 20:18:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:20:32.535 20:18:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:20:32.535 20:18:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # 
nbd_disks_name= 00:20:32.535 20:18:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:20:32.535 20:18:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:20:32.535 20:18:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:20:32.535 20:18:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:20:32.535 20:18:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:20:32.535 20:18:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:20:32.535 20:18:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:20:32.535 20:18:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:20:32.535 20:18:27 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:20:32.535 20:18:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:32.535 20:18:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:20:32.535 20:18:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:20:32.793 malloc_lvol_verify 00:20:32.793 20:18:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:20:33.050 ff45d4f7-b30f-4dbc-ad90-54965988d565 00:20:33.050 20:18:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:20:33.050 a470d860-7ea5-48c9-bd26-b029704dc849 00:20:33.050 20:18:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:20:33.308 /dev/nbd0 00:20:33.308 20:18:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:20:33.308 20:18:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:20:33.308 20:18:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:20:33.308 20:18:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:20:33.308 20:18:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:20:33.308 mke2fs 1.47.0 (5-Feb-2023) 00:20:33.308 Discarding device blocks: 0/4096 done 00:20:33.308 Creating filesystem with 4096 1k blocks and 1024 inodes 00:20:33.308 00:20:33.308 Allocating group tables: 0/1 done 00:20:33.308 Writing inode tables: 0/1 done 00:20:33.308 Creating journal (1024 blocks): done 00:20:33.308 Writing superblocks and filesystem accounting information: 0/1 done 00:20:33.308 00:20:33.308 20:18:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:20:33.308 20:18:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:33.308 20:18:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:20:33.308 20:18:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:33.308 20:18:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:20:33.308 20:18:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:20:33.308 20:18:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:20:33.565 20:18:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:33.565 20:18:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:33.565 20:18:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:33.565 20:18:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:33.565 20:18:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:33.565 20:18:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:33.565 20:18:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:20:33.565 20:18:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:20:33.565 20:18:28 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 62084 00:20:33.565 20:18:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 62084 ']' 00:20:33.565 20:18:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 62084 00:20:33.565 20:18:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:20:33.565 20:18:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:33.565 20:18:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62084 00:20:33.565 20:18:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:33.565 20:18:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:33.565 killing process with pid 62084 00:20:33.565 20:18:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62084' 00:20:33.565 20:18:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@969 -- # kill 62084 00:20:33.565 20:18:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@974 -- # wait 62084 00:20:34.937 20:18:30 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:20:34.937 00:20:34.937 real 0m12.033s 00:20:34.937 user 0m16.473s 00:20:34.937 sys 0m3.678s 00:20:34.937 20:18:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:34.937 ************************************ 00:20:34.937 END TEST bdev_nbd 00:20:34.937 ************************************ 00:20:34.937 20:18:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:20:34.937 20:18:30 blockdev_nvme_gpt -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:20:34.937 20:18:30 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = nvme ']' 00:20:34.937 20:18:30 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = gpt ']' 00:20:34.937 skipping fio tests on NVMe due to multi-ns failures. 00:20:34.937 20:18:30 blockdev_nvme_gpt -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
00:20:34.937 20:18:30 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:20:34.937 20:18:30 blockdev_nvme_gpt -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:20:34.937 20:18:30 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:20:34.937 20:18:30 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:34.937 20:18:30 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:20:34.937 ************************************ 00:20:34.937 START TEST bdev_verify 00:20:34.937 ************************************ 00:20:34.937 20:18:30 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:20:34.937 [2024-10-01 20:18:30.145139] Starting SPDK v25.01-pre git sha1 0c2005fb5 / DPDK 24.03.0 initialization... 00:20:34.937 [2024-10-01 20:18:30.145323] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62510 ] 00:20:35.194 [2024-10-01 20:18:30.313374] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:35.452 [2024-10-01 20:18:30.508802] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:20:35.452 [2024-10-01 20:18:30.509126] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:20:36.385 Running I/O for 5 seconds... 
00:20:41.544 21376.00 IOPS, 83.50 MiB/s 23680.00 IOPS, 92.50 MiB/s 24597.33 IOPS, 96.08 MiB/s 23840.00 IOPS, 93.12 MiB/s 23564.80 IOPS, 92.05 MiB/s 00:20:41.544 Latency(us) 00:20:41.544 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:41.544 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:20:41.544 Verification LBA range: start 0x0 length 0xbd0bd 00:20:41.544 Nvme0n1 : 5.04 1598.94 6.25 0.00 0.00 79691.99 15224.52 79853.10 00:20:41.544 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:41.544 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:20:41.544 Nvme0n1 : 5.04 1700.25 6.64 0.00 0.00 74946.39 15123.69 85095.98 00:20:41.544 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:20:41.544 Verification LBA range: start 0x0 length 0x4ff80 00:20:41.544 Nvme1n1p1 : 5.08 1600.76 6.25 0.00 0.00 79474.23 7309.78 72190.42 00:20:41.544 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:41.544 Verification LBA range: start 0x4ff80 length 0x4ff80 00:20:41.544 Nvme1n1p1 : 5.08 1701.42 6.65 0.00 0.00 74652.22 9326.28 72593.72 00:20:41.544 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:20:41.544 Verification LBA range: start 0x0 length 0x4ff7f 00:20:41.544 Nvme1n1p2 : 5.08 1599.96 6.25 0.00 0.00 79387.14 8318.03 71383.83 00:20:41.544 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:41.544 Verification LBA range: start 0x4ff7f length 0x4ff7f 00:20:41.544 Nvme1n1p2 : 5.09 1710.16 6.68 0.00 0.00 74321.25 8065.97 67754.14 00:20:41.544 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:20:41.544 Verification LBA range: start 0x0 length 0x80000 00:20:41.544 Nvme2n1 : 5.09 1608.50 6.28 0.00 0.00 79063.02 9477.51 71383.83 00:20:41.544 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:41.544 Verification LBA range: start 0x80000 length 0x80000 00:20:41.544 Nvme2n1 : 5.09 1709.74 6.68 0.00 0.00 74174.63 8368.44 66947.54 00:20:41.544 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:20:41.544 Verification LBA range: start 0x0 length 0x80000 00:20:41.545 Nvme2n2 : 5.10 1607.48 6.28 0.00 0.00 78946.47 10788.23 71787.13 00:20:41.545 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:41.545 Verification LBA range: start 0x80000 length 0x80000 00:20:41.545 Nvme2n2 : 5.09 1709.33 6.68 0.00 0.00 74030.94 8620.50 70173.93 00:20:41.545 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:20:41.545 Verification LBA range: start 0x0 length 0x80000 00:20:41.545 Nvme2n3 : 5.10 1606.35 6.27 0.00 0.00 78819.54 13308.85 71383.83 00:20:41.545 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:41.545 Verification LBA range: start 0x80000 length 0x80000 00:20:41.545 Nvme2n3 : 5.09 1708.95 6.68 0.00 0.00 73877.55 8822.15 72190.42 00:20:41.545 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:20:41.545 Verification LBA range: start 0x0 length 0x20000 00:20:41.545 Nvme3n1 : 5.10 1605.92 6.27 0.00 0.00 78657.04 11695.66 73803.62 00:20:41.545 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:41.545 Verification LBA range: start 0x20000 length 0x20000 00:20:41.545 Nvme3n1 : 5.10 1707.92 6.67 0.00 0.00 73805.88 9477.51 75013.51 00:20:41.545 
=================================================================================================================== 00:20:41.545 Total : 23175.69 90.53 0.00 0.00 76625.89 7309.78 85095.98 00:20:43.442 00:20:43.442 real 0m8.139s 00:20:43.442 user 0m14.996s 00:20:43.442 sys 0m0.324s 00:20:43.442 20:18:38 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:43.442 20:18:38 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:20:43.442 ************************************ 00:20:43.442 END TEST bdev_verify 00:20:43.442 ************************************ 00:20:43.442 20:18:38 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:20:43.442 20:18:38 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:20:43.442 20:18:38 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:43.442 20:18:38 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:20:43.442 ************************************ 00:20:43.442 START TEST bdev_verify_big_io 00:20:43.442 ************************************ 00:20:43.442 20:18:38 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:20:43.442 [2024-10-01 20:18:38.300106] Starting SPDK v25.01-pre git sha1 0c2005fb5 / DPDK 24.03.0 initialization... 00:20:43.442 [2024-10-01 20:18:38.300234] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62619 ] 00:20:43.442 [2024-10-01 20:18:38.451478] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:43.442 [2024-10-01 20:18:38.643217] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:20:43.442 [2024-10-01 20:18:38.643336] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:20:44.376 Running I/O for 5 seconds... 
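This big-I/O pass, like the 4 KiB verify pass that just finished, is a single bdevperf run; only the I/O size changes. A sketch of the invocation with the flags glossed (meanings follow bdevperf's usage text; treat the -C gloss as an assumption, since the trace never prints help output):

SPDK_DIR=/home/vagrant/spdk_repo/spdk
# -q 128   : keep 128 I/Os in flight per job
# -o 65536 : 64 KiB I/O size for this pass (the previous pass used -o 4096)
# -w verify: write a pattern, read it back; mismatches land in the Fail/s column
# -t 5     : five-second run
# -C       : every core may submit to every bdev (assumed gloss)
# -m 0x3   : two-core mask, matching the two reactors started above
"$SPDK_DIR"/build/examples/bdevperf --json "$SPDK_DIR"/test/bdev/bdev.json \
    -q 128 -o 65536 -w verify -t 5 -C -m 0x3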
00:20:50.471 1072.00 IOPS, 67.00 MiB/s 2500.50 IOPS, 156.28 MiB/s 3115.00 IOPS, 194.69 MiB/s 00:20:50.471 Latency(us) 00:20:50.471 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:50.471 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:20:50.471 Verification LBA range: start 0x0 length 0xbd0b 00:20:50.471 Nvme0n1 : 5.78 101.68 6.36 0.00 0.00 1176949.40 18450.90 1664816.05 00:20:50.471 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:20:50.471 Verification LBA range: start 0xbd0b length 0xbd0b 00:20:50.471 Nvme0n1 : 5.78 98.29 6.14 0.00 0.00 1234450.08 26416.05 1271196.75 00:20:50.471 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:20:50.471 Verification LBA range: start 0x0 length 0x4ff8 00:20:50.471 Nvme1n1p1 : 5.78 106.84 6.68 0.00 0.00 1107617.95 35691.91 1690627.15 00:20:50.471 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:20:50.471 Verification LBA range: start 0x4ff8 length 0x4ff8 00:20:50.471 Nvme1n1p1 : 5.88 100.42 6.28 0.00 0.00 1193811.30 85902.57 1426063.36 00:20:50.471 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:20:50.471 Verification LBA range: start 0x0 length 0x4ff7 00:20:50.471 Nvme1n1p2 : 5.87 111.34 6.96 0.00 0.00 1036785.69 54041.99 1716438.25 00:20:50.471 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:20:50.471 Verification LBA range: start 0x4ff7 length 0x4ff7 00:20:50.471 Nvme1n1p2 : 6.11 68.10 4.26 0.00 0.00 1689696.46 180677.71 2413337.99 00:20:50.471 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:20:50.471 Verification LBA range: start 0x0 length 0x8000 00:20:50.471 Nvme2n1 : 6.00 116.11 7.26 0.00 0.00 961264.49 63721.16 1742249.35 00:20:50.471 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:20:50.471 Verification LBA range: start 0x8000 length 0x8000 00:20:50.471 Nvme2n1 : 6.06 107.15 6.70 0.00 0.00 1044438.00 87919.06 1232480.10 00:20:50.471 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:20:50.471 Verification LBA range: start 0x0 length 0x8000 00:20:50.471 Nvme2n2 : 6.05 119.14 7.45 0.00 0.00 904977.97 62914.56 1768060.46 00:20:50.471 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:20:50.471 Verification LBA range: start 0x8000 length 0x8000 00:20:50.471 Nvme2n2 : 6.06 110.72 6.92 0.00 0.00 987441.72 90742.15 1226027.32 00:20:50.471 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:20:50.471 Verification LBA range: start 0x0 length 0x8000 00:20:50.471 Nvme2n3 : 6.09 137.73 8.61 0.00 0.00 764023.34 26819.35 1238932.87 00:20:50.471 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:20:50.471 Verification LBA range: start 0x8000 length 0x8000 00:20:50.471 Nvme2n3 : 6.11 121.59 7.60 0.00 0.00 878214.02 19459.15 1238932.87 00:20:50.471 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:20:50.471 Verification LBA range: start 0x0 length 0x2000 00:20:50.471 Nvme3n1 : 6.17 175.08 10.94 0.00 0.00 583956.50 297.75 1845493.76 00:20:50.471 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:20:50.471 Verification LBA range: start 0x2000 length 0x2000 00:20:50.471 Nvme3n1 : 6.12 130.00 8.13 0.00 0.00 795582.73 1978.68 1264743.98 00:20:50.471 
=================================================================================================================== 00:20:50.471 Total : 1604.19 100.26 0.00 0.00 974414.38 297.75 2413337.99 00:20:53.061 00:20:53.061 real 0m9.638s 00:20:53.061 user 0m18.092s 00:20:53.061 sys 0m0.332s 00:20:53.061 20:18:47 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:53.061 20:18:47 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:20:53.061 ************************************ 00:20:53.061 END TEST bdev_verify_big_io 00:20:53.061 ************************************ 00:20:53.061 20:18:47 blockdev_nvme_gpt -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:53.061 20:18:47 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:20:53.061 20:18:47 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:53.061 20:18:47 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:20:53.061 ************************************ 00:20:53.061 START TEST bdev_write_zeroes 00:20:53.061 ************************************ 00:20:53.061 20:18:47 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:53.061 [2024-10-01 20:18:47.961732] Starting SPDK v25.01-pre git sha1 0c2005fb5 / DPDK 24.03.0 initialization... 00:20:53.061 [2024-10-01 20:18:47.961840] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62738 ] 00:20:53.061 [2024-10-01 20:18:48.106286] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:53.318 [2024-10-01 20:18:48.297731] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:20:53.886 Running I/O for 1 seconds... 
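The one-second pass starting here swaps the workload for zero-fill requests; note the trace runs it single-core (-c 0x1 in the EAL parameters) with no -C or -m. A sketch under the same hedges as above; the "write_zeroes": true capability it relies on is visible in the bdev_get_bdevs dump near the end of this log:

SPDK_DIR=/home/vagrant/spdk_repo/spdk
# -w write_zeroes issues zero-fill commands rather than buffered writes, so the
# IOPS reported below largely reflect command rate, not host data movement
"$SPDK_DIR"/build/examples/bdevperf --json "$SPDK_DIR"/test/bdev/bdev.json \
    -q 128 -o 4096 -w write_zeroes -t 1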
00:20:55.257 65408.00 IOPS, 255.50 MiB/s 00:20:55.257 Latency(us) 00:20:55.257 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:55.257 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:20:55.257 Nvme0n1 : 1.02 9312.75 36.38 0.00 0.00 13711.53 6906.49 26214.40 00:20:55.257 Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:20:55.257 Nvme1n1p1 : 1.03 9296.38 36.31 0.00 0.00 13712.15 10838.65 26012.75 00:20:55.257 Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:20:55.257 Nvme1n1p2 : 1.03 9281.71 36.26 0.00 0.00 13706.83 10737.82 24903.68 00:20:55.257 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:20:55.257 Nvme2n1 : 1.03 9267.29 36.20 0.00 0.00 13693.66 8973.39 24298.73 00:20:55.257 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:20:55.257 Nvme2n2 : 1.03 9256.22 36.16 0.00 0.00 13675.79 8065.97 24097.08 00:20:55.257 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:20:55.257 Nvme2n3 : 1.03 9245.29 36.11 0.00 0.00 13661.30 7057.72 25105.33 00:20:55.257 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:20:55.257 Nvme3n1 : 1.03 9234.52 36.07 0.00 0.00 13655.24 6805.66 26416.05 00:20:55.257 =================================================================================================================== 00:20:55.257 Total : 64894.17 253.49 0.00 0.00 13688.07 6805.66 26416.05 00:20:56.190 ************************************ 00:20:56.190 END TEST bdev_write_zeroes 00:20:56.190 ************************************ 00:20:56.190 00:20:56.190 real 0m3.470s 00:20:56.190 user 0m3.105s 00:20:56.190 sys 0m0.236s 00:20:56.190 20:18:51 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:56.191 20:18:51 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:20:56.449 20:18:51 blockdev_nvme_gpt -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:56.449 20:18:51 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:20:56.449 20:18:51 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:56.449 20:18:51 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:20:56.449 ************************************ 00:20:56.449 START TEST bdev_json_nonenclosed 00:20:56.449 ************************************ 00:20:56.449 20:18:51 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:56.449 [2024-10-01 20:18:51.478154] Starting SPDK v25.01-pre git sha1 0c2005fb5 / DPDK 24.03.0 initialization... 
00:20:56.449 [2024-10-01 20:18:51.478281] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62792 ] 00:20:56.449 [2024-10-01 20:18:51.628183] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:56.706 [2024-10-01 20:18:51.819469] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:20:56.706 [2024-10-01 20:18:51.819559] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:20:56.706 [2024-10-01 20:18:51.819576] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:20:56.706 [2024-10-01 20:18:51.819585] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:20:56.964 00:20:56.964 real 0m0.700s 00:20:56.964 user 0m0.488s 00:20:56.964 sys 0m0.107s 00:20:56.964 20:18:52 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:56.964 20:18:52 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:20:56.964 ************************************ 00:20:56.964 END TEST bdev_json_nonenclosed 00:20:56.964 ************************************ 00:20:56.964 20:18:52 blockdev_nvme_gpt -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:56.964 20:18:52 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:20:56.964 20:18:52 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:56.964 20:18:52 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:20:56.964 ************************************ 00:20:56.964 START TEST bdev_json_nonarray 00:20:56.964 ************************************ 00:20:56.964 20:18:52 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:57.222 [2024-10-01 20:18:52.213187] Starting SPDK v25.01-pre git sha1 0c2005fb5 / DPDK 24.03.0 initialization... 00:20:57.222 [2024-10-01 20:18:52.213311] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62823 ] 00:20:57.222 [2024-10-01 20:18:52.363558] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:57.480 [2024-10-01 20:18:52.552929] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:20:57.480 [2024-10-01 20:18:52.553018] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
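Both failures traced here are deliberate negative tests of json_config_prepare_ctx: nonenclosed.json trips "not enclosed in {}" and nonarray.json trips "'subsystems' should be an array". The actual file contents never appear in this log, so the shapes below are illustrative reconstructions of inputs that would produce exactly these errors, next to a minimal valid config for contrast:

# top level is a bare array, not an object -> "not enclosed in {}" (illustrative)
cat > nonenclosed.json <<'EOF'
[ { "subsystems": [] } ]
EOF

# "subsystems" maps to an object, not an array -> "'subsystems' should be an array" (illustrative)
cat > nonarray.json <<'EOF'
{ "subsystems": { "bdev": {} } }
EOF

# minimal well-formed shape that bdevperf --json will accept
cat > good.json <<'EOF'
{ "subsystems": [ { "subsystem": "bdev", "config": [] } ] }
EOF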
00:20:57.480 [2024-10-01 20:18:52.553036] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:20:57.480 [2024-10-01 20:18:52.553045] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:20:57.740 00:20:57.740 real 0m0.693s 00:20:57.740 user 0m0.488s 00:20:57.740 sys 0m0.099s 00:20:57.740 20:18:52 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:57.740 20:18:52 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:20:57.740 ************************************ 00:20:57.740 END TEST bdev_json_nonarray 00:20:57.740 ************************************ 00:20:57.740 20:18:52 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # [[ gpt == bdev ]] 00:20:57.740 20:18:52 blockdev_nvme_gpt -- bdev/blockdev.sh@793 -- # [[ gpt == gpt ]] 00:20:57.740 20:18:52 blockdev_nvme_gpt -- bdev/blockdev.sh@794 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:20:57.740 20:18:52 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:20:57.740 20:18:52 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:57.740 20:18:52 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:20:57.740 ************************************ 00:20:57.740 START TEST bdev_gpt_uuid 00:20:57.740 ************************************ 00:20:57.740 20:18:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1125 -- # bdev_gpt_uuid 00:20:57.740 20:18:52 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@613 -- # local bdev 00:20:57.740 20:18:52 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@615 -- # start_spdk_tgt 00:20:57.740 20:18:52 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=62843 00:20:57.740 20:18:52 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:20:57.740 20:18:52 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 62843 00:20:57.740 20:18:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@831 -- # '[' -z 62843 ']' 00:20:57.740 20:18:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:57.740 20:18:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:57.740 20:18:52 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:20:57.740 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:57.740 20:18:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:57.740 20:18:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:57.740 20:18:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:20:57.999 [2024-10-01 20:18:52.961821] Starting SPDK v25.01-pre git sha1 0c2005fb5 / DPDK 24.03.0 initialization... 
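The bdev_gpt_uuid checks that follow fetch each partition bdev by its unique partition GUID and assert that the alias and driver_specific.gpt fields round-trip. Condensed into a standalone sketch, with the GUID taken verbatim from this trace (assumes spdk_tgt is up on the default RPC socket with the same bdev.json loaded):

SPDK_DIR=/home/vagrant/spdk_repo/spdk
uuid=6f89f330-603b-4116-ac73-2ca8eae53030
bdev=$("$SPDK_DIR"/scripts/rpc.py bdev_get_bdevs -b "$uuid")
# exactly one bdev may answer to that GUID...
[ "$(jq -r length <<< "$bdev")" -eq 1 ]
# ...and both the alias and the GPT unique_partition_guid must echo it back
[ "$(jq -r '.[0].aliases[0]' <<< "$bdev")" = "$uuid" ]
[ "$(jq -r '.[0].driver_specific.gpt.unique_partition_guid' <<< "$bdev")" = "$uuid" ]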
00:20:57.999 [2024-10-01 20:18:52.961957] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62843 ] 00:20:57.999 [2024-10-01 20:18:53.108491] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:58.329 [2024-10-01 20:18:53.298688] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:20:59.293 20:18:54 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:59.293 20:18:54 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@864 -- # return 0 00:20:59.293 20:18:54 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@617 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:20:59.293 20:18:54 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.293 20:18:54 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:20:59.552 Some configs were skipped because the RPC state that can call them passed over. 00:20:59.552 20:18:54 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.552 20:18:54 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@618 -- # rpc_cmd bdev_wait_for_examine 00:20:59.552 20:18:54 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.552 20:18:54 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:20:59.552 20:18:54 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.552 20:18:54 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:20:59.552 20:18:54 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.552 20:18:54 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:20:59.552 20:18:54 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.552 20:18:54 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # bdev='[ 00:20:59.552 { 00:20:59.552 "name": "Nvme1n1p1", 00:20:59.552 "aliases": [ 00:20:59.552 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:20:59.552 ], 00:20:59.552 "product_name": "GPT Disk", 00:20:59.552 "block_size": 4096, 00:20:59.552 "num_blocks": 655104, 00:20:59.552 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:20:59.552 "assigned_rate_limits": { 00:20:59.552 "rw_ios_per_sec": 0, 00:20:59.552 "rw_mbytes_per_sec": 0, 00:20:59.552 "r_mbytes_per_sec": 0, 00:20:59.552 "w_mbytes_per_sec": 0 00:20:59.552 }, 00:20:59.552 "claimed": false, 00:20:59.552 "zoned": false, 00:20:59.552 "supported_io_types": { 00:20:59.552 "read": true, 00:20:59.552 "write": true, 00:20:59.552 "unmap": true, 00:20:59.552 "flush": true, 00:20:59.552 "reset": true, 00:20:59.552 "nvme_admin": false, 00:20:59.552 "nvme_io": false, 00:20:59.552 "nvme_io_md": false, 00:20:59.552 "write_zeroes": true, 00:20:59.552 "zcopy": false, 00:20:59.552 "get_zone_info": false, 00:20:59.552 "zone_management": false, 00:20:59.552 "zone_append": false, 00:20:59.552 "compare": true, 00:20:59.552 "compare_and_write": false, 00:20:59.552 "abort": true, 00:20:59.552 "seek_hole": false, 00:20:59.552 "seek_data": false, 00:20:59.552 "copy": true, 00:20:59.552 "nvme_iov_md": false 00:20:59.552 }, 00:20:59.552 "driver_specific": { 
00:20:59.552 "gpt": { 00:20:59.552 "base_bdev": "Nvme1n1", 00:20:59.552 "offset_blocks": 256, 00:20:59.552 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:20:59.552 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:20:59.552 "partition_name": "SPDK_TEST_first" 00:20:59.552 } 00:20:59.552 } 00:20:59.552 } 00:20:59.552 ]' 00:20:59.552 20:18:54 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # jq -r length 00:20:59.552 20:18:54 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # [[ 1 == \1 ]] 00:20:59.553 20:18:54 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # jq -r '.[0].aliases[0]' 00:20:59.553 20:18:54 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:20:59.553 20:18:54 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:20:59.553 20:18:54 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:20:59.553 20:18:54 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:20:59.553 20:18:54 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.553 20:18:54 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:20:59.553 20:18:54 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.553 20:18:54 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # bdev='[ 00:20:59.553 { 00:20:59.553 "name": "Nvme1n1p2", 00:20:59.553 "aliases": [ 00:20:59.553 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:20:59.553 ], 00:20:59.553 "product_name": "GPT Disk", 00:20:59.553 "block_size": 4096, 00:20:59.553 "num_blocks": 655103, 00:20:59.553 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:20:59.553 "assigned_rate_limits": { 00:20:59.553 "rw_ios_per_sec": 0, 00:20:59.553 "rw_mbytes_per_sec": 0, 00:20:59.553 "r_mbytes_per_sec": 0, 00:20:59.553 "w_mbytes_per_sec": 0 00:20:59.553 }, 00:20:59.553 "claimed": false, 00:20:59.553 "zoned": false, 00:20:59.553 "supported_io_types": { 00:20:59.553 "read": true, 00:20:59.553 "write": true, 00:20:59.553 "unmap": true, 00:20:59.553 "flush": true, 00:20:59.553 "reset": true, 00:20:59.553 "nvme_admin": false, 00:20:59.553 "nvme_io": false, 00:20:59.553 "nvme_io_md": false, 00:20:59.553 "write_zeroes": true, 00:20:59.553 "zcopy": false, 00:20:59.553 "get_zone_info": false, 00:20:59.553 "zone_management": false, 00:20:59.553 "zone_append": false, 00:20:59.553 "compare": true, 00:20:59.553 "compare_and_write": false, 00:20:59.553 "abort": true, 00:20:59.553 "seek_hole": false, 00:20:59.553 "seek_data": false, 00:20:59.553 "copy": true, 00:20:59.553 "nvme_iov_md": false 00:20:59.553 }, 00:20:59.553 "driver_specific": { 00:20:59.553 "gpt": { 00:20:59.553 "base_bdev": "Nvme1n1", 00:20:59.553 "offset_blocks": 655360, 00:20:59.553 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:20:59.553 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:20:59.553 "partition_name": "SPDK_TEST_second" 00:20:59.553 } 00:20:59.553 } 00:20:59.553 } 00:20:59.553 ]' 00:20:59.553 20:18:54 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@626 -- # jq -r length 00:20:59.553 20:18:54 blockdev_nvme_gpt.bdev_gpt_uuid 
-- bdev/blockdev.sh@626 -- # [[ 1 == \1 ]] 00:20:59.553 20:18:54 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # jq -r '.[0].aliases[0]' 00:20:59.553 20:18:54 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:20:59.553 20:18:54 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:20:59.553 20:18:54 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:20:59.553 20:18:54 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@630 -- # killprocess 62843 00:20:59.553 20:18:54 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@950 -- # '[' -z 62843 ']' 00:20:59.553 20:18:54 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # kill -0 62843 00:20:59.553 20:18:54 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@955 -- # uname 00:20:59.553 20:18:54 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:59.553 20:18:54 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62843 00:20:59.811 killing process with pid 62843 00:20:59.811 20:18:54 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:59.811 20:18:54 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:59.811 20:18:54 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62843' 00:20:59.811 20:18:54 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@969 -- # kill 62843 00:20:59.811 20:18:54 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@974 -- # wait 62843 00:21:01.709 00:21:01.709 real 0m3.925s 00:21:01.709 user 0m3.921s 00:21:01.709 sys 0m0.454s 00:21:01.709 20:18:56 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:01.709 ************************************ 00:21:01.709 END TEST bdev_gpt_uuid 00:21:01.709 ************************************ 00:21:01.709 20:18:56 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:21:01.709 20:18:56 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # [[ gpt == crypto_sw ]] 00:21:01.709 20:18:56 blockdev_nvme_gpt -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:21:01.709 20:18:56 blockdev_nvme_gpt -- bdev/blockdev.sh@810 -- # cleanup 00:21:01.709 20:18:56 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:21:01.709 20:18:56 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:21:01.709 20:18:56 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:21:01.709 20:18:56 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:21:01.709 20:18:56 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:21:01.709 20:18:56 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:21:01.967 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:02.225 Waiting for block devices as requested 00:21:02.225 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:21:02.225 0000:00:10.0 (1b36 0010): 
uio_pci_generic -> nvme 00:21:02.225 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:21:02.482 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:21:07.746 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:21:07.746 20:19:02 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:21:07.746 20:19:02 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:21:07.746 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:21:07.746 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:21:07.746 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:21:07.746 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:21:07.746 20:19:02 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:21:07.746 00:21:07.746 real 1m4.464s 00:21:07.746 user 1m22.113s 00:21:07.746 sys 0m8.305s 00:21:07.746 20:19:02 blockdev_nvme_gpt -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:07.746 ************************************ 00:21:07.746 END TEST blockdev_nvme_gpt 00:21:07.746 20:19:02 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:21:07.746 ************************************ 00:21:07.746 20:19:02 -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:21:07.746 20:19:02 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:21:07.746 20:19:02 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:07.746 20:19:02 -- common/autotest_common.sh@10 -- # set +x 00:21:07.746 ************************************ 00:21:07.746 START TEST nvme 00:21:07.746 ************************************ 00:21:07.746 20:19:02 nvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:21:07.746 * Looking for test storage... 00:21:07.746 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:21:07.746 20:19:02 nvme -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:21:07.746 20:19:02 nvme -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:21:07.746 20:19:02 nvme -- common/autotest_common.sh@1681 -- # lcov --version 00:21:08.008 20:19:02 nvme -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:21:08.008 20:19:02 nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:08.008 20:19:02 nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:08.008 20:19:02 nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:08.008 20:19:02 nvme -- scripts/common.sh@336 -- # IFS=.-: 00:21:08.008 20:19:02 nvme -- scripts/common.sh@336 -- # read -ra ver1 00:21:08.008 20:19:02 nvme -- scripts/common.sh@337 -- # IFS=.-: 00:21:08.008 20:19:02 nvme -- scripts/common.sh@337 -- # read -ra ver2 00:21:08.008 20:19:02 nvme -- scripts/common.sh@338 -- # local 'op=<' 00:21:08.008 20:19:02 nvme -- scripts/common.sh@340 -- # ver1_l=2 00:21:08.008 20:19:02 nvme -- scripts/common.sh@341 -- # ver2_l=1 00:21:08.008 20:19:02 nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:08.008 20:19:02 nvme -- scripts/common.sh@344 -- # case "$op" in 00:21:08.008 20:19:02 nvme -- scripts/common.sh@345 -- # : 1 00:21:08.008 20:19:02 nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:08.008 20:19:02 nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:08.008 20:19:02 nvme -- scripts/common.sh@365 -- # decimal 1 00:21:08.008 20:19:02 nvme -- scripts/common.sh@353 -- # local d=1 00:21:08.008 20:19:02 nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:08.008 20:19:02 nvme -- scripts/common.sh@355 -- # echo 1 00:21:08.008 20:19:02 nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:21:08.008 20:19:02 nvme -- scripts/common.sh@366 -- # decimal 2 00:21:08.008 20:19:02 nvme -- scripts/common.sh@353 -- # local d=2 00:21:08.008 20:19:02 nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:08.008 20:19:02 nvme -- scripts/common.sh@355 -- # echo 2 00:21:08.008 20:19:02 nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:21:08.008 20:19:02 nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:08.008 20:19:02 nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:08.008 20:19:02 nvme -- scripts/common.sh@368 -- # return 0 00:21:08.008 20:19:02 nvme -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:08.008 20:19:02 nvme -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:21:08.008 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:08.008 --rc genhtml_branch_coverage=1 00:21:08.008 --rc genhtml_function_coverage=1 00:21:08.008 --rc genhtml_legend=1 00:21:08.008 --rc geninfo_all_blocks=1 00:21:08.008 --rc geninfo_unexecuted_blocks=1 00:21:08.008 00:21:08.008 ' 00:21:08.008 20:19:02 nvme -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:21:08.008 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:08.008 --rc genhtml_branch_coverage=1 00:21:08.008 --rc genhtml_function_coverage=1 00:21:08.008 --rc genhtml_legend=1 00:21:08.008 --rc geninfo_all_blocks=1 00:21:08.008 --rc geninfo_unexecuted_blocks=1 00:21:08.008 00:21:08.008 ' 00:21:08.008 20:19:02 nvme -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:21:08.008 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:08.008 --rc genhtml_branch_coverage=1 00:21:08.008 --rc genhtml_function_coverage=1 00:21:08.008 --rc genhtml_legend=1 00:21:08.008 --rc geninfo_all_blocks=1 00:21:08.008 --rc geninfo_unexecuted_blocks=1 00:21:08.008 00:21:08.008 ' 00:21:08.008 20:19:02 nvme -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:21:08.008 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:08.008 --rc genhtml_branch_coverage=1 00:21:08.008 --rc genhtml_function_coverage=1 00:21:08.008 --rc genhtml_legend=1 00:21:08.008 --rc geninfo_all_blocks=1 00:21:08.008 --rc geninfo_unexecuted_blocks=1 00:21:08.008 00:21:08.008 ' 00:21:08.008 20:19:02 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:21:08.267 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:08.833 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:21:08.833 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:21:08.833 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:21:08.834 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:21:08.834 20:19:03 nvme -- nvme/nvme.sh@79 -- # uname 00:21:08.834 20:19:03 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:21:08.834 20:19:03 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:21:08.834 20:19:03 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:21:08.834 20:19:03 nvme -- common/autotest_common.sh@1082 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:21:08.834 20:19:03 nvme -- 
common/autotest_common.sh@1068 -- # _randomize_va_space=2 00:21:08.834 20:19:03 nvme -- common/autotest_common.sh@1069 -- # echo 0 00:21:08.834 Waiting for stub to ready for secondary processes... 00:21:08.834 20:19:03 nvme -- common/autotest_common.sh@1071 -- # stubpid=63493 00:21:08.834 20:19:03 nvme -- common/autotest_common.sh@1072 -- # echo Waiting for stub to ready for secondary processes... 00:21:08.834 20:19:03 nvme -- common/autotest_common.sh@1073 -- # '[' -e /var/run/spdk_stub0 ']' 00:21:08.834 20:19:03 nvme -- common/autotest_common.sh@1075 -- # [[ -e /proc/63493 ]] 00:21:08.834 20:19:03 nvme -- common/autotest_common.sh@1076 -- # sleep 1s 00:21:08.834 20:19:03 nvme -- common/autotest_common.sh@1070 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:21:08.834 [2024-10-01 20:19:03.975483] Starting SPDK v25.01-pre git sha1 0c2005fb5 / DPDK 24.03.0 initialization... 00:21:08.834 [2024-10-01 20:19:03.975889] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:21:09.767 [2024-10-01 20:19:04.799651] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:09.767 20:19:04 nvme -- common/autotest_common.sh@1073 -- # '[' -e /var/run/spdk_stub0 ']' 00:21:09.767 20:19:04 nvme -- common/autotest_common.sh@1075 -- # [[ -e /proc/63493 ]] 00:21:09.767 20:19:04 nvme -- common/autotest_common.sh@1076 -- # sleep 1s 00:21:09.767 [2024-10-01 20:19:04.976397] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:21:09.767 [2024-10-01 20:19:04.976758] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:21:09.767 [2024-10-01 20:19:04.976768] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:21:10.025 [2024-10-01 20:19:04.989951] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:21:10.025 [2024-10-01 20:19:04.989991] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:21:10.025 [2024-10-01 20:19:04.999341] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:21:10.025 [2024-10-01 20:19:04.999631] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:21:10.026 [2024-10-01 20:19:05.001094] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:21:10.026 [2024-10-01 20:19:05.001526] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:21:10.026 [2024-10-01 20:19:05.001574] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:21:10.026 [2024-10-01 20:19:05.003545] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:21:10.026 [2024-10-01 20:19:05.003665] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:21:10.026 [2024-10-01 20:19:05.003726] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:21:10.026 [2024-10-01 20:19:05.006412] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:21:10.026 [2024-10-01 20:19:05.006652] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:21:10.026 [2024-10-01 20:19:05.006762] nvme_cuse.c: 
928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:21:10.026 [2024-10-01 20:19:05.006825] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:21:10.026 [2024-10-01 20:19:05.006901] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:21:10.958 20:19:05 nvme -- common/autotest_common.sh@1073 -- # '[' -e /var/run/spdk_stub0 ']' 00:21:10.958 done. 00:21:10.958 20:19:05 nvme -- common/autotest_common.sh@1078 -- # echo done. 00:21:10.958 20:19:05 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:21:10.958 20:19:05 nvme -- common/autotest_common.sh@1101 -- # '[' 10 -le 1 ']' 00:21:10.958 20:19:05 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:10.958 20:19:05 nvme -- common/autotest_common.sh@10 -- # set +x 00:21:10.958 ************************************ 00:21:10.958 START TEST nvme_reset 00:21:10.958 ************************************ 00:21:10.958 20:19:05 nvme.nvme_reset -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:21:10.958 Initializing NVMe Controllers 00:21:10.958 Skipping QEMU NVMe SSD at 0000:00:10.0 00:21:10.958 Skipping QEMU NVMe SSD at 0000:00:11.0 00:21:10.958 Skipping QEMU NVMe SSD at 0000:00:13.0 00:21:10.958 Skipping QEMU NVMe SSD at 0000:00:12.0 00:21:10.959 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:21:11.217 00:21:11.217 real 0m0.216s 00:21:11.217 ************************************ 00:21:11.217 END TEST nvme_reset 00:21:11.217 ************************************ 00:21:11.217 user 0m0.060s 00:21:11.217 sys 0m0.106s 00:21:11.217 20:19:06 nvme.nvme_reset -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:11.217 20:19:06 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:21:11.217 20:19:06 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:21:11.217 20:19:06 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:21:11.217 20:19:06 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:11.217 20:19:06 nvme -- common/autotest_common.sh@10 -- # set +x 00:21:11.217 ************************************ 00:21:11.217 START TEST nvme_identify 00:21:11.217 ************************************ 00:21:11.217 20:19:06 nvme.nvme_identify -- common/autotest_common.sh@1125 -- # nvme_identify 00:21:11.217 20:19:06 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:21:11.217 20:19:06 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:21:11.217 20:19:06 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:21:11.217 20:19:06 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:21:11.217 20:19:06 nvme.nvme_identify -- common/autotest_common.sh@1496 -- # bdfs=() 00:21:11.217 20:19:06 nvme.nvme_identify -- common/autotest_common.sh@1496 -- # local bdfs 00:21:11.217 20:19:06 nvme.nvme_identify -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:21:11.217 20:19:06 nvme.nvme_identify -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:21:11.217 20:19:06 nvme.nvme_identify -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:21:11.217 20:19:06 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:21:11.217 20:19:06 nvme.nvme_identify -- 
common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:21:11.217 20:19:06 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:21:11.478 ===================================================== 00:21:11.478 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:21:11.478 ===================================================== 00:21:11.478 Controller Capabilities/Features 00:21:11.478 ================================ 00:21:11.478 Vendor ID: 1b36 00:21:11.478 Subsystem Vendor ID: 1af4 00:21:11.478 Serial Number: 12340 00:21:11.478 Model Number: QEMU NVMe Ctrl 00:21:11.478 Firmware Version: 8.0.0 00:21:11.478 Recommended Arb Burst: 6 00:21:11.478 IEEE OUI Identifier: 00 54 52 00:21:11.478 Multi-path I/O 00:21:11.478 May have multiple subsystem ports: No 00:21:11.478 May have multiple controllers: No 00:21:11.478 Associated with SR-IOV VF: No 00:21:11.478 Max Data Transfer Size: 524288 00:21:11.478 Max Number of Namespaces: 256 00:21:11.478 Max Number of I/O Queues: 64 00:21:11.478 NVMe Specification Version (VS): 1.4 00:21:11.478 NVMe Specification Version (Identify): 1.4 00:21:11.478 Maximum Queue Entries: 2048 00:21:11.478 Contiguous Queues Required: Yes 00:21:11.478 Arbitration Mechanisms Supported 00:21:11.478 Weighted Round Robin: Not Supported 00:21:11.478 Vendor Specific: Not Supported 00:21:11.478 Reset Timeout: 7500 ms 00:21:11.478 Doorbell Stride: 4 bytes 00:21:11.478 NVM Subsystem Reset: Not Supported 00:21:11.478 Command Sets Supported 00:21:11.478 NVM Command Set: Supported 00:21:11.478 Boot Partition: Not Supported 00:21:11.478 Memory Page Size Minimum: 4096 bytes 00:21:11.478 Memory Page Size Maximum: 65536 bytes 00:21:11.478 Persistent Memory Region: Not Supported 00:21:11.478 Optional Asynchronous Events Supported 00:21:11.479 Namespace Attribute Notices: Supported 00:21:11.479 Firmware Activation Notices: Not Supported 00:21:11.479 ANA Change Notices: Not Supported 00:21:11.479 PLE Aggregate Log Change Notices: Not Supported 00:21:11.479 LBA Status Info Alert Notices: Not Supported 00:21:11.479 EGE Aggregate Log Change Notices: Not Supported 00:21:11.479 Normal NVM Subsystem Shutdown event: Not Supported 00:21:11.479 Zone Descriptor Change Notices: Not Supported 00:21:11.479 Discovery Log Change Notices: Not Supported 00:21:11.479 Controller Attributes 00:21:11.479 128-bit Host Identifier: Not Supported 00:21:11.479 Non-Operational Permissive Mode: Not Supported 00:21:11.479 NVM Sets: Not Supported 00:21:11.479 Read Recovery Levels: Not Supported 00:21:11.479 Endurance Groups: Not Supported 00:21:11.479 Predictable Latency Mode: Not Supported 00:21:11.479 Traffic Based Keep ALive: Not Supported 00:21:11.479 Namespace Granularity: Not Supported 00:21:11.479 SQ Associations: Not Supported 00:21:11.479 UUID List: Not Supported 00:21:11.479 Multi-Domain Subsystem: Not Supported 00:21:11.479 Fixed Capacity Management: Not Supported 00:21:11.479 Variable Capacity Management: Not Supported 00:21:11.479 Delete Endurance Group: Not Supported 00:21:11.479 Delete NVM Set: Not Supported 00:21:11.479 Extended LBA Formats Supported: Supported 00:21:11.479 Flexible Data Placement Supported: Not Supported 00:21:11.479 00:21:11.479 Controller Memory Buffer Support 00:21:11.479 ================================ 00:21:11.479 Supported: No 00:21:11.479 00:21:11.479 Persistent Memory Region Support 00:21:11.479 ================================ 00:21:11.479 Supported: No 00:21:11.479 00:21:11.479 Admin 
Command Set Attributes 00:21:11.479 ============================ 00:21:11.479 Security Send/Receive: Not Supported 00:21:11.479 Format NVM: Supported 00:21:11.479 Firmware Activate/Download: Not Supported 00:21:11.479 Namespace Management: Supported 00:21:11.479 Device Self-Test: Not Supported 00:21:11.479 Directives: Supported 00:21:11.479 NVMe-MI: Not Supported 00:21:11.479 Virtualization Management: Not Supported 00:21:11.479 Doorbell Buffer Config: Supported 00:21:11.479 Get LBA Status Capability: Not Supported 00:21:11.479 Command & Feature Lockdown Capability: Not Supported 00:21:11.479 Abort Command Limit: 4 00:21:11.479 Async Event Request Limit: 4 00:21:11.479 Number of Firmware Slots: N/A 00:21:11.479 Firmware Slot 1 Read-Only: N/A 00:21:11.479 Firmware Activation Without Reset: N/A 00:21:11.479 Multiple Update Detection Support: N/A 00:21:11.479 Firmware Update Granularity: No Information Provided 00:21:11.479 Per-Namespace SMART Log: Yes 00:21:11.479 Asymmetric Namespace Access Log Page: Not Supported 00:21:11.479 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:21:11.479 Command Effects Log Page: Supported 00:21:11.479 Get Log Page Extended Data: Supported 00:21:11.479 Telemetry Log Pages: Not Supported 00:21:11.479 Persistent Event Log Pages: Not Supported 00:21:11.479 Supported Log Pages Log Page: May Support 00:21:11.479 Commands Supported & Effects Log Page: Not Supported 00:21:11.479 Feature Identifiers & Effects Log Page:May Support 00:21:11.479 NVMe-MI Commands & Effects Log Page: May Support 00:21:11.479 Data Area 4 for Telemetry Log: Not Supported 00:21:11.479 Error Log Page Entries Supported: 1 00:21:11.479 Keep Alive: Not Supported 00:21:11.479 00:21:11.479 NVM Command Set Attributes 00:21:11.479 ========================== 00:21:11.479 Submission Queue Entry Size 00:21:11.479 Max: 64 00:21:11.479 Min: 64 00:21:11.479 Completion Queue Entry Size 00:21:11.479 Max: 16 00:21:11.479 Min: 16 00:21:11.479 Number of Namespaces: 256 00:21:11.479 Compare Command: Supported 00:21:11.479 Write Uncorrectable Command: Not Supported 00:21:11.479 Dataset Management Command: Supported 00:21:11.479 Write Zeroes Command: Supported 00:21:11.479 Set Features Save Field: Supported 00:21:11.479 Reservations: Not Supported 00:21:11.479 Timestamp: Supported 00:21:11.479 Copy: Supported 00:21:11.479 Volatile Write Cache: Present 00:21:11.479 Atomic Write Unit (Normal): 1 00:21:11.479 Atomic Write Unit (PFail): 1 00:21:11.479 Atomic Compare & Write Unit: 1 00:21:11.479 Fused Compare & Write: Not Supported 00:21:11.479 Scatter-Gather List 00:21:11.479 SGL Command Set: Supported 00:21:11.479 SGL Keyed: Not Supported 00:21:11.479 SGL Bit Bucket Descriptor: Not Supported 00:21:11.479 SGL Metadata Pointer: Not Supported 00:21:11.479 Oversized SGL: Not Supported 00:21:11.479 SGL Metadata Address: Not Supported 00:21:11.479 SGL Offset: Not Supported 00:21:11.479 Transport SGL Data Block: Not Supported 00:21:11.479 Replay Protected Memory Block: Not Supported 00:21:11.479 00:21:11.479 Firmware Slot Information 00:21:11.479 ========================= 00:21:11.479 Active slot: 1 00:21:11.479 Slot 1 Firmware Revision: 1.0 00:21:11.479 00:21:11.479 00:21:11.479 Commands Supported and Effects 00:21:11.479 ============================== 00:21:11.479 Admin Commands 00:21:11.479 -------------- 00:21:11.479 Delete I/O Submission Queue (00h): Supported 00:21:11.479 Create I/O Submission Queue (01h): Supported 00:21:11.479 Get Log Page (02h): Supported 00:21:11.479 Delete I/O Completion Queue (04h): Supported 
00:21:11.479 Create I/O Completion Queue (05h): Supported 00:21:11.479 Identify (06h): Supported 00:21:11.479 Abort (08h): Supported 00:21:11.479 Set Features (09h): Supported 00:21:11.479 Get Features (0Ah): Supported 00:21:11.479 Asynchronous Event Request (0Ch): Supported 00:21:11.479 Namespace Attachment (15h): Supported NS-Inventory-Change 00:21:11.479 Directive Send (19h): Supported 00:21:11.479 Directive Receive (1Ah): Supported 00:21:11.479 Virtualization Management (1Ch): Supported 00:21:11.479 Doorbell Buffer Config (7Ch): Supported 00:21:11.479 Format NVM (80h): Supported LBA-Change 00:21:11.479 I/O Commands 00:21:11.479 ------------ 00:21:11.479 Flush (00h): Supported LBA-Change 00:21:11.479 Write (01h): Supported LBA-Change 00:21:11.479 Read (02h): Supported 00:21:11.479 Compare (05h): Supported 00:21:11.479 Write Zeroes (08h): Supported LBA-Change 00:21:11.479 Dataset Management (09h): Supported LBA-Change 00:21:11.479 Unknown (0Ch): Supported 00:21:11.479 Unknown (12h): Supported 00:21:11.479 Copy (19h): Supported LBA-Change 00:21:11.479 Unknown (1Dh): Supported LBA-Change 00:21:11.479 00:21:11.479 Error Log 00:21:11.479 ========= 00:21:11.479 00:21:11.479 Arbitration 00:21:11.479 =========== 00:21:11.479 Arbitration Burst: no limit 00:21:11.479 00:21:11.479 Power Management 00:21:11.479 ================ 00:21:11.479 Number of Power States: 1 00:21:11.479 Current Power State: Power State #0 00:21:11.479 Power State #0: 00:21:11.479 Max Power: 25.00 W 00:21:11.479 Non-Operational State: Operational 00:21:11.479 Entry Latency: 16 microseconds 00:21:11.479 Exit Latency: 4 microseconds 00:21:11.479 Relative Read Throughput: 0 00:21:11.479 Relative Read Latency: 0 00:21:11.479 Relative Write Throughput: 0 00:21:11.479 Relative Write Latency: 0 00:21:11.479 Idle Power: Not Reported 00:21:11.479 Active Power: Not Reported 00:21:11.479 Non-Operational Permissive Mode: Not Supported 00:21:11.479 00:21:11.479 Health Information 00:21:11.479 ================== 00:21:11.479 Critical Warnings: 00:21:11.479 Available Spare Space: OK 00:21:11.479 Temperature: OK 00:21:11.479 Device Reliability: OK 00:21:11.479 Read Only: No 00:21:11.479 Volatile Memory Backup: OK 00:21:11.479 Current Temperature: 323 Kelvin (50 Celsius) 00:21:11.479 Temperature Threshold: 343 Kelvin (70 Celsius) 00:21:11.479 Available Spare: 0% 00:21:11.479 Available Spare Threshold: 0% 00:21:11.479 Life Percentage Used: 0% 00:21:11.479 Data Units Read: 707 00:21:11.479 Data Units Written: 635 00:21:11.479 Host Read Commands: 39865 00:21:11.479 Host Write Commands: 39651 00:21:11.479 Controller Busy Time: 0 minutes 00:21:11.479 Power Cycles: 0 00:21:11.479 Power On Hours: 0 hours 00:21:11.479 Unsafe Shutdowns: 0 00:21:11.479 Unrecoverable Media Errors: 0 00:21:11.479 Lifetime Error Log Entries: 0 00:21:11.479 Warning Temperature Time: 0 minutes 00:21:11.479 Critical Temperature Time: 0 minutes 00:21:11.479 00:21:11.479 Number of Queues 00:21:11.479 ================ 00:21:11.479 Number of I/O Submission Queues: 64 00:21:11.479 Number of I/O Completion Queues: 64 00:21:11.479 00:21:11.479 ZNS Specific Controller Data 00:21:11.479 ============================ 00:21:11.479 Zone Append Size Limit: 0 00:21:11.479 00:21:11.479 00:21:11.479 Active Namespaces 00:21:11.479 ================= 00:21:11.479 Namespace ID:1 00:21:11.479 Error Recovery Timeout: Unlimited 00:21:11.479 Command Set Identifier: NVM (00h) 00:21:11.479 Deallocate: Supported 00:21:11.479 Deallocated/Unwritten Error: Supported 00:21:11.480 Deallocated Read Value: 
All 0x00 00:21:11.480 Deallocate in Write Zeroes: Not Supported 00:21:11.480 Deallocated Guard Field: 0xFFFF 00:21:11.480 Flush: Supported 00:21:11.480 Reservation: Not Supported 00:21:11.480 Metadata Transferred as: Separate Metadata Buffer 00:21:11.480 Namespace Sharing Capabilities: Private 00:21:11.480 Size (in LBAs): 1548666 (5GiB) 00:21:11.480 Capacity (in LBAs): 1548666 (5GiB) 00:21:11.480 Utilization (in LBAs): 1548666 (5GiB) 00:21:11.480 Thin Provisioning: Not Supported 00:21:11.480 Per-NS Atomic Units: No 00:21:11.480 Maximum Single Source Range Length: 128 00:21:11.480 Maximum Copy Length: 128 00:21:11.480 Maximum Source Range Count: 128 00:21:11.480 NGUID/EUI64 Never Reused: No 00:21:11.480 Namespace Write Protected: No 00:21:11.480 Number of LBA Formats: 8 00:21:11.480 Current LBA Format: LBA Format #07 00:21:11.480 LBA Format #00: Data Size: 512 Metadata Size: 0 00:21:11.480 LBA Format #01: Data Size: 512 Metadata Size: 8 00:21:11.480 LBA Format #02: Data Size: 512 Metadata Size: 16 00:21:11.480 LBA Format #03: Data Size: 512 Metadata Size: 64 00:21:11.480 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:21:11.480 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:21:11.480 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:21:11.480 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:21:11.480 00:21:11.480 NVM Specific Namespace Data 00:21:11.480 =========================== 00:21:11.480 Logical Block Storage Tag Mask: 0 00:21:11.480 Protection Information Capabilities: 00:21:11.480 16b Guard Protection Information Storage Tag Support: No 00:21:11.480 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:21:11.480 Storage Tag Check Read Support: No 00:21:11.480 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:21:11.480 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:21:11.480 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:21:11.480 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:21:11.480 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:21:11.480 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:21:11.480 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:21:11.480 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:21:11.480 ===================================================== 00:21:11.480 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:21:11.480 ===================================================== 00:21:11.480 Controller Capabilities/Features 00:21:11.480 ================================ 00:21:11.480 Vendor ID: 1b36 00:21:11.480 Subsystem Vendor ID: 1af4 00:21:11.480 Serial Number: 12341 00:21:11.480 Model Number: QEMU NVMe Ctrl 00:21:11.480 Firmware Version: 8.0.0 00:21:11.480 Recommended Arb Burst: 6 00:21:11.480 IEEE OUI Identifier: 00 54 52 00:21:11.480 Multi-path I/O 00:21:11.480 May have multiple subsystem ports: No 00:21:11.480 May have multiple controllers: No 00:21:11.480 Associated with SR-IOV VF: No 00:21:11.480 Max Data Transfer Size: 524288 00:21:11.480 Max Number of Namespaces: 256 00:21:11.480 Max Number of I/O Queues: 64 00:21:11.480 NVMe Specification Version (VS): 1.4 00:21:11.480 NVMe Specification Version (Identify): 1.4 00:21:11.480 Maximum Queue Entries: 2048 
00:21:11.480 Contiguous Queues Required: Yes 00:21:11.480 Arbitration Mechanisms Supported 00:21:11.480 Weighted Round Robin: Not Supported 00:21:11.480 Vendor Specific: Not Supported 00:21:11.480 Reset Timeout: 7500 ms 00:21:11.480 Doorbell Stride: 4 bytes 00:21:11.480 NVM Subsystem Reset: Not Supported 00:21:11.480 Command Sets Supported 00:21:11.480 NVM Command Set: Supported 00:21:11.480 Boot Partition: Not Supported 00:21:11.480 Memory Page Size Minimum: 4096 bytes 00:21:11.480 Memory Page Size Maximum: 65536 bytes 00:21:11.480 Persistent Memory Region: Not Supported 00:21:11.480 Optional Asynchronous Events Supported 00:21:11.480 Namespace Attribute Notices: Supported 00:21:11.480 Firmware Activation Notices: Not Supported 00:21:11.480 ANA Change Notices: Not Supported 00:21:11.480 PLE Aggregate Log Change Notices: Not Supported 00:21:11.480 LBA Status Info Alert Notices: Not Supported 00:21:11.480 EGE Aggregate Log Change Notices: Not Supported 00:21:11.480 Normal NVM Subsystem Shutdown event: Not Supported 00:21:11.480 Zone Descriptor Change Notices: Not Supported 00:21:11.480 Discovery Log Change Notices: Not Supported 00:21:11.480 Controller Attributes 00:21:11.480 128-bit Host Identifier: Not Supported 00:21:11.480 Non-Operational Permissive Mode: Not Supported 00:21:11.480 NVM Sets: Not Supported 00:21:11.480 Read Recovery Levels: Not Supported 00:21:11.480 Endurance Groups: Not Supported 00:21:11.480 Predictable Latency Mode: Not Supported 00:21:11.480 Traffic Based Keep ALive: Not Supported 00:21:11.480 Namespace Granularity: Not Supported 00:21:11.480 SQ Associations: Not Supported 00:21:11.480 UUID List: Not Supported 00:21:11.480 Multi-Domain Subsystem: Not Supported 00:21:11.480 Fixed Capacity Management: Not Supported 00:21:11.480 Variable Capacity Management: Not Supported 00:21:11.480 Delete Endurance Group: Not Supported 00:21:11.480 Delete NVM Set: Not Supported 00:21:11.480 Extended LBA Formats Supported: Supported 00:21:11.480 Flexible Data Placement Supported: Not Supported 00:21:11.480 00:21:11.480 Controller Memory Buffer Support 00:21:11.480 ================================ 00:21:11.480 Supported: No 00:21:11.480 00:21:11.480 Persistent Memory Region Support 00:21:11.480 ================================ 00:21:11.480 Supported: No 00:21:11.480 00:21:11.480 Admin Command Set Attributes 00:21:11.480 ============================ 00:21:11.480 Security Send/Receive: Not Supported 00:21:11.480 Format NVM: Supported 00:21:11.480 Firmware Activate/Download: Not Supported 00:21:11.480 Namespace Management: Supported 00:21:11.480 Device Self-Test: Not Supported 00:21:11.480 Directives: Supported 00:21:11.480 NVMe-MI: Not Supported 00:21:11.480 Virtualization Management: Not Supported 00:21:11.480 Doorbell Buffer Config: Supported 00:21:11.480 Get LBA Status Capability: Not Supported 00:21:11.480 Command & Feature Lockdown Capability: Not Supported 00:21:11.480 Abort Command Limit: 4 00:21:11.480 Async Event Request Limit: 4 00:21:11.480 Number of Firmware Slots: N/A 00:21:11.480 Firmware Slot 1 Read-Only: N/A 00:21:11.480 Firmware Activation Without Reset: N/A 00:21:11.480 Multiple Update Detection Support: N/A 00:21:11.480 Firmware Update Granularity: No Information Provided 00:21:11.480 Per-Namespace SMART Log: Yes 00:21:11.480 Asymmetric Namespace Access Log Page: Not Supported 00:21:11.480 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:21:11.480 Command Effects Log Page: Supported 00:21:11.480 Get Log Page Extended Data: Supported 00:21:11.480 Telemetry Log Pages: Not 
Supported 00:21:11.480 Persistent Event Log Pages: Not Supported 00:21:11.480 Supported Log Pages Log Page: May Support 00:21:11.480 Commands Supported & Effects Log Page: Not Supported 00:21:11.480 Feature Identifiers & Effects Log Page:May Support 00:21:11.480 NVMe-MI Commands & Effects Log Page: May Support 00:21:11.480 Data Area 4 for Telemetry Log: Not Supported 00:21:11.480 Error Log Page Entries Supported: 1 00:21:11.480 Keep Alive: Not Supported 00:21:11.480 00:21:11.480 NVM Command Set Attributes 00:21:11.480 ========================== 00:21:11.480 Submission Queue Entry Size 00:21:11.480 Max: 64 00:21:11.480 Min: 64 00:21:11.480 Completion Queue Entry Size 00:21:11.480 Max: 16 00:21:11.480 Min: 16 00:21:11.480 Number of Namespaces: 256 00:21:11.480 Compare Command: Supported 00:21:11.480 Write Uncorrectable Command: Not Supported 00:21:11.480 Dataset Management Command: Supported 00:21:11.480 Write Zeroes Command: Supported 00:21:11.480 Set Features Save Field: Supported 00:21:11.480 Reservations: Not Supported 00:21:11.480 Timestamp: Supported 00:21:11.480 Copy: Supported 00:21:11.480 Volatile Write Cache: Present 00:21:11.480 Atomic Write Unit (Normal): 1 00:21:11.480 Atomic Write Unit (PFail): 1 00:21:11.480 Atomic Compare & Write Unit: 1 00:21:11.480 Fused Compare & Write: Not Supported 00:21:11.480 Scatter-Gather List 00:21:11.480 SGL Command Set: Supported 00:21:11.480 SGL Keyed: Not Supported 00:21:11.480 SGL Bit Bucket Descriptor: Not Supported 00:21:11.480 SGL Metadata Pointer: Not Supported 00:21:11.480 Oversized SGL: Not Supported 00:21:11.480 SGL Metadata Address: Not Supported 00:21:11.480 SGL Offset: Not Supported 00:21:11.480 Transport SGL Data Block: Not Supported 00:21:11.480 Replay Protected Memory Block: Not Supported 00:21:11.480 00:21:11.480 Firmware Slot Information 00:21:11.480 ========================= 00:21:11.480 Active slot: 1 00:21:11.480 Slot 1 Firmware Revision: 1.0 00:21:11.480 00:21:11.480 00:21:11.480 Commands Supported and Effects 00:21:11.480 ============================== 00:21:11.480 Admin Commands 00:21:11.480 -------------- 00:21:11.480 Delete I/O Submission Queue (00h): Supported 00:21:11.480 Create I/O Submission Queue (01h): Supported 00:21:11.480 Get Log Page (02h): Supported 00:21:11.480 Delete I/O Completion Queue (04h): Supported 00:21:11.480 Create I/O Completion Queue (05h): Supported 00:21:11.480 Identify (06h): Supported 00:21:11.480 Abort (08h): Supported 00:21:11.480 Set Features (09h): Supported 00:21:11.480 Get Features (0Ah): Supported 00:21:11.480 Asynchronous Event Request (0Ch): Supported 00:21:11.480 Namespace Attachment (15h): Supported NS-Inventory-Change 00:21:11.480 Directive Send (19h): Supported 00:21:11.480 Directive Receive (1Ah): Supported 00:21:11.480 Virtualization Management (1Ch): Supported 00:21:11.480 Doorbell Buffer Config (7Ch): Supported 00:21:11.480 Format NVM (80h): Supported LBA-Change 00:21:11.480 I/O Commands 00:21:11.480 ------------ 00:21:11.480 Flush (00h): Supported LBA-Change 00:21:11.480 Write (01h): Supported LBA-Change 00:21:11.480 Read (02h): Supported 00:21:11.480 Compare (05h): Supported 00:21:11.480 Write Zeroes (08h): Supported LBA-Change 00:21:11.480 Dataset Management (09h): Supported LBA-Change 00:21:11.480 Unknown (0Ch): Supported 00:21:11.480 Unknown (12h): Supported 00:21:11.480 Copy (19h): Supported LBA-Change 00:21:11.480 Unknown (1Dh): Supported LBA-Change 00:21:11.480 00:21:11.480 Error Log 00:21:11.480 ========= 00:21:11.480 00:21:11.480 Arbitration 00:21:11.480 =========== 
00:21:11.480 Arbitration Burst: no limit 00:21:11.480 00:21:11.480 Power Management 00:21:11.480 ================ 00:21:11.480 Number of Power States: 1 00:21:11.481 Current Power State: Power State #0 00:21:11.481 Power State #0: 00:21:11.481 Max Power: 25.00 W 00:21:11.481 Non-Operational State: Operational 00:21:11.481 Entry Latency: 16 microseconds 00:21:11.481 Exit Latency: 4 microseconds 00:21:11.481 Relative Read Throughput: 0 00:21:11.481 Relative Read Latency: 0 00:21:11.481 Relative Write Throughput: 0 00:21:11.481 Relative Write Latency: 0 00:21:11.481 Idle Power: Not Reported 00:21:11.481 Active Power: Not Reported 00:21:11.481 Non-Operational Permissive Mode: Not Supported 00:21:11.481 00:21:11.481 Health Information 00:21:11.481 ================== 00:21:11.481 Critical Warnings: 00:21:11.481 Available Spare Space: OK 00:21:11.481 Temperature: OK 00:21:11.481 Device Reliability: OK 00:21:11.481 Read Only: No 00:21:11.481 Volatile Memory Backup: OK 00:21:11.481 Current Temperature: 323 Kelvin (50 Celsius) 00:21:11.481 Temperature Threshold: 343 Kelvin (70 Celsius) 00:21:11.481 Available Spare: 0% 00:21:11.481 Available Spare Threshold: 0% 00:21:11.481 Life Percentage Used: 0% 00:21:11.481 Data Units Read: 1052 00:21:11.481 Data Units Written: 916 00:21:11.481 Host Read Commands: 59002 00:21:11.481 Host Write Commands: 57750 00:21:11.481 Controller Busy Time: 0 minutes 00:21:11.481 Power Cycles: 0 00:21:11.481 Power On Hours: 0 hours 00:21:11.481 Unsafe Shutdowns: 0 00:21:11.481 Unrecoverable Media Errors: 0 00:21:11.481 Lifetime Error Log Entries: 0 00:21:11.481 Warning Temperature Time: 0 minutes 00:21:11.481 Critical Temperature Time: 0 minutes 00:21:11.481 00:21:11.481 Number of Queues 00:21:11.481 ================ 00:21:11.481 Number of I/O Submission Queues: 64 00:21:11.481 Number of I/O Completion Queues: 64 00:21:11.481 00:21:11.481 ZNS Specific Controller Data 00:21:11.481 ============================ 00:21:11.481 Zone Append Size Limit: 0 00:21:11.481 00:21:11.481 00:21:11.481 Active Namespaces 00:21:11.481 ================= 00:21:11.481 Namespace ID:1 00:21:11.481 Error Recovery Timeout: Unlimited 00:21:11.481 Command Set Identifier: NVM (00h) 00:21:11.481 Deallocate: Supported 00:21:11.481 Deallocated/Unwritten Error: Supported 00:21:11.481 Deallocated Read Value: All 0x00 00:21:11.481 Deallocate in Write Zeroes: Not Supported 00:21:11.481 Deallocated Guard Field: 0xFFFF 00:21:11.481 Flush: Supported 00:21:11.481 Reservation: Not Supported 00:21:11.481 Namespace Sharing Capabilities: Private 00:21:11.481 Size (in LBAs): 1310720 (5GiB) 00:21:11.481 Capacity (in LBAs): 1310720 (5GiB) 00:21:11.481 Utilization (in LBAs): 1310720 (5GiB) 00:21:11.481 Thin Provisioning: Not Supported 00:21:11.481 Per-NS Atomic Units: No 00:21:11.481 Maximum Single Source Range Length: 128 00:21:11.481 Maximum Copy Length: 128 00:21:11.481 Maximum Source Range Count: 128 00:21:11.481 NGUID/EUI64 Never Reused: No 00:21:11.481 Namespace Write Protected: No 00:21:11.481 Number of LBA Formats: 8 00:21:11.481 Current LBA Format: LBA Format #04 00:21:11.481 LBA Format #00: Data Size: 512 Metadata Size: 0 00:21:11.481 LBA Format #01: Data Size: 512 Metadata Size: 8 00:21:11.481 LBA Format #02: Data Size: 512 Metadata Size: 16 00:21:11.481 LBA Format #03: Data Size: 512 Metadata Size: 64 00:21:11.481 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:21:11.481 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:21:11.481 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:21:11.481 LBA Format #07: 
Data Size: 4096 Metadata Size: 64 00:21:11.481 00:21:11.481 NVM Specific Namespace Data 00:21:11.481 =========================== 00:21:11.481 Logical Block Storage Tag Mask: 0 00:21:11.481 Protection Information Capabilities: 00:21:11.481 16b Guard Protection Information Storage Tag Support: No 00:21:11.481 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:21:11.481 Storage Tag Check Read Support: No 00:21:11.481 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:21:11.481 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:21:11.481 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:21:11.481 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:21:11.481 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:21:11.481 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:21:11.481 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:21:11.481 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:21:11.481 ===================================================== 00:21:11.481 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:21:11.481 ===================================================== 00:21:11.481 Controller Capabilities/Features 00:21:11.481 ================================ 00:21:11.481 Vendor ID: 1b36 00:21:11.481 Subsystem Vendor ID: 1af4 00:21:11.481 Serial Number: 12343 00:21:11.481 Model Number: QEMU NVMe Ctrl 00:21:11.481 Firmware Version: 8.0.0 00:21:11.481 Recommended Arb Burst: 6 00:21:11.481 IEEE OUI Identifier: 00 54 52 00:21:11.481 Multi-path I/O 00:21:11.481 May have multiple subsystem ports: No 00:21:11.481 May have multiple controllers: Yes 00:21:11.481 Associated with SR-IOV VF: No 00:21:11.481 Max Data Transfer Size: 524288 00:21:11.481 Max Number of Namespaces: 256 00:21:11.481 Max Number of I/O Queues: 64 00:21:11.481 NVMe Specification Version (VS): 1.4 00:21:11.481 NVMe Specification Version (Identify): 1.4 00:21:11.481 Maximum Queue Entries: 2048 00:21:11.481 Contiguous Queues Required: Yes 00:21:11.481 Arbitration Mechanisms Supported 00:21:11.481 Weighted Round Robin: Not Supported 00:21:11.481 Vendor Specific: Not Supported 00:21:11.481 Reset Timeout: 7500 ms 00:21:11.481 Doorbell Stride: 4 bytes 00:21:11.481 NVM Subsystem Reset: Not Supported 00:21:11.481 Command Sets Supported 00:21:11.481 NVM Command Set: Supported 00:21:11.481 Boot Partition: Not Supported 00:21:11.481 Memory Page Size Minimum: 4096 bytes 00:21:11.481 Memory Page Size Maximum: 65536 bytes 00:21:11.481 Persistent Memory Region: Not Supported 00:21:11.481 Optional Asynchronous Events Supported 00:21:11.481 Namespace Attribute Notices: Supported 00:21:11.481 Firmware Activation Notices: Not Supported 00:21:11.481 ANA Change Notices: Not Supported 00:21:11.481 PLE Aggregate Log Change Notices: Not Supported 00:21:11.481 LBA Status Info Alert Notices: Not Supported 00:21:11.481 EGE Aggregate Log Change Notices: Not Supported 00:21:11.481 Normal NVM Subsystem Shutdown event: Not Supported 00:21:11.481 Zone Descriptor Change Notices: Not Supported 00:21:11.481 Discovery Log Change Notices: Not Supported 00:21:11.481 Controller Attributes 00:21:11.481 128-bit Host Identifier: Not Supported 00:21:11.481 Non-Operational Permissive Mode: Not Supported 
00:21:11.481 NVM Sets: Not Supported 00:21:11.481 Read Recovery Levels: Not Supported 00:21:11.481 Endurance Groups: Supported 00:21:11.481 Predictable Latency Mode: Not Supported 00:21:11.481 Traffic Based Keep ALive: Not Supported 00:21:11.481 Namespace Granularity: Not Supported 00:21:11.481 SQ Associations: Not Supported 00:21:11.481 UUID List: Not Supported 00:21:11.481 Multi-Domain Subsystem: Not Supported 00:21:11.481 Fixed Capacity Management: Not Supported 00:21:11.481 Variable Capacity Management: Not Supported 00:21:11.481 Delete Endurance Group: Not Supported 00:21:11.481 Delete NVM Set: Not Supported 00:21:11.481 Extended LBA Formats Supported: Supported 00:21:11.481 Flexible Data Placement Supported: Supported 00:21:11.481 00:21:11.481 Controller Memory Buffer Support 00:21:11.481 ================================ 00:21:11.481 Supported: No 00:21:11.481 00:21:11.481 Persistent Memory Region Support 00:21:11.481 ================================ 00:21:11.481 Supported: No 00:21:11.481 00:21:11.481 Admin Command Set Attributes 00:21:11.481 ============================ 00:21:11.481 Security Send/Receive: Not Supported 00:21:11.481 Format NVM: Supported 00:21:11.481 Firmware Activate/Download: Not Supported 00:21:11.481 Namespace Management: Supported 00:21:11.481 Device Self-Test: Not Supported 00:21:11.481 Directives: Supported 00:21:11.481 NVMe-MI: Not Supported 00:21:11.481 Virtualization Management: Not Supported 00:21:11.481 Doorbell Buffer Config: Supported 00:21:11.481 Get LBA Status Capability: Not Supported 00:21:11.481 Command & Feature Lockdown Capability: Not Supported 00:21:11.481 Abort Command Limit: 4 00:21:11.481 Async Event Request Limit: 4 00:21:11.481 Number of Firmware Slots: N/A 00:21:11.481 Firmware Slot 1 Read-Only: N/A 00:21:11.481 Firmware Activation Without Reset: N/A 00:21:11.481 Multiple Update Detection Support: N/A 00:21:11.481 Firmware Update Granularity: No Information Provided 00:21:11.481 Per-Namespace SMART Log: Yes 00:21:11.481 Asymmetric Namespace Access Log Page: Not Supported 00:21:11.481 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:21:11.481 Command Effects Log Page: Supported 00:21:11.481 Get Log Page Extended Data: Supported 00:21:11.481 Telemetry Log Pages: Not Supported 00:21:11.481 Persistent Event Log Pages: Not Supported 00:21:11.481 Supported Log Pages Log Page: May Support 00:21:11.481 Commands Supported & Effects Log Page: Not Supported 00:21:11.481 Feature Identifiers & Effects Log Page:May Support 00:21:11.481 NVMe-MI Commands & Effects Log Page: May Support 00:21:11.481 Data Area 4 for Telemetry Log: Not Supported 00:21:11.481 Error Log Page Entries Supported: 1 00:21:11.481 Keep Alive: Not Supported 00:21:11.481 00:21:11.481 NVM Command Set Attributes 00:21:11.481 ========================== 00:21:11.481 Submission Queue Entry Size 00:21:11.481 Max: 64 00:21:11.481 Min: 64 00:21:11.481 Completion Queue Entry Size 00:21:11.481 Max: 16 00:21:11.481 Min: 16 00:21:11.481 Number of Namespaces: 256 00:21:11.481 Compare Command: Supported 00:21:11.481 Write Uncorrectable Command: Not Supported 00:21:11.481 Dataset Management Command: Supported 00:21:11.481 Write Zeroes Command: Supported 00:21:11.481 Set Features Save Field: Supported 00:21:11.481 Reservations: Not Supported 00:21:11.481 Timestamp: Supported 00:21:11.481 Copy: Supported 00:21:11.481 Volatile Write Cache: Present 00:21:11.481 Atomic Write Unit (Normal): 1 00:21:11.481 Atomic Write Unit (PFail): 1 00:21:11.481 Atomic Compare & Write Unit: 1 00:21:11.481 Fused 
Compare & Write: Not Supported 00:21:11.481 Scatter-Gather List 00:21:11.481 SGL Command Set: Supported 00:21:11.481 SGL Keyed: Not Supported 00:21:11.481 SGL Bit Bucket Descriptor: Not Supported 00:21:11.481 SGL Metadata Pointer: Not Supported 00:21:11.481 Oversized SGL: Not Supported 00:21:11.481 SGL Metadata Address: Not Supported 00:21:11.481 SGL Offset: Not Supported 00:21:11.481 Transport SGL Data Block: Not Supported 00:21:11.481 Replay Protected Memory Block: Not Supported 00:21:11.481 00:21:11.481 Firmware Slot Information 00:21:11.481 ========================= 00:21:11.481 Active slot: 1 00:21:11.481 Slot 1 Firmware Revision: 1.0 00:21:11.481 00:21:11.481 00:21:11.481 Commands Supported and Effects 00:21:11.481 ============================== 00:21:11.481 Admin Commands 00:21:11.481 -------------- 00:21:11.481 Delete I/O Submission Queue (00h): Supported 00:21:11.481 Create I/O Submission Queue (01h): Supported 00:21:11.481 Get Log Page (02h): Supported 00:21:11.481 Delete I/O Completion Queue (04h): Supported 00:21:11.481 Create I/O Completion Queue (05h): Supported 00:21:11.481 Identify (06h): Supported 00:21:11.481 Abort (08h): Supported 00:21:11.481 Set Features (09h): Supported 00:21:11.481 Get Features (0Ah): Supported 00:21:11.481 Asynchronous Event Request (0Ch): Supported 00:21:11.481 Namespace Attachment (15h): Supported NS-Inventory-Change 00:21:11.481 Directive Send (19h): Supported 00:21:11.481 Directive Receive (1Ah): Supported 00:21:11.481 Virtualization Management (1Ch): Supported 00:21:11.481 Doorbell Buffer Config (7Ch): Supported 00:21:11.481 Format NVM (80h): Supported LBA-Change 00:21:11.481 I/O Commands 00:21:11.481 ------------ 00:21:11.481 Flush (00h): Supported LBA-Change 00:21:11.481 Write (01h): Supported LBA-Change 00:21:11.481 Read (02h): Supported 00:21:11.481 Compare (05h): Supported 00:21:11.481 Write Zeroes (08h): Supported LBA-Change 00:21:11.481 Dataset Management (09h): Supported LBA-Change 00:21:11.481 Unknown (0Ch): Supported 00:21:11.481 Unknown (12h): Supported 00:21:11.481 Copy (19h): Supported LBA-Change 00:21:11.481 Unknown (1Dh): Supported LBA-Change 00:21:11.481 00:21:11.481 Error Log 00:21:11.482 ========= 00:21:11.482 00:21:11.482 Arbitration 00:21:11.482 =========== 00:21:11.482 Arbitration Burst: no limit 00:21:11.482 00:21:11.482 Power Management 00:21:11.482 ================ 00:21:11.482 Number of Power States: 1 00:21:11.482 Current Power State: Power State #0 00:21:11.482 Power State #0: 00:21:11.482 Max Power: 25.00 W 00:21:11.482 Non-Operational State: Operational 00:21:11.482 Entry Latency: 16 microseconds 00:21:11.482 Exit Latency: 4 microseconds 00:21:11.482 Relative Read Throughput: 0 00:21:11.482 Relative Read Latency: 0 00:21:11.482 Relative Write Throughput: 0 00:21:11.482 Relative Write Latency: 0 00:21:11.482 Idle Power: Not Reported 00:21:11.482 Active Power: Not Reported 00:21:11.482 Non-Operational Permissive Mode: Not Supported 00:21:11.482 00:21:11.482 Health Information 00:21:11.482 ================== 00:21:11.482 Critical Warnings: 00:21:11.482 Available Spare Space: OK 00:21:11.482 Temperature: OK 00:21:11.482 Device Reliability: OK 00:21:11.482 Read Only: No 00:21:11.482 Volatile Memory Backup: OK 00:21:11.482 Current Temperature: 323 Kelvin (50 Celsius) 00:21:11.482 Temperature Threshold: 343 Kelvin (70 Celsius) 00:21:11.482 Available Spare: 0% 00:21:11.482 Available Spare Threshold: 0% 00:21:11.482 Life Percentage Used: [2024-10-01 20:19:06.438455] nvme_ctrlr.c:3628:nvme_ctrlr_remove_inactive_proc: *ERROR*: 
[0000:00:10.0] process 63526 terminated unexpectedly 00:21:11.482 [2024-10-01 20:19:06.439333] nvme_ctrlr.c:3628:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0] process 63526 terminated unexpectedly 00:21:11.482 [2024-10-01 20:19:06.439850] nvme_ctrlr.c:3628:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0] process 63526 terminated unexpectedly 00:21:11.482 [2024-10-01 20:19:06.440584] nvme_ctrlr.c:3628:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0] process 63526 terminated unexpectedly 00:21:11.482 0% 00:21:11.482 Data Units Read: 880 00:21:11.482 Data Units Written: 809 00:21:11.482 Host Read Commands: 41809 00:21:11.482 Host Write Commands: 41233 00:21:11.482 Controller Busy Time: 0 minutes 00:21:11.482 Power Cycles: 0 00:21:11.482 Power On Hours: 0 hours 00:21:11.482 Unsafe Shutdowns: 0 00:21:11.482 Unrecoverable Media Errors: 0 00:21:11.482 Lifetime Error Log Entries: 0 00:21:11.482 Warning Temperature Time: 0 minutes 00:21:11.482 Critical Temperature Time: 0 minutes 00:21:11.482 00:21:11.482 Number of Queues 00:21:11.482 ================ 00:21:11.482 Number of I/O Submission Queues: 64 00:21:11.482 Number of I/O Completion Queues: 64 00:21:11.482 00:21:11.482 ZNS Specific Controller Data 00:21:11.482 ============================ 00:21:11.482 Zone Append Size Limit: 0 00:21:11.482 00:21:11.482 00:21:11.482 Active Namespaces 00:21:11.482 ================= 00:21:11.482 Namespace ID:1 00:21:11.482 Error Recovery Timeout: Unlimited 00:21:11.482 Command Set Identifier: NVM (00h) 00:21:11.482 Deallocate: Supported 00:21:11.482 Deallocated/Unwritten Error: Supported 00:21:11.482 Deallocated Read Value: All 0x00 00:21:11.482 Deallocate in Write Zeroes: Not Supported 00:21:11.482 Deallocated Guard Field: 0xFFFF 00:21:11.482 Flush: Supported 00:21:11.482 Reservation: Not Supported 00:21:11.482 Namespace Sharing Capabilities: Multiple Controllers 00:21:11.482 Size (in LBAs): 262144 (1GiB) 00:21:11.482 Capacity (in LBAs): 262144 (1GiB) 00:21:11.482 Utilization (in LBAs): 262144 (1GiB) 00:21:11.482 Thin Provisioning: Not Supported 00:21:11.482 Per-NS Atomic Units: No 00:21:11.482 Maximum Single Source Range Length: 128 00:21:11.482 Maximum Copy Length: 128 00:21:11.482 Maximum Source Range Count: 128 00:21:11.482 NGUID/EUI64 Never Reused: No 00:21:11.482 Namespace Write Protected: No 00:21:11.482 Endurance group ID: 1 00:21:11.482 Number of LBA Formats: 8 00:21:11.482 Current LBA Format: LBA Format #04 00:21:11.482 LBA Format #00: Data Size: 512 Metadata Size: 0 00:21:11.482 LBA Format #01: Data Size: 512 Metadata Size: 8 00:21:11.482 LBA Format #02: Data Size: 512 Metadata Size: 16 00:21:11.482 LBA Format #03: Data Size: 512 Metadata Size: 64 00:21:11.482 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:21:11.482 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:21:11.482 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:21:11.482 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:21:11.482 00:21:11.482 Get Feature FDP: 00:21:11.482 ================ 00:21:11.482 Enabled: Yes 00:21:11.482 FDP configuration index: 0 00:21:11.482 00:21:11.482 FDP configurations log page 00:21:11.482 =========================== 00:21:11.482 Number of FDP configurations: 1 00:21:11.482 Version: 0 00:21:11.482 Size: 112 00:21:11.482 FDP Configuration Descriptor: 0 00:21:11.482 Descriptor Size: 96 00:21:11.482 Reclaim Group Identifier format: 2 00:21:11.482 FDP Volatile Write Cache: Not Present 00:21:11.482 FDP Configuration: Valid 00:21:11.482 Vendor Specific Size: 0 00:21:11.482 Number of
Reclaim Groups: 2 00:21:11.482 Number of Reclaim Unit Handles: 8 00:21:11.482 Max Placement Identifiers: 128 00:21:11.482 Number of Namespaces Supported: 256 00:21:11.482 Reclaim Unit Nominal Size: 6000000 bytes 00:21:11.482 Estimated Reclaim Unit Time Limit: Not Reported 00:21:11.482 RUH Desc #000: RUH Type: Initially Isolated 00:21:11.482 RUH Desc #001: RUH Type: Initially Isolated 00:21:11.482 RUH Desc #002: RUH Type: Initially Isolated 00:21:11.482 RUH Desc #003: RUH Type: Initially Isolated 00:21:11.482 RUH Desc #004: RUH Type: Initially Isolated 00:21:11.482 RUH Desc #005: RUH Type: Initially Isolated 00:21:11.482 RUH Desc #006: RUH Type: Initially Isolated 00:21:11.482 RUH Desc #007: RUH Type: Initially Isolated 00:21:11.482 00:21:11.482 FDP reclaim unit handle usage log page 00:21:11.482 ====================================== 00:21:11.482 Number of Reclaim Unit Handles: 8 00:21:11.482 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:21:11.482 RUH Usage Desc #001: RUH Attributes: Unused 00:21:11.482 RUH Usage Desc #002: RUH Attributes: Unused 00:21:11.482 RUH Usage Desc #003: RUH Attributes: Unused 00:21:11.482 RUH Usage Desc #004: RUH Attributes: Unused 00:21:11.482 RUH Usage Desc #005: RUH Attributes: Unused 00:21:11.482 RUH Usage Desc #006: RUH Attributes: Unused 00:21:11.482 RUH Usage Desc #007: RUH Attributes: Unused 00:21:11.482 00:21:11.482 FDP statistics log page 00:21:11.482 ======================= 00:21:11.482 Host bytes with metadata written: 513646592 00:21:11.482 Media bytes with metadata written: 513703936 00:21:11.482 Media bytes erased: 0 00:21:11.482 00:21:11.482 FDP events log page 00:21:11.482 =================== 00:21:11.482 Number of FDP events: 0 00:21:11.482 00:21:11.482 NVM Specific Namespace Data 00:21:11.482 =========================== 00:21:11.482 Logical Block Storage Tag Mask: 0 00:21:11.482 Protection Information Capabilities: 00:21:11.482 16b Guard Protection Information Storage Tag Support: No 00:21:11.482 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:21:11.482 Storage Tag Check Read Support: No 00:21:11.482 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:21:11.482 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:21:11.482 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:21:11.482 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:21:11.482 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:21:11.482 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:21:11.482 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:21:11.482 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:21:11.482 ===================================================== 00:21:11.482 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:21:11.482 ===================================================== 00:21:11.482 Controller Capabilities/Features 00:21:11.482 ================================ 00:21:11.482 Vendor ID: 1b36 00:21:11.482 Subsystem Vendor ID: 1af4 00:21:11.482 Serial Number: 12342 00:21:11.482 Model Number: QEMU NVMe Ctrl 00:21:11.482 Firmware Version: 8.0.0 00:21:11.482 Recommended Arb Burst: 6 00:21:11.482 IEEE OUI Identifier: 00 54 52 00:21:11.482 Multi-path I/O 00:21:11.482 May have multiple
subsystem ports: No 00:21:11.482 May have multiple controllers: No 00:21:11.482 Associated with SR-IOV VF: No 00:21:11.482 Max Data Transfer Size: 524288 00:21:11.482 Max Number of Namespaces: 256 00:21:11.482 Max Number of I/O Queues: 64 00:21:11.482 NVMe Specification Version (VS): 1.4 00:21:11.482 NVMe Specification Version (Identify): 1.4 00:21:11.482 Maximum Queue Entries: 2048 00:21:11.482 Contiguous Queues Required: Yes 00:21:11.482 Arbitration Mechanisms Supported 00:21:11.482 Weighted Round Robin: Not Supported 00:21:11.482 Vendor Specific: Not Supported 00:21:11.482 Reset Timeout: 7500 ms 00:21:11.482 Doorbell Stride: 4 bytes 00:21:11.482 NVM Subsystem Reset: Not Supported 00:21:11.482 Command Sets Supported 00:21:11.482 NVM Command Set: Supported 00:21:11.482 Boot Partition: Not Supported 00:21:11.482 Memory Page Size Minimum: 4096 bytes 00:21:11.482 Memory Page Size Maximum: 65536 bytes 00:21:11.482 Persistent Memory Region: Not Supported 00:21:11.482 Optional Asynchronous Events Supported 00:21:11.482 Namespace Attribute Notices: Supported 00:21:11.482 Firmware Activation Notices: Not Supported 00:21:11.482 ANA Change Notices: Not Supported 00:21:11.482 PLE Aggregate Log Change Notices: Not Supported 00:21:11.482 LBA Status Info Alert Notices: Not Supported 00:21:11.482 EGE Aggregate Log Change Notices: Not Supported 00:21:11.482 Normal NVM Subsystem Shutdown event: Not Supported 00:21:11.483 Zone Descriptor Change Notices: Not Supported 00:21:11.483 Discovery Log Change Notices: Not Supported 00:21:11.483 Controller Attributes 00:21:11.483 128-bit Host Identifier: Not Supported 00:21:11.483 Non-Operational Permissive Mode: Not Supported 00:21:11.483 NVM Sets: Not Supported 00:21:11.483 Read Recovery Levels: Not Supported 00:21:11.483 Endurance Groups: Not Supported 00:21:11.483 Predictable Latency Mode: Not Supported 00:21:11.483 Traffic Based Keep Alive: Not Supported 00:21:11.483 Namespace Granularity: Not Supported 00:21:11.483 SQ Associations: Not Supported 00:21:11.483 UUID List: Not Supported 00:21:11.483 Multi-Domain Subsystem: Not Supported 00:21:11.483 Fixed Capacity Management: Not Supported 00:21:11.483 Variable Capacity Management: Not Supported 00:21:11.483 Delete Endurance Group: Not Supported 00:21:11.483 Delete NVM Set: Not Supported 00:21:11.483 Extended LBA Formats Supported: Supported 00:21:11.483 Flexible Data Placement Supported: Not Supported 00:21:11.483 00:21:11.483 Controller Memory Buffer Support 00:21:11.483 ================================ 00:21:11.483 Supported: No 00:21:11.483 00:21:11.483 Persistent Memory Region Support 00:21:11.483 ================================ 00:21:11.483 Supported: No 00:21:11.483 00:21:11.483 Admin Command Set Attributes 00:21:11.483 ============================ 00:21:11.483 Security Send/Receive: Not Supported 00:21:11.483 Format NVM: Supported 00:21:11.483 Firmware Activate/Download: Not Supported 00:21:11.483 Namespace Management: Supported 00:21:11.483 Device Self-Test: Not Supported 00:21:11.483 Directives: Supported 00:21:11.483 NVMe-MI: Not Supported 00:21:11.483 Virtualization Management: Not Supported 00:21:11.483 Doorbell Buffer Config: Supported 00:21:11.483 Get LBA Status Capability: Not Supported 00:21:11.483 Command & Feature Lockdown Capability: Not Supported 00:21:11.483 Abort Command Limit: 4 00:21:11.483 Async Event Request Limit: 4 00:21:11.483 Number of Firmware Slots: N/A 00:21:11.483 Firmware Slot 1 Read-Only: N/A 00:21:11.483 Firmware Activation Without Reset: N/A 00:21:11.483 Multiple Update
Detection Support: N/A 00:21:11.483 Firmware Update Granularity: No Information Provided 00:21:11.483 Per-Namespace SMART Log: Yes 00:21:11.483 Asymmetric Namespace Access Log Page: Not Supported 00:21:11.483 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:21:11.483 Command Effects Log Page: Supported 00:21:11.483 Get Log Page Extended Data: Supported 00:21:11.483 Telemetry Log Pages: Not Supported 00:21:11.483 Persistent Event Log Pages: Not Supported 00:21:11.483 Supported Log Pages Log Page: May Support 00:21:11.483 Commands Supported & Effects Log Page: Not Supported 00:21:11.483 Feature Identifiers & Effects Log Page: May Support 00:21:11.483 NVMe-MI Commands & Effects Log Page: May Support 00:21:11.483 Data Area 4 for Telemetry Log: Not Supported 00:21:11.483 Error Log Page Entries Supported: 1 00:21:11.483 Keep Alive: Not Supported 00:21:11.483 00:21:11.483 NVM Command Set Attributes 00:21:11.483 ========================== 00:21:11.483 Submission Queue Entry Size 00:21:11.483 Max: 64 00:21:11.483 Min: 64 00:21:11.483 Completion Queue Entry Size 00:21:11.483 Max: 16 00:21:11.483 Min: 16 00:21:11.483 Number of Namespaces: 256 00:21:11.483 Compare Command: Supported 00:21:11.483 Write Uncorrectable Command: Not Supported 00:21:11.483 Dataset Management Command: Supported 00:21:11.483 Write Zeroes Command: Supported 00:21:11.483 Set Features Save Field: Supported 00:21:11.483 Reservations: Not Supported 00:21:11.483 Timestamp: Supported 00:21:11.483 Copy: Supported 00:21:11.483 Volatile Write Cache: Present 00:21:11.483 Atomic Write Unit (Normal): 1 00:21:11.483 Atomic Write Unit (PFail): 1 00:21:11.483 Atomic Compare & Write Unit: 1 00:21:11.483 Fused Compare & Write: Not Supported 00:21:11.483 Scatter-Gather List 00:21:11.483 SGL Command Set: Supported 00:21:11.483 SGL Keyed: Not Supported 00:21:11.483 SGL Bit Bucket Descriptor: Not Supported 00:21:11.483 SGL Metadata Pointer: Not Supported 00:21:11.483 Oversized SGL: Not Supported 00:21:11.483 SGL Metadata Address: Not Supported 00:21:11.483 SGL Offset: Not Supported 00:21:11.483 Transport SGL Data Block: Not Supported 00:21:11.483 Replay Protected Memory Block: Not Supported 00:21:11.483 00:21:11.483 Firmware Slot Information 00:21:11.483 ========================= 00:21:11.483 Active slot: 1 00:21:11.483 Slot 1 Firmware Revision: 1.0 00:21:11.483 00:21:11.483 00:21:11.483 Commands Supported and Effects 00:21:11.483 ============================== 00:21:11.483 Admin Commands 00:21:11.483 -------------- 00:21:11.483 Delete I/O Submission Queue (00h): Supported 00:21:11.483 Create I/O Submission Queue (01h): Supported 00:21:11.483 Get Log Page (02h): Supported 00:21:11.483 Delete I/O Completion Queue (04h): Supported 00:21:11.483 Create I/O Completion Queue (05h): Supported 00:21:11.483 Identify (06h): Supported 00:21:11.483 Abort (08h): Supported 00:21:11.483 Set Features (09h): Supported 00:21:11.483 Get Features (0Ah): Supported 00:21:11.483 Asynchronous Event Request (0Ch): Supported 00:21:11.483 Namespace Attachment (15h): Supported NS-Inventory-Change 00:21:11.483 Directive Send (19h): Supported 00:21:11.483 Directive Receive (1Ah): Supported 00:21:11.483 Virtualization Management (1Ch): Supported 00:21:11.483 Doorbell Buffer Config (7Ch): Supported 00:21:11.483 Format NVM (80h): Supported LBA-Change 00:21:11.483 I/O Commands 00:21:11.483 ------------ 00:21:11.483 Flush (00h): Supported LBA-Change 00:21:11.483 Write (01h): Supported LBA-Change 00:21:11.483 Read (02h): Supported 00:21:11.483 Compare (05h): Supported 00:21:11.483 Write
Zeroes (08h): Supported LBA-Change 00:21:11.483 Dataset Management (09h): Supported LBA-Change 00:21:11.483 Unknown (0Ch): Supported 00:21:11.483 Unknown (12h): Supported 00:21:11.483 Copy (19h): Supported LBA-Change 00:21:11.483 Unknown (1Dh): Supported LBA-Change 00:21:11.483 00:21:11.483 Error Log 00:21:11.483 ========= 00:21:11.483 00:21:11.483 Arbitration 00:21:11.483 =========== 00:21:11.483 Arbitration Burst: no limit 00:21:11.483 00:21:11.483 Power Management 00:21:11.483 ================ 00:21:11.483 Number of Power States: 1 00:21:11.483 Current Power State: Power State #0 00:21:11.483 Power State #0: 00:21:11.483 Max Power: 25.00 W 00:21:11.483 Non-Operational State: Operational 00:21:11.483 Entry Latency: 16 microseconds 00:21:11.483 Exit Latency: 4 microseconds 00:21:11.483 Relative Read Throughput: 0 00:21:11.483 Relative Read Latency: 0 00:21:11.483 Relative Write Throughput: 0 00:21:11.483 Relative Write Latency: 0 00:21:11.483 Idle Power: Not Reported 00:21:11.483 Active Power: Not Reported 00:21:11.483 Non-Operational Permissive Mode: Not Supported 00:21:11.483 00:21:11.483 Health Information 00:21:11.483 ================== 00:21:11.483 Critical Warnings: 00:21:11.483 Available Spare Space: OK 00:21:11.483 Temperature: OK 00:21:11.483 Device Reliability: OK 00:21:11.483 Read Only: No 00:21:11.483 Volatile Memory Backup: OK 00:21:11.483 Current Temperature: 323 Kelvin (50 Celsius) 00:21:11.483 Temperature Threshold: 343 Kelvin (70 Celsius) 00:21:11.483 Available Spare: 0% 00:21:11.483 Available Spare Threshold: 0% 00:21:11.483 Life Percentage Used: 0% 00:21:11.483 Data Units Read: 2279 00:21:11.483 Data Units Written: 2066 00:21:11.483 Host Read Commands: 122142 00:21:11.483 Host Write Commands: 120411 00:21:11.483 Controller Busy Time: 0 minutes 00:21:11.483 Power Cycles: 0 00:21:11.483 Power On Hours: 0 hours 00:21:11.483 Unsafe Shutdowns: 0 00:21:11.483 Unrecoverable Media Errors: 0 00:21:11.483 Lifetime Error Log Entries: 0 00:21:11.483 Warning Temperature Time: 0 minutes 00:21:11.483 Critical Temperature Time: 0 minutes 00:21:11.483 00:21:11.483 Number of Queues 00:21:11.483 ================ 00:21:11.483 Number of I/O Submission Queues: 64 00:21:11.483 Number of I/O Completion Queues: 64 00:21:11.483 00:21:11.483 ZNS Specific Controller Data 00:21:11.483 ============================ 00:21:11.483 Zone Append Size Limit: 0 00:21:11.483 00:21:11.483 00:21:11.483 Active Namespaces 00:21:11.483 ================= 00:21:11.483 Namespace ID:1 00:21:11.483 Error Recovery Timeout: Unlimited 00:21:11.483 Command Set Identifier: NVM (00h) 00:21:11.483 Deallocate: Supported 00:21:11.483 Deallocated/Unwritten Error: Supported 00:21:11.483 Deallocated Read Value: All 0x00 00:21:11.483 Deallocate in Write Zeroes: Not Supported 00:21:11.483 Deallocated Guard Field: 0xFFFF 00:21:11.483 Flush: Supported 00:21:11.483 Reservation: Not Supported 00:21:11.483 Namespace Sharing Capabilities: Private 00:21:11.483 Size (in LBAs): 1048576 (4GiB) 00:21:11.483 Capacity (in LBAs): 1048576 (4GiB) 00:21:11.483 Utilization (in LBAs): 1048576 (4GiB) 00:21:11.483 Thin Provisioning: Not Supported 00:21:11.483 Per-NS Atomic Units: No 00:21:11.483 Maximum Single Source Range Length: 128 00:21:11.483 Maximum Copy Length: 128 00:21:11.483 Maximum Source Range Count: 128 00:21:11.483 NGUID/EUI64 Never Reused: No 00:21:11.483 Namespace Write Protected: No 00:21:11.483 Number of LBA Formats: 8 00:21:11.483 Current LBA Format: LBA Format #04 00:21:11.484 LBA Format #00: Data Size: 512 Metadata Size: 0 
00:21:11.484 LBA Format #01: Data Size: 512 Metadata Size: 8 00:21:11.484 LBA Format #02: Data Size: 512 Metadata Size: 16 00:21:11.484 LBA Format #03: Data Size: 512 Metadata Size: 64 00:21:11.484 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:21:11.484 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:21:11.484 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:21:11.484 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:21:11.484 00:21:11.484 NVM Specific Namespace Data 00:21:11.484 =========================== 00:21:11.484 Logical Block Storage Tag Mask: 0 00:21:11.484 Protection Information Capabilities: 00:21:11.484 16b Guard Protection Information Storage Tag Support: No 00:21:11.484 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:21:11.484 Storage Tag Check Read Support: No 00:21:11.484 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:21:11.484 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:21:11.484 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:21:11.484 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:21:11.484 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:21:11.484 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:21:11.484 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:21:11.484 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:21:11.484 Namespace ID:2 00:21:11.484 Error Recovery Timeout: Unlimited 00:21:11.484 Command Set Identifier: NVM (00h) 00:21:11.484 Deallocate: Supported 00:21:11.484 Deallocated/Unwritten Error: Supported 00:21:11.484 Deallocated Read Value: All 0x00 00:21:11.484 Deallocate in Write Zeroes: Not Supported 00:21:11.484 Deallocated Guard Field: 0xFFFF 00:21:11.484 Flush: Supported 00:21:11.484 Reservation: Not Supported 00:21:11.484 Namespace Sharing Capabilities: Private 00:21:11.484 Size (in LBAs): 1048576 (4GiB) 00:21:11.484 Capacity (in LBAs): 1048576 (4GiB) 00:21:11.484 Utilization (in LBAs): 1048576 (4GiB) 00:21:11.484 Thin Provisioning: Not Supported 00:21:11.484 Per-NS Atomic Units: No 00:21:11.484 Maximum Single Source Range Length: 128 00:21:11.484 Maximum Copy Length: 128 00:21:11.484 Maximum Source Range Count: 128 00:21:11.484 NGUID/EUI64 Never Reused: No 00:21:11.484 Namespace Write Protected: No 00:21:11.484 Number of LBA Formats: 8 00:21:11.484 Current LBA Format: LBA Format #04 00:21:11.484 LBA Format #00: Data Size: 512 Metadata Size: 0 00:21:11.484 LBA Format #01: Data Size: 512 Metadata Size: 8 00:21:11.484 LBA Format #02: Data Size: 512 Metadata Size: 16 00:21:11.484 LBA Format #03: Data Size: 512 Metadata Size: 64 00:21:11.484 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:21:11.484 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:21:11.484 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:21:11.484 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:21:11.484 00:21:11.484 NVM Specific Namespace Data 00:21:11.484 =========================== 00:21:11.484 Logical Block Storage Tag Mask: 0 00:21:11.484 Protection Information Capabilities: 00:21:11.484 16b Guard Protection Information Storage Tag Support: No 00:21:11.484 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:21:11.484 Storage Tag 
Check Read Support: No 00:21:11.484 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:21:11.484 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:21:11.484 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:21:11.484 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:21:11.484 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:21:11.484 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:21:11.484 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:21:11.484 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:21:11.484 Namespace ID:3 00:21:11.484 Error Recovery Timeout: Unlimited 00:21:11.484 Command Set Identifier: NVM (00h) 00:21:11.484 Deallocate: Supported 00:21:11.484 Deallocated/Unwritten Error: Supported 00:21:11.484 Deallocated Read Value: All 0x00 00:21:11.484 Deallocate in Write Zeroes: Not Supported 00:21:11.484 Deallocated Guard Field: 0xFFFF 00:21:11.484 Flush: Supported 00:21:11.484 Reservation: Not Supported 00:21:11.484 Namespace Sharing Capabilities: Private 00:21:11.484 Size (in LBAs): 1048576 (4GiB) 00:21:11.484 Capacity (in LBAs): 1048576 (4GiB) 00:21:11.484 Utilization (in LBAs): 1048576 (4GiB) 00:21:11.484 Thin Provisioning: Not Supported 00:21:11.484 Per-NS Atomic Units: No 00:21:11.484 Maximum Single Source Range Length: 128 00:21:11.484 Maximum Copy Length: 128 00:21:11.484 Maximum Source Range Count: 128 00:21:11.484 NGUID/EUI64 Never Reused: No 00:21:11.484 Namespace Write Protected: No 00:21:11.484 Number of LBA Formats: 8 00:21:11.484 Current LBA Format: LBA Format #04 00:21:11.484 LBA Format #00: Data Size: 512 Metadata Size: 0 00:21:11.484 LBA Format #01: Data Size: 512 Metadata Size: 8 00:21:11.484 LBA Format #02: Data Size: 512 Metadata Size: 16 00:21:11.484 LBA Format #03: Data Size: 512 Metadata Size: 64 00:21:11.484 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:21:11.484 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:21:11.484 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:21:11.484 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:21:11.484 00:21:11.484 NVM Specific Namespace Data 00:21:11.484 =========================== 00:21:11.484 Logical Block Storage Tag Mask: 0 00:21:11.484 Protection Information Capabilities: 00:21:11.484 16b Guard Protection Information Storage Tag Support: No 00:21:11.484 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:21:11.484 Storage Tag Check Read Support: No 00:21:11.484 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:21:11.484 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:21:11.484 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:21:11.484 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:21:11.484 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:21:11.484 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:21:11.484 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:21:11.484 Extended LBA Format #07: Storage Tag Size: 0 , 
Protection Information Format: 16b Guard PI 00:21:11.484 20:19:06 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:21:11.484 20:19:06 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:21:11.484 ===================================================== 00:21:11.484 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:21:11.484 ===================================================== 00:21:11.484 Controller Capabilities/Features 00:21:11.484 ================================ 00:21:11.484 Vendor ID: 1b36 00:21:11.484 Subsystem Vendor ID: 1af4 00:21:11.484 Serial Number: 12340 00:21:11.484 Model Number: QEMU NVMe Ctrl 00:21:11.484 Firmware Version: 8.0.0 00:21:11.484 Recommended Arb Burst: 6 00:21:11.484 IEEE OUI Identifier: 00 54 52 00:21:11.484 Multi-path I/O 00:21:11.484 May have multiple subsystem ports: No 00:21:11.484 May have multiple controllers: No 00:21:11.484 Associated with SR-IOV VF: No 00:21:11.484 Max Data Transfer Size: 524288 00:21:11.484 Max Number of Namespaces: 256 00:21:11.484 Max Number of I/O Queues: 64 00:21:11.484 NVMe Specification Version (VS): 1.4 00:21:11.484 NVMe Specification Version (Identify): 1.4 00:21:11.484 Maximum Queue Entries: 2048 00:21:11.484 Contiguous Queues Required: Yes 00:21:11.484 Arbitration Mechanisms Supported 00:21:11.484 Weighted Round Robin: Not Supported 00:21:11.484 Vendor Specific: Not Supported 00:21:11.484 Reset Timeout: 7500 ms 00:21:11.484 Doorbell Stride: 4 bytes 00:21:11.484 NVM Subsystem Reset: Not Supported 00:21:11.484 Command Sets Supported 00:21:11.484 NVM Command Set: Supported 00:21:11.485 Boot Partition: Not Supported 00:21:11.485 Memory Page Size Minimum: 4096 bytes 00:21:11.485 Memory Page Size Maximum: 65536 bytes 00:21:11.485 Persistent Memory Region: Not Supported 00:21:11.485 Optional Asynchronous Events Supported 00:21:11.485 Namespace Attribute Notices: Supported 00:21:11.485 Firmware Activation Notices: Not Supported 00:21:11.485 ANA Change Notices: Not Supported 00:21:11.485 PLE Aggregate Log Change Notices: Not Supported 00:21:11.485 LBA Status Info Alert Notices: Not Supported 00:21:11.485 EGE Aggregate Log Change Notices: Not Supported 00:21:11.485 Normal NVM Subsystem Shutdown event: Not Supported 00:21:11.485 Zone Descriptor Change Notices: Not Supported 00:21:11.485 Discovery Log Change Notices: Not Supported 00:21:11.485 Controller Attributes 00:21:11.485 128-bit Host Identifier: Not Supported 00:21:11.485 Non-Operational Permissive Mode: Not Supported 00:21:11.485 NVM Sets: Not Supported 00:21:11.485 Read Recovery Levels: Not Supported 00:21:11.485 Endurance Groups: Not Supported 00:21:11.485 Predictable Latency Mode: Not Supported 00:21:11.485 Traffic Based Keep Alive: Not Supported 00:21:11.485 Namespace Granularity: Not Supported 00:21:11.485 SQ Associations: Not Supported 00:21:11.485 UUID List: Not Supported 00:21:11.485 Multi-Domain Subsystem: Not Supported 00:21:11.485 Fixed Capacity Management: Not Supported 00:21:11.485 Variable Capacity Management: Not Supported 00:21:11.485 Delete Endurance Group: Not Supported 00:21:11.485 Delete NVM Set: Not Supported 00:21:11.485 Extended LBA Formats Supported: Supported 00:21:11.485 Flexible Data Placement Supported: Not Supported 00:21:11.485 00:21:11.485 Controller Memory Buffer Support 00:21:11.485 ================================ 00:21:11.485 Supported: No 00:21:11.485 00:21:11.485 Persistent Memory Region Support 00:21:11.485
================================ 00:21:11.485 Supported: No 00:21:11.485 00:21:11.485 Admin Command Set Attributes 00:21:11.485 ============================ 00:21:11.485 Security Send/Receive: Not Supported 00:21:11.485 Format NVM: Supported 00:21:11.485 Firmware Activate/Download: Not Supported 00:21:11.485 Namespace Management: Supported 00:21:11.485 Device Self-Test: Not Supported 00:21:11.485 Directives: Supported 00:21:11.485 NVMe-MI: Not Supported 00:21:11.485 Virtualization Management: Not Supported 00:21:11.485 Doorbell Buffer Config: Supported 00:21:11.485 Get LBA Status Capability: Not Supported 00:21:11.485 Command & Feature Lockdown Capability: Not Supported 00:21:11.485 Abort Command Limit: 4 00:21:11.485 Async Event Request Limit: 4 00:21:11.485 Number of Firmware Slots: N/A 00:21:11.485 Firmware Slot 1 Read-Only: N/A 00:21:11.485 Firmware Activation Without Reset: N/A 00:21:11.485 Multiple Update Detection Support: N/A 00:21:11.485 Firmware Update Granularity: No Information Provided 00:21:11.485 Per-Namespace SMART Log: Yes 00:21:11.485 Asymmetric Namespace Access Log Page: Not Supported 00:21:11.485 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:21:11.485 Command Effects Log Page: Supported 00:21:11.485 Get Log Page Extended Data: Supported 00:21:11.485 Telemetry Log Pages: Not Supported 00:21:11.485 Persistent Event Log Pages: Not Supported 00:21:11.485 Supported Log Pages Log Page: May Support 00:21:11.485 Commands Supported & Effects Log Page: Not Supported 00:21:11.485 Feature Identifiers & Effects Log Page: May Support 00:21:11.485 NVMe-MI Commands & Effects Log Page: May Support 00:21:11.485 Data Area 4 for Telemetry Log: Not Supported 00:21:11.485 Error Log Page Entries Supported: 1 00:21:11.485 Keep Alive: Not Supported 00:21:11.485 00:21:11.485 NVM Command Set Attributes 00:21:11.485 ========================== 00:21:11.485 Submission Queue Entry Size 00:21:11.485 Max: 64 00:21:11.485 Min: 64 00:21:11.485 Completion Queue Entry Size 00:21:11.485 Max: 16 00:21:11.485 Min: 16 00:21:11.485 Number of Namespaces: 256 00:21:11.485 Compare Command: Supported 00:21:11.485 Write Uncorrectable Command: Not Supported 00:21:11.485 Dataset Management Command: Supported 00:21:11.485 Write Zeroes Command: Supported 00:21:11.485 Set Features Save Field: Supported 00:21:11.485 Reservations: Not Supported 00:21:11.485 Timestamp: Supported 00:21:11.485 Copy: Supported 00:21:11.485 Volatile Write Cache: Present 00:21:11.485 Atomic Write Unit (Normal): 1 00:21:11.485 Atomic Write Unit (PFail): 1 00:21:11.485 Atomic Compare & Write Unit: 1 00:21:11.485 Fused Compare & Write: Not Supported 00:21:11.485 Scatter-Gather List 00:21:11.485 SGL Command Set: Supported 00:21:11.485 SGL Keyed: Not Supported 00:21:11.485 SGL Bit Bucket Descriptor: Not Supported 00:21:11.485 SGL Metadata Pointer: Not Supported 00:21:11.485 Oversized SGL: Not Supported 00:21:11.485 SGL Metadata Address: Not Supported 00:21:11.485 SGL Offset: Not Supported 00:21:11.485 Transport SGL Data Block: Not Supported 00:21:11.485 Replay Protected Memory Block: Not Supported 00:21:11.485 00:21:11.485 Firmware Slot Information 00:21:11.485 ========================= 00:21:11.485 Active slot: 1 00:21:11.485 Slot 1 Firmware Revision: 1.0 00:21:11.485 00:21:11.485 00:21:11.485 Commands Supported and Effects 00:21:11.485 ============================== 00:21:11.485 Admin Commands 00:21:11.485 -------------- 00:21:11.485 Delete I/O Submission Queue (00h): Supported 00:21:11.485 Create I/O Submission Queue (01h): Supported 00:21:11.485
Get Log Page (02h): Supported 00:21:11.485 Delete I/O Completion Queue (04h): Supported 00:21:11.485 Create I/O Completion Queue (05h): Supported 00:21:11.485 Identify (06h): Supported 00:21:11.485 Abort (08h): Supported 00:21:11.485 Set Features (09h): Supported 00:21:11.485 Get Features (0Ah): Supported 00:21:11.485 Asynchronous Event Request (0Ch): Supported 00:21:11.485 Namespace Attachment (15h): Supported NS-Inventory-Change 00:21:11.485 Directive Send (19h): Supported 00:21:11.485 Directive Receive (1Ah): Supported 00:21:11.485 Virtualization Management (1Ch): Supported 00:21:11.485 Doorbell Buffer Config (7Ch): Supported 00:21:11.485 Format NVM (80h): Supported LBA-Change 00:21:11.485 I/O Commands 00:21:11.485 ------------ 00:21:11.485 Flush (00h): Supported LBA-Change 00:21:11.485 Write (01h): Supported LBA-Change 00:21:11.485 Read (02h): Supported 00:21:11.485 Compare (05h): Supported 00:21:11.485 Write Zeroes (08h): Supported LBA-Change 00:21:11.485 Dataset Management (09h): Supported LBA-Change 00:21:11.485 Unknown (0Ch): Supported 00:21:11.485 Unknown (12h): Supported 00:21:11.485 Copy (19h): Supported LBA-Change 00:21:11.485 Unknown (1Dh): Supported LBA-Change 00:21:11.485 00:21:11.485 Error Log 00:21:11.485 ========= 00:21:11.485 00:21:11.485 Arbitration 00:21:11.485 =========== 00:21:11.485 Arbitration Burst: no limit 00:21:11.485 00:21:11.485 Power Management 00:21:11.485 ================ 00:21:11.485 Number of Power States: 1 00:21:11.485 Current Power State: Power State #0 00:21:11.485 Power State #0: 00:21:11.485 Max Power: 25.00 W 00:21:11.485 Non-Operational State: Operational 00:21:11.485 Entry Latency: 16 microseconds 00:21:11.485 Exit Latency: 4 microseconds 00:21:11.485 Relative Read Throughput: 0 00:21:11.485 Relative Read Latency: 0 00:21:11.485 Relative Write Throughput: 0 00:21:11.485 Relative Write Latency: 0 00:21:11.485 Idle Power: Not Reported 00:21:11.485 Active Power: Not Reported 00:21:11.485 Non-Operational Permissive Mode: Not Supported 00:21:11.485 00:21:11.485 Health Information 00:21:11.485 ================== 00:21:11.485 Critical Warnings: 00:21:11.486 Available Spare Space: OK 00:21:11.486 Temperature: OK 00:21:11.486 Device Reliability: OK 00:21:11.486 Read Only: No 00:21:11.486 Volatile Memory Backup: OK 00:21:11.486 Current Temperature: 323 Kelvin (50 Celsius) 00:21:11.486 Temperature Threshold: 343 Kelvin (70 Celsius) 00:21:11.486 Available Spare: 0% 00:21:11.486 Available Spare Threshold: 0% 00:21:11.486 Life Percentage Used: 0% 00:21:11.486 Data Units Read: 707 00:21:11.486 Data Units Written: 635 00:21:11.486 Host Read Commands: 39865 00:21:11.486 Host Write Commands: 39651 00:21:11.486 Controller Busy Time: 0 minutes 00:21:11.486 Power Cycles: 0 00:21:11.486 Power On Hours: 0 hours 00:21:11.486 Unsafe Shutdowns: 0 00:21:11.486 Unrecoverable Media Errors: 0 00:21:11.486 Lifetime Error Log Entries: 0 00:21:11.486 Warning Temperature Time: 0 minutes 00:21:11.486 Critical Temperature Time: 0 minutes 00:21:11.486 00:21:11.486 Number of Queues 00:21:11.486 ================ 00:21:11.486 Number of I/O Submission Queues: 64 00:21:11.486 Number of I/O Completion Queues: 64 00:21:11.486 00:21:11.486 ZNS Specific Controller Data 00:21:11.486 ============================ 00:21:11.486 Zone Append Size Limit: 0 00:21:11.486 00:21:11.486 00:21:11.486 Active Namespaces 00:21:11.486 ================= 00:21:11.486 Namespace ID:1 00:21:11.486 Error Recovery Timeout: Unlimited 00:21:11.486 Command Set Identifier: NVM (00h) 00:21:11.486 Deallocate: Supported 
00:21:11.486 Deallocated/Unwritten Error: Supported 00:21:11.486 Deallocated Read Value: All 0x00 00:21:11.486 Deallocate in Write Zeroes: Not Supported 00:21:11.486 Deallocated Guard Field: 0xFFFF 00:21:11.486 Flush: Supported 00:21:11.486 Reservation: Not Supported 00:21:11.486 Metadata Transferred as: Separate Metadata Buffer 00:21:11.486 Namespace Sharing Capabilities: Private 00:21:11.486 Size (in LBAs): 1548666 (5GiB) 00:21:11.486 Capacity (in LBAs): 1548666 (5GiB) 00:21:11.486 Utilization (in LBAs): 1548666 (5GiB) 00:21:11.486 Thin Provisioning: Not Supported 00:21:11.486 Per-NS Atomic Units: No 00:21:11.486 Maximum Single Source Range Length: 128 00:21:11.486 Maximum Copy Length: 128 00:21:11.486 Maximum Source Range Count: 128 00:21:11.486 NGUID/EUI64 Never Reused: No 00:21:11.486 Namespace Write Protected: No 00:21:11.486 Number of LBA Formats: 8 00:21:11.486 Current LBA Format: LBA Format #07 00:21:11.486 LBA Format #00: Data Size: 512 Metadata Size: 0 00:21:11.486 LBA Format #01: Data Size: 512 Metadata Size: 8 00:21:11.486 LBA Format #02: Data Size: 512 Metadata Size: 16 00:21:11.486 LBA Format #03: Data Size: 512 Metadata Size: 64 00:21:11.486 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:21:11.486 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:21:11.486 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:21:11.486 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:21:11.486 00:21:11.486 NVM Specific Namespace Data 00:21:11.486 =========================== 00:21:11.486 Logical Block Storage Tag Mask: 0 00:21:11.486 Protection Information Capabilities: 00:21:11.486 16b Guard Protection Information Storage Tag Support: No 00:21:11.486 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:21:11.486 Storage Tag Check Read Support: No 00:21:11.486 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:21:11.486 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:21:11.486 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:21:11.486 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:21:11.486 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:21:11.486 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:21:11.486 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:21:11.486 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:21:11.486 20:19:06 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:21:11.486 20:19:06 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:21:11.752 ===================================================== 00:21:11.752 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:21:11.752 ===================================================== 00:21:11.752 Controller Capabilities/Features 00:21:11.752 ================================ 00:21:11.752 Vendor ID: 1b36 00:21:11.752 Subsystem Vendor ID: 1af4 00:21:11.752 Serial Number: 12341 00:21:11.752 Model Number: QEMU NVMe Ctrl 00:21:11.752 Firmware Version: 8.0.0 00:21:11.752 Recommended Arb Burst: 6 00:21:11.752 IEEE OUI Identifier: 00 54 52 00:21:11.752 Multi-path I/O 00:21:11.752 May have multiple subsystem ports: No 00:21:11.752 May have multiple 
controllers: No 00:21:11.752 Associated with SR-IOV VF: No 00:21:11.752 Max Data Transfer Size: 524288 00:21:11.752 Max Number of Namespaces: 256 00:21:11.752 Max Number of I/O Queues: 64 00:21:11.752 NVMe Specification Version (VS): 1.4 00:21:11.752 NVMe Specification Version (Identify): 1.4 00:21:11.752 Maximum Queue Entries: 2048 00:21:11.752 Contiguous Queues Required: Yes 00:21:11.752 Arbitration Mechanisms Supported 00:21:11.752 Weighted Round Robin: Not Supported 00:21:11.752 Vendor Specific: Not Supported 00:21:11.752 Reset Timeout: 7500 ms 00:21:11.752 Doorbell Stride: 4 bytes 00:21:11.752 NVM Subsystem Reset: Not Supported 00:21:11.752 Command Sets Supported 00:21:11.752 NVM Command Set: Supported 00:21:11.752 Boot Partition: Not Supported 00:21:11.752 Memory Page Size Minimum: 4096 bytes 00:21:11.752 Memory Page Size Maximum: 65536 bytes 00:21:11.752 Persistent Memory Region: Not Supported 00:21:11.752 Optional Asynchronous Events Supported 00:21:11.752 Namespace Attribute Notices: Supported 00:21:11.752 Firmware Activation Notices: Not Supported 00:21:11.752 ANA Change Notices: Not Supported 00:21:11.752 PLE Aggregate Log Change Notices: Not Supported 00:21:11.752 LBA Status Info Alert Notices: Not Supported 00:21:11.752 EGE Aggregate Log Change Notices: Not Supported 00:21:11.752 Normal NVM Subsystem Shutdown event: Not Supported 00:21:11.752 Zone Descriptor Change Notices: Not Supported 00:21:11.752 Discovery Log Change Notices: Not Supported 00:21:11.752 Controller Attributes 00:21:11.752 128-bit Host Identifier: Not Supported 00:21:11.752 Non-Operational Permissive Mode: Not Supported 00:21:11.752 NVM Sets: Not Supported 00:21:11.752 Read Recovery Levels: Not Supported 00:21:11.752 Endurance Groups: Not Supported 00:21:11.752 Predictable Latency Mode: Not Supported 00:21:11.752 Traffic Based Keep Alive: Not Supported 00:21:11.752 Namespace Granularity: Not Supported 00:21:11.752 SQ Associations: Not Supported 00:21:11.752 UUID List: Not Supported 00:21:11.752 Multi-Domain Subsystem: Not Supported 00:21:11.752 Fixed Capacity Management: Not Supported 00:21:11.752 Variable Capacity Management: Not Supported 00:21:11.752 Delete Endurance Group: Not Supported 00:21:11.752 Delete NVM Set: Not Supported 00:21:11.752 Extended LBA Formats Supported: Supported 00:21:11.752 Flexible Data Placement Supported: Not Supported 00:21:11.752 00:21:11.752 Controller Memory Buffer Support 00:21:11.752 ================================ 00:21:11.752 Supported: No 00:21:11.752 00:21:11.752 Persistent Memory Region Support 00:21:11.752 ================================ 00:21:11.752 Supported: No 00:21:11.752 00:21:11.753 Admin Command Set Attributes 00:21:11.753 ============================ 00:21:11.753 Security Send/Receive: Not Supported 00:21:11.753 Format NVM: Supported 00:21:11.753 Firmware Activate/Download: Not Supported 00:21:11.753 Namespace Management: Supported 00:21:11.753 Device Self-Test: Not Supported 00:21:11.753 Directives: Supported 00:21:11.753 NVMe-MI: Not Supported 00:21:11.753 Virtualization Management: Not Supported 00:21:11.753 Doorbell Buffer Config: Supported 00:21:11.753 Get LBA Status Capability: Not Supported 00:21:11.753 Command & Feature Lockdown Capability: Not Supported 00:21:11.753 Abort Command Limit: 4 00:21:11.753 Async Event Request Limit: 4 00:21:11.753 Number of Firmware Slots: N/A 00:21:11.753 Firmware Slot 1 Read-Only: N/A 00:21:11.753 Firmware Activation Without Reset: N/A 00:21:11.753 Multiple Update Detection Support: N/A 00:21:11.753 Firmware Update
Granularity: No Information Provided 00:21:11.753 Per-Namespace SMART Log: Yes 00:21:11.753 Asymmetric Namespace Access Log Page: Not Supported 00:21:11.753 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:21:11.753 Command Effects Log Page: Supported 00:21:11.753 Get Log Page Extended Data: Supported 00:21:11.753 Telemetry Log Pages: Not Supported 00:21:11.753 Persistent Event Log Pages: Not Supported 00:21:11.753 Supported Log Pages Log Page: May Support 00:21:11.753 Commands Supported & Effects Log Page: Not Supported 00:21:11.753 Feature Identifiers & Effects Log Page: May Support 00:21:11.753 NVMe-MI Commands & Effects Log Page: May Support 00:21:11.753 Data Area 4 for Telemetry Log: Not Supported 00:21:11.753 Error Log Page Entries Supported: 1 00:21:11.753 Keep Alive: Not Supported 00:21:11.753 00:21:11.753 NVM Command Set Attributes 00:21:11.753 ========================== 00:21:11.753 Submission Queue Entry Size 00:21:11.753 Max: 64 00:21:11.753 Min: 64 00:21:11.753 Completion Queue Entry Size 00:21:11.753 Max: 16 00:21:11.753 Min: 16 00:21:11.753 Number of Namespaces: 256 00:21:11.753 Compare Command: Supported 00:21:11.753 Write Uncorrectable Command: Not Supported 00:21:11.753 Dataset Management Command: Supported 00:21:11.753 Write Zeroes Command: Supported 00:21:11.753 Set Features Save Field: Supported 00:21:11.753 Reservations: Not Supported 00:21:11.753 Timestamp: Supported 00:21:11.753 Copy: Supported 00:21:11.753 Volatile Write Cache: Present 00:21:11.753 Atomic Write Unit (Normal): 1 00:21:11.753 Atomic Write Unit (PFail): 1 00:21:11.753 Atomic Compare & Write Unit: 1 00:21:11.753 Fused Compare & Write: Not Supported 00:21:11.753 Scatter-Gather List 00:21:11.753 SGL Command Set: Supported 00:21:11.753 SGL Keyed: Not Supported 00:21:11.753 SGL Bit Bucket Descriptor: Not Supported 00:21:11.753 SGL Metadata Pointer: Not Supported 00:21:11.753 Oversized SGL: Not Supported 00:21:11.753 SGL Metadata Address: Not Supported 00:21:11.753 SGL Offset: Not Supported 00:21:11.753 Transport SGL Data Block: Not Supported 00:21:11.753 Replay Protected Memory Block: Not Supported 00:21:11.753 00:21:11.753 Firmware Slot Information 00:21:11.753 ========================= 00:21:11.753 Active slot: 1 00:21:11.753 Slot 1 Firmware Revision: 1.0 00:21:11.753 00:21:11.753 00:21:11.753 Commands Supported and Effects 00:21:11.753 ============================== 00:21:11.753 Admin Commands 00:21:11.753 -------------- 00:21:11.753 Delete I/O Submission Queue (00h): Supported 00:21:11.753 Create I/O Submission Queue (01h): Supported 00:21:11.753 Get Log Page (02h): Supported 00:21:11.753 Delete I/O Completion Queue (04h): Supported 00:21:11.753 Create I/O Completion Queue (05h): Supported 00:21:11.753 Identify (06h): Supported 00:21:11.753 Abort (08h): Supported 00:21:11.753 Set Features (09h): Supported 00:21:11.753 Get Features (0Ah): Supported 00:21:11.753 Asynchronous Event Request (0Ch): Supported 00:21:11.753 Namespace Attachment (15h): Supported NS-Inventory-Change 00:21:11.753 Directive Send (19h): Supported 00:21:11.753 Directive Receive (1Ah): Supported 00:21:11.753 Virtualization Management (1Ch): Supported 00:21:11.753 Doorbell Buffer Config (7Ch): Supported 00:21:11.753 Format NVM (80h): Supported LBA-Change 00:21:11.753 I/O Commands 00:21:11.753 ------------ 00:21:11.753 Flush (00h): Supported LBA-Change 00:21:11.753 Write (01h): Supported LBA-Change 00:21:11.753 Read (02h): Supported 00:21:11.753 Compare (05h): Supported 00:21:11.753 Write Zeroes (08h): Supported LBA-Change 00:21:11.753
Dataset Management (09h): Supported LBA-Change 00:21:11.753 Unknown (0Ch): Supported 00:21:11.753 Unknown (12h): Supported 00:21:11.753 Copy (19h): Supported LBA-Change 00:21:11.753 Unknown (1Dh): Supported LBA-Change 00:21:11.753 00:21:11.753 Error Log 00:21:11.753 ========= 00:21:11.753 00:21:11.753 Arbitration 00:21:11.753 =========== 00:21:11.753 Arbitration Burst: no limit 00:21:11.753 00:21:11.753 Power Management 00:21:11.753 ================ 00:21:11.753 Number of Power States: 1 00:21:11.753 Current Power State: Power State #0 00:21:11.753 Power State #0: 00:21:11.753 Max Power: 25.00 W 00:21:11.753 Non-Operational State: Operational 00:21:11.753 Entry Latency: 16 microseconds 00:21:11.753 Exit Latency: 4 microseconds 00:21:11.753 Relative Read Throughput: 0 00:21:11.753 Relative Read Latency: 0 00:21:11.753 Relative Write Throughput: 0 00:21:11.753 Relative Write Latency: 0 00:21:11.753 Idle Power: Not Reported 00:21:11.753 Active Power: Not Reported 00:21:11.753 Non-Operational Permissive Mode: Not Supported 00:21:11.753 00:21:11.753 Health Information 00:21:11.753 ================== 00:21:11.753 Critical Warnings: 00:21:11.753 Available Spare Space: OK 00:21:11.753 Temperature: OK 00:21:11.753 Device Reliability: OK 00:21:11.753 Read Only: No 00:21:11.753 Volatile Memory Backup: OK 00:21:11.753 Current Temperature: 323 Kelvin (50 Celsius) 00:21:11.753 Temperature Threshold: 343 Kelvin (70 Celsius) 00:21:11.753 Available Spare: 0% 00:21:11.753 Available Spare Threshold: 0% 00:21:11.753 Life Percentage Used: 0% 00:21:11.753 Data Units Read: 1052 00:21:11.753 Data Units Written: 916 00:21:11.753 Host Read Commands: 59002 00:21:11.753 Host Write Commands: 57750 00:21:11.753 Controller Busy Time: 0 minutes 00:21:11.753 Power Cycles: 0 00:21:11.753 Power On Hours: 0 hours 00:21:11.753 Unsafe Shutdowns: 0 00:21:11.754 Unrecoverable Media Errors: 0 00:21:11.754 Lifetime Error Log Entries: 0 00:21:11.754 Warning Temperature Time: 0 minutes 00:21:11.754 Critical Temperature Time: 0 minutes 00:21:11.754 00:21:11.754 Number of Queues 00:21:11.754 ================ 00:21:11.754 Number of I/O Submission Queues: 64 00:21:11.754 Number of I/O Completion Queues: 64 00:21:11.754 00:21:11.754 ZNS Specific Controller Data 00:21:11.754 ============================ 00:21:11.754 Zone Append Size Limit: 0 00:21:11.754 00:21:11.754 00:21:11.754 Active Namespaces 00:21:11.754 ================= 00:21:11.754 Namespace ID:1 00:21:11.754 Error Recovery Timeout: Unlimited 00:21:11.754 Command Set Identifier: NVM (00h) 00:21:11.754 Deallocate: Supported 00:21:11.754 Deallocated/Unwritten Error: Supported 00:21:11.754 Deallocated Read Value: All 0x00 00:21:11.754 Deallocate in Write Zeroes: Not Supported 00:21:11.754 Deallocated Guard Field: 0xFFFF 00:21:11.754 Flush: Supported 00:21:11.754 Reservation: Not Supported 00:21:11.754 Namespace Sharing Capabilities: Private 00:21:11.754 Size (in LBAs): 1310720 (5GiB) 00:21:11.754 Capacity (in LBAs): 1310720 (5GiB) 00:21:11.754 Utilization (in LBAs): 1310720 (5GiB) 00:21:11.754 Thin Provisioning: Not Supported 00:21:11.754 Per-NS Atomic Units: No 00:21:11.754 Maximum Single Source Range Length: 128 00:21:11.754 Maximum Copy Length: 128 00:21:11.754 Maximum Source Range Count: 128 00:21:11.754 NGUID/EUI64 Never Reused: No 00:21:11.754 Namespace Write Protected: No 00:21:11.754 Number of LBA Formats: 8 00:21:11.754 Current LBA Format: LBA Format #04 00:21:11.754 LBA Format #00: Data Size: 512 Metadata Size: 0 00:21:11.754 LBA Format #01: Data Size: 512 Metadata Size: 8 
00:21:11.754 LBA Format #02: Data Size: 512 Metadata Size: 16 00:21:11.754 LBA Format #03: Data Size: 512 Metadata Size: 64 00:21:11.754 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:21:11.754 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:21:11.754 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:21:11.754 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:21:11.754 00:21:11.754 NVM Specific Namespace Data 00:21:11.754 =========================== 00:21:11.754 Logical Block Storage Tag Mask: 0 00:21:11.754 Protection Information Capabilities: 00:21:11.754 16b Guard Protection Information Storage Tag Support: No 00:21:11.754 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:21:11.754 Storage Tag Check Read Support: No 00:21:11.754 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:21:11.754 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:21:11.754 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:21:11.754 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:21:11.754 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:21:11.754 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:21:11.754 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:21:11.754 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:21:11.754 20:19:06 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:21:11.754 20:19:06 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:21:12.016 ===================================================== 00:21:12.016 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:21:12.016 ===================================================== 00:21:12.016 Controller Capabilities/Features 00:21:12.016 ================================ 00:21:12.016 Vendor ID: 1b36 00:21:12.016 Subsystem Vendor ID: 1af4 00:21:12.016 Serial Number: 12342 00:21:12.016 Model Number: QEMU NVMe Ctrl 00:21:12.016 Firmware Version: 8.0.0 00:21:12.016 Recommended Arb Burst: 6 00:21:12.016 IEEE OUI Identifier: 00 54 52 00:21:12.016 Multi-path I/O 00:21:12.017 May have multiple subsystem ports: No 00:21:12.017 May have multiple controllers: No 00:21:12.017 Associated with SR-IOV VF: No 00:21:12.017 Max Data Transfer Size: 524288 00:21:12.017 Max Number of Namespaces: 256 00:21:12.017 Max Number of I/O Queues: 64 00:21:12.017 NVMe Specification Version (VS): 1.4 00:21:12.017 NVMe Specification Version (Identify): 1.4 00:21:12.017 Maximum Queue Entries: 2048 00:21:12.017 Contiguous Queues Required: Yes 00:21:12.017 Arbitration Mechanisms Supported 00:21:12.017 Weighted Round Robin: Not Supported 00:21:12.017 Vendor Specific: Not Supported 00:21:12.017 Reset Timeout: 7500 ms 00:21:12.017 Doorbell Stride: 4 bytes 00:21:12.017 NVM Subsystem Reset: Not Supported 00:21:12.017 Command Sets Supported 00:21:12.017 NVM Command Set: Supported 00:21:12.017 Boot Partition: Not Supported 00:21:12.017 Memory Page Size Minimum: 4096 bytes 00:21:12.017 Memory Page Size Maximum: 65536 bytes 00:21:12.017 Persistent Memory Region: Not Supported 00:21:12.017 Optional Asynchronous Events Supported 00:21:12.017 Namespace Attribute Notices: Supported 00:21:12.017 Firmware 
Activation Notices: Not Supported 00:21:12.017 ANA Change Notices: Not Supported 00:21:12.017 PLE Aggregate Log Change Notices: Not Supported 00:21:12.017 LBA Status Info Alert Notices: Not Supported 00:21:12.017 EGE Aggregate Log Change Notices: Not Supported 00:21:12.017 Normal NVM Subsystem Shutdown event: Not Supported 00:21:12.017 Zone Descriptor Change Notices: Not Supported 00:21:12.017 Discovery Log Change Notices: Not Supported 00:21:12.017 Controller Attributes 00:21:12.017 128-bit Host Identifier: Not Supported 00:21:12.017 Non-Operational Permissive Mode: Not Supported 00:21:12.017 NVM Sets: Not Supported 00:21:12.017 Read Recovery Levels: Not Supported 00:21:12.017 Endurance Groups: Not Supported 00:21:12.017 Predictable Latency Mode: Not Supported 00:21:12.017 Traffic Based Keep Alive: Not Supported 00:21:12.017 Namespace Granularity: Not Supported 00:21:12.017 SQ Associations: Not Supported 00:21:12.017 UUID List: Not Supported 00:21:12.017 Multi-Domain Subsystem: Not Supported 00:21:12.017 Fixed Capacity Management: Not Supported 00:21:12.017 Variable Capacity Management: Not Supported 00:21:12.017 Delete Endurance Group: Not Supported 00:21:12.017 Delete NVM Set: Not Supported 00:21:12.017 Extended LBA Formats Supported: Supported 00:21:12.017 Flexible Data Placement Supported: Not Supported 00:21:12.017 00:21:12.017 Controller Memory Buffer Support 00:21:12.017 ================================ 00:21:12.017 Supported: No 00:21:12.017 00:21:12.017 Persistent Memory Region Support 00:21:12.017 ================================ 00:21:12.017 Supported: No 00:21:12.017 00:21:12.017 Admin Command Set Attributes 00:21:12.017 ============================ 00:21:12.017 Security Send/Receive: Not Supported 00:21:12.017 Format NVM: Supported 00:21:12.017 Firmware Activate/Download: Not Supported 00:21:12.017 Namespace Management: Supported 00:21:12.017 Device Self-Test: Not Supported 00:21:12.017 Directives: Supported 00:21:12.017 NVMe-MI: Not Supported 00:21:12.017 Virtualization Management: Not Supported 00:21:12.017 Doorbell Buffer Config: Supported 00:21:12.017 Get LBA Status Capability: Not Supported 00:21:12.017 Command & Feature Lockdown Capability: Not Supported 00:21:12.017 Abort Command Limit: 4 00:21:12.017 Async Event Request Limit: 4 00:21:12.017 Number of Firmware Slots: N/A 00:21:12.017 Firmware Slot 1 Read-Only: N/A 00:21:12.017 Firmware Activation Without Reset: N/A 00:21:12.017 Multiple Update Detection Support: N/A 00:21:12.017 Firmware Update Granularity: No Information Provided 00:21:12.017 Per-Namespace SMART Log: Yes 00:21:12.017 Asymmetric Namespace Access Log Page: Not Supported 00:21:12.017 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:21:12.017 Command Effects Log Page: Supported 00:21:12.017 Get Log Page Extended Data: Supported 00:21:12.017 Telemetry Log Pages: Not Supported 00:21:12.017 Persistent Event Log Pages: Not Supported 00:21:12.017 Supported Log Pages Log Page: May Support 00:21:12.017 Commands Supported & Effects Log Page: Not Supported 00:21:12.017 Feature Identifiers & Effects Log Page: May Support 00:21:12.017 NVMe-MI Commands & Effects Log Page: May Support 00:21:12.017 Data Area 4 for Telemetry Log: Not Supported 00:21:12.017 Error Log Page Entries Supported: 1 00:21:12.017 Keep Alive: Not Supported 00:21:12.017 00:21:12.017 NVM Command Set Attributes 00:21:12.017 ========================== 00:21:12.017 Submission Queue Entry Size 00:21:12.017 Max: 64 00:21:12.017 Min: 64 00:21:12.017 Completion Queue Entry Size 00:21:12.017 Max: 16
00:21:12.017 Min: 16 00:21:12.017 Number of Namespaces: 256 00:21:12.017 Compare Command: Supported 00:21:12.017 Write Uncorrectable Command: Not Supported 00:21:12.017 Dataset Management Command: Supported 00:21:12.017 Write Zeroes Command: Supported 00:21:12.017 Set Features Save Field: Supported 00:21:12.017 Reservations: Not Supported 00:21:12.017 Timestamp: Supported 00:21:12.017 Copy: Supported 00:21:12.017 Volatile Write Cache: Present 00:21:12.017 Atomic Write Unit (Normal): 1 00:21:12.017 Atomic Write Unit (PFail): 1 00:21:12.017 Atomic Compare & Write Unit: 1 00:21:12.017 Fused Compare & Write: Not Supported 00:21:12.017 Scatter-Gather List 00:21:12.017 SGL Command Set: Supported 00:21:12.017 SGL Keyed: Not Supported 00:21:12.017 SGL Bit Bucket Descriptor: Not Supported 00:21:12.017 SGL Metadata Pointer: Not Supported 00:21:12.017 Oversized SGL: Not Supported 00:21:12.017 SGL Metadata Address: Not Supported 00:21:12.017 SGL Offset: Not Supported 00:21:12.017 Transport SGL Data Block: Not Supported 00:21:12.017 Replay Protected Memory Block: Not Supported 00:21:12.017 00:21:12.017 Firmware Slot Information 00:21:12.017 ========================= 00:21:12.017 Active slot: 1 00:21:12.017 Slot 1 Firmware Revision: 1.0 00:21:12.017 00:21:12.017 00:21:12.017 Commands Supported and Effects 00:21:12.017 ============================== 00:21:12.017 Admin Commands 00:21:12.017 -------------- 00:21:12.017 Delete I/O Submission Queue (00h): Supported 00:21:12.017 Create I/O Submission Queue (01h): Supported 00:21:12.017 Get Log Page (02h): Supported 00:21:12.017 Delete I/O Completion Queue (04h): Supported 00:21:12.017 Create I/O Completion Queue (05h): Supported 00:21:12.017 Identify (06h): Supported 00:21:12.017 Abort (08h): Supported 00:21:12.017 Set Features (09h): Supported 00:21:12.017 Get Features (0Ah): Supported 00:21:12.017 Asynchronous Event Request (0Ch): Supported 00:21:12.017 Namespace Attachment (15h): Supported NS-Inventory-Change 00:21:12.017 Directive Send (19h): Supported 00:21:12.017 Directive Receive (1Ah): Supported 00:21:12.017 Virtualization Management (1Ch): Supported 00:21:12.017 Doorbell Buffer Config (7Ch): Supported 00:21:12.017 Format NVM (80h): Supported LBA-Change 00:21:12.017 I/O Commands 00:21:12.017 ------------ 00:21:12.017 Flush (00h): Supported LBA-Change 00:21:12.017 Write (01h): Supported LBA-Change 00:21:12.017 Read (02h): Supported 00:21:12.017 Compare (05h): Supported 00:21:12.017 Write Zeroes (08h): Supported LBA-Change 00:21:12.017 Dataset Management (09h): Supported LBA-Change 00:21:12.017 Unknown (0Ch): Supported 00:21:12.017 Unknown (12h): Supported 00:21:12.017 Copy (19h): Supported LBA-Change 00:21:12.017 Unknown (1Dh): Supported LBA-Change 00:21:12.017 00:21:12.017 Error Log 00:21:12.017 ========= 00:21:12.017 00:21:12.017 Arbitration 00:21:12.017 =========== 00:21:12.017 Arbitration Burst: no limit 00:21:12.017 00:21:12.017 Power Management 00:21:12.017 ================ 00:21:12.017 Number of Power States: 1 00:21:12.017 Current Power State: Power State #0 00:21:12.017 Power State #0: 00:21:12.018 Max Power: 25.00 W 00:21:12.018 Non-Operational State: Operational 00:21:12.018 Entry Latency: 16 microseconds 00:21:12.018 Exit Latency: 4 microseconds 00:21:12.018 Relative Read Throughput: 0 00:21:12.018 Relative Read Latency: 0 00:21:12.018 Relative Write Throughput: 0 00:21:12.018 Relative Write Latency: 0 00:21:12.018 Idle Power: Not Reported 00:21:12.018 Active Power: Not Reported 00:21:12.018 Non-Operational Permissive Mode: Not Supported 
00:21:12.018 00:21:12.018 Health Information 00:21:12.018 ================== 00:21:12.018 Critical Warnings: 00:21:12.018 Available Spare Space: OK 00:21:12.018 Temperature: OK 00:21:12.018 Device Reliability: OK 00:21:12.018 Read Only: No 00:21:12.018 Volatile Memory Backup: OK 00:21:12.018 Current Temperature: 323 Kelvin (50 Celsius) 00:21:12.018 Temperature Threshold: 343 Kelvin (70 Celsius) 00:21:12.018 Available Spare: 0% 00:21:12.018 Available Spare Threshold: 0% 00:21:12.018 Life Percentage Used: 0% 00:21:12.018 Data Units Read: 2279 00:21:12.018 Data Units Written: 2066 00:21:12.018 Host Read Commands: 122142 00:21:12.018 Host Write Commands: 120411 00:21:12.018 Controller Busy Time: 0 minutes 00:21:12.018 Power Cycles: 0 00:21:12.018 Power On Hours: 0 hours 00:21:12.018 Unsafe Shutdowns: 0 00:21:12.018 Unrecoverable Media Errors: 0 00:21:12.018 Lifetime Error Log Entries: 0 00:21:12.018 Warning Temperature Time: 0 minutes 00:21:12.018 Critical Temperature Time: 0 minutes 00:21:12.018 00:21:12.018 Number of Queues 00:21:12.018 ================ 00:21:12.018 Number of I/O Submission Queues: 64 00:21:12.018 Number of I/O Completion Queues: 64 00:21:12.018 00:21:12.018 ZNS Specific Controller Data 00:21:12.018 ============================ 00:21:12.018 Zone Append Size Limit: 0 00:21:12.018 00:21:12.018 00:21:12.018 Active Namespaces 00:21:12.018 ================= 00:21:12.018 Namespace ID:1 00:21:12.018 Error Recovery Timeout: Unlimited 00:21:12.018 Command Set Identifier: NVM (00h) 00:21:12.018 Deallocate: Supported 00:21:12.018 Deallocated/Unwritten Error: Supported 00:21:12.018 Deallocated Read Value: All 0x00 00:21:12.018 Deallocate in Write Zeroes: Not Supported 00:21:12.018 Deallocated Guard Field: 0xFFFF 00:21:12.018 Flush: Supported 00:21:12.018 Reservation: Not Supported 00:21:12.018 Namespace Sharing Capabilities: Private 00:21:12.018 Size (in LBAs): 1048576 (4GiB) 00:21:12.018 Capacity (in LBAs): 1048576 (4GiB) 00:21:12.018 Utilization (in LBAs): 1048576 (4GiB) 00:21:12.018 Thin Provisioning: Not Supported 00:21:12.018 Per-NS Atomic Units: No 00:21:12.018 Maximum Single Source Range Length: 128 00:21:12.018 Maximum Copy Length: 128 00:21:12.018 Maximum Source Range Count: 128 00:21:12.018 NGUID/EUI64 Never Reused: No 00:21:12.018 Namespace Write Protected: No 00:21:12.018 Number of LBA Formats: 8 00:21:12.018 Current LBA Format: LBA Format #04 00:21:12.018 LBA Format #00: Data Size: 512 Metadata Size: 0 00:21:12.018 LBA Format #01: Data Size: 512 Metadata Size: 8 00:21:12.018 LBA Format #02: Data Size: 512 Metadata Size: 16 00:21:12.018 LBA Format #03: Data Size: 512 Metadata Size: 64 00:21:12.018 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:21:12.018 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:21:12.018 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:21:12.018 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:21:12.018 00:21:12.018 NVM Specific Namespace Data 00:21:12.018 =========================== 00:21:12.018 Logical Block Storage Tag Mask: 0 00:21:12.018 Protection Information Capabilities: 00:21:12.018 16b Guard Protection Information Storage Tag Support: No 00:21:12.018 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:21:12.018 Storage Tag Check Read Support: No 00:21:12.018 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:21:12.018 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:21:12.018 Extended LBA Format #02: Storage Tag 
Size: 0 , Protection Information Format: 16b Guard PI 00:21:12.018 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:21:12.018 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:21:12.018 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:21:12.018 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:21:12.018 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:21:12.018 Namespace ID:2 00:21:12.018 Error Recovery Timeout: Unlimited 00:21:12.018 Command Set Identifier: NVM (00h) 00:21:12.018 Deallocate: Supported 00:21:12.018 Deallocated/Unwritten Error: Supported 00:21:12.018 Deallocated Read Value: All 0x00 00:21:12.018 Deallocate in Write Zeroes: Not Supported 00:21:12.018 Deallocated Guard Field: 0xFFFF 00:21:12.018 Flush: Supported 00:21:12.018 Reservation: Not Supported 00:21:12.018 Namespace Sharing Capabilities: Private 00:21:12.018 Size (in LBAs): 1048576 (4GiB) 00:21:12.018 Capacity (in LBAs): 1048576 (4GiB) 00:21:12.018 Utilization (in LBAs): 1048576 (4GiB) 00:21:12.018 Thin Provisioning: Not Supported 00:21:12.018 Per-NS Atomic Units: No 00:21:12.018 Maximum Single Source Range Length: 128 00:21:12.018 Maximum Copy Length: 128 00:21:12.018 Maximum Source Range Count: 128 00:21:12.018 NGUID/EUI64 Never Reused: No 00:21:12.018 Namespace Write Protected: No 00:21:12.018 Number of LBA Formats: 8 00:21:12.018 Current LBA Format: LBA Format #04 00:21:12.018 LBA Format #00: Data Size: 512 Metadata Size: 0 00:21:12.018 LBA Format #01: Data Size: 512 Metadata Size: 8 00:21:12.018 LBA Format #02: Data Size: 512 Metadata Size: 16 00:21:12.018 LBA Format #03: Data Size: 512 Metadata Size: 64 00:21:12.018 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:21:12.018 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:21:12.018 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:21:12.018 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:21:12.018 00:21:12.018 NVM Specific Namespace Data 00:21:12.018 =========================== 00:21:12.018 Logical Block Storage Tag Mask: 0 00:21:12.018 Protection Information Capabilities: 00:21:12.018 16b Guard Protection Information Storage Tag Support: No 00:21:12.018 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:21:12.018 Storage Tag Check Read Support: No 00:21:12.018 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:21:12.018 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:21:12.018 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:21:12.018 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:21:12.018 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:21:12.018 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:21:12.018 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:21:12.018 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:21:12.018 Namespace ID:3 00:21:12.018 Error Recovery Timeout: Unlimited 00:21:12.018 Command Set Identifier: NVM (00h) 00:21:12.018 Deallocate: Supported 00:21:12.018 Deallocated/Unwritten Error: Supported 00:21:12.018 Deallocated Read 
Value: All 0x00 00:21:12.018 Deallocate in Write Zeroes: Not Supported 00:21:12.018 Deallocated Guard Field: 0xFFFF 00:21:12.018 Flush: Supported 00:21:12.018 Reservation: Not Supported 00:21:12.018 Namespace Sharing Capabilities: Private 00:21:12.018 Size (in LBAs): 1048576 (4GiB) 00:21:12.018 Capacity (in LBAs): 1048576 (4GiB) 00:21:12.018 Utilization (in LBAs): 1048576 (4GiB) 00:21:12.018 Thin Provisioning: Not Supported 00:21:12.018 Per-NS Atomic Units: No 00:21:12.018 Maximum Single Source Range Length: 128 00:21:12.018 Maximum Copy Length: 128 00:21:12.018 Maximum Source Range Count: 128 00:21:12.018 NGUID/EUI64 Never Reused: No 00:21:12.018 Namespace Write Protected: No 00:21:12.018 Number of LBA Formats: 8 00:21:12.018 Current LBA Format: LBA Format #04 00:21:12.019 LBA Format #00: Data Size: 512 Metadata Size: 0 00:21:12.019 LBA Format #01: Data Size: 512 Metadata Size: 8 00:21:12.019 LBA Format #02: Data Size: 512 Metadata Size: 16 00:21:12.019 LBA Format #03: Data Size: 512 Metadata Size: 64 00:21:12.019 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:21:12.019 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:21:12.019 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:21:12.019 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:21:12.019 00:21:12.019 NVM Specific Namespace Data 00:21:12.019 =========================== 00:21:12.019 Logical Block Storage Tag Mask: 0 00:21:12.019 Protection Information Capabilities: 00:21:12.019 16b Guard Protection Information Storage Tag Support: No 00:21:12.019 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:21:12.019 Storage Tag Check Read Support: No 00:21:12.019 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:21:12.019 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:21:12.019 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:21:12.019 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:21:12.019 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:21:12.019 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:21:12.019 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:21:12.019 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:21:12.019 20:19:07 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:21:12.019 20:19:07 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:21:12.276 ===================================================== 00:21:12.276 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:21:12.276 ===================================================== 00:21:12.276 Controller Capabilities/Features 00:21:12.276 ================================ 00:21:12.276 Vendor ID: 1b36 00:21:12.276 Subsystem Vendor ID: 1af4 00:21:12.276 Serial Number: 12343 00:21:12.276 Model Number: QEMU NVMe Ctrl 00:21:12.276 Firmware Version: 8.0.0 00:21:12.276 Recommended Arb Burst: 6 00:21:12.276 IEEE OUI Identifier: 00 54 52 00:21:12.276 Multi-path I/O 00:21:12.276 May have multiple subsystem ports: No 00:21:12.276 May have multiple controllers: Yes 00:21:12.276 Associated with SR-IOV VF: No 00:21:12.276 Max Data Transfer Size: 524288 00:21:12.276 Max Number of Namespaces: 
256 00:21:12.276 Max Number of I/O Queues: 64 00:21:12.276 NVMe Specification Version (VS): 1.4 00:21:12.276 NVMe Specification Version (Identify): 1.4 00:21:12.276 Maximum Queue Entries: 2048 00:21:12.276 Contiguous Queues Required: Yes 00:21:12.276 Arbitration Mechanisms Supported 00:21:12.276 Weighted Round Robin: Not Supported 00:21:12.276 Vendor Specific: Not Supported 00:21:12.276 Reset Timeout: 7500 ms 00:21:12.276 Doorbell Stride: 4 bytes 00:21:12.276 NVM Subsystem Reset: Not Supported 00:21:12.276 Command Sets Supported 00:21:12.276 NVM Command Set: Supported 00:21:12.276 Boot Partition: Not Supported 00:21:12.276 Memory Page Size Minimum: 4096 bytes 00:21:12.276 Memory Page Size Maximum: 65536 bytes 00:21:12.276 Persistent Memory Region: Not Supported 00:21:12.276 Optional Asynchronous Events Supported 00:21:12.276 Namespace Attribute Notices: Supported 00:21:12.276 Firmware Activation Notices: Not Supported 00:21:12.276 ANA Change Notices: Not Supported 00:21:12.276 PLE Aggregate Log Change Notices: Not Supported 00:21:12.276 LBA Status Info Alert Notices: Not Supported 00:21:12.276 EGE Aggregate Log Change Notices: Not Supported 00:21:12.276 Normal NVM Subsystem Shutdown event: Not Supported 00:21:12.276 Zone Descriptor Change Notices: Not Supported 00:21:12.276 Discovery Log Change Notices: Not Supported 00:21:12.276 Controller Attributes 00:21:12.276 128-bit Host Identifier: Not Supported 00:21:12.276 Non-Operational Permissive Mode: Not Supported 00:21:12.276 NVM Sets: Not Supported 00:21:12.276 Read Recovery Levels: Not Supported 00:21:12.276 Endurance Groups: Supported 00:21:12.276 Predictable Latency Mode: Not Supported 00:21:12.276 Traffic Based Keep Alive: Not Supported 00:21:12.276 Namespace Granularity: Not Supported 00:21:12.276 SQ Associations: Not Supported 00:21:12.276 UUID List: Not Supported 00:21:12.276 Multi-Domain Subsystem: Not Supported 00:21:12.276 Fixed Capacity Management: Not Supported 00:21:12.276 Variable Capacity Management: Not Supported 00:21:12.276 Delete Endurance Group: Not Supported 00:21:12.276 Delete NVM Set: Not Supported 00:21:12.276 Extended LBA Formats Supported: Supported 00:21:12.276 Flexible Data Placement Supported: Supported 00:21:12.276 00:21:12.276 Controller Memory Buffer Support 00:21:12.276 ================================ 00:21:12.276 Supported: No 00:21:12.276 00:21:12.276 Persistent Memory Region Support 00:21:12.276 ================================ 00:21:12.276 Supported: No 00:21:12.276 00:21:12.276 Admin Command Set Attributes 00:21:12.276 ============================ 00:21:12.276 Security Send/Receive: Not Supported 00:21:12.276 Format NVM: Supported 00:21:12.276 Firmware Activate/Download: Not Supported 00:21:12.276 Namespace Management: Supported 00:21:12.276 Device Self-Test: Not Supported 00:21:12.276 Directives: Supported 00:21:12.276 NVMe-MI: Not Supported 00:21:12.276 Virtualization Management: Not Supported 00:21:12.276 Doorbell Buffer Config: Supported 00:21:12.276 Get LBA Status Capability: Not Supported 00:21:12.276 Command & Feature Lockdown Capability: Not Supported 00:21:12.276 Abort Command Limit: 4 00:21:12.276 Async Event Request Limit: 4 00:21:12.276 Number of Firmware Slots: N/A 00:21:12.276 Firmware Slot 1 Read-Only: N/A 00:21:12.276 Firmware Activation Without Reset: N/A 00:21:12.276 Multiple Update Detection Support: N/A 00:21:12.276 Firmware Update Granularity: No Information Provided 00:21:12.276 Per-Namespace SMART Log: Yes 00:21:12.276 Asymmetric Namespace Access Log Page: Not Supported
00:21:12.276 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:21:12.276 Command Effects Log Page: Supported 00:21:12.276 Get Log Page Extended Data: Supported 00:21:12.276 Telemetry Log Pages: Not Supported 00:21:12.276 Persistent Event Log Pages: Not Supported 00:21:12.276 Supported Log Pages Log Page: May Support 00:21:12.276 Commands Supported & Effects Log Page: Not Supported 00:21:12.276 Feature Identifiers & Effects Log Page: May Support 00:21:12.276 NVMe-MI Commands & Effects Log Page: May Support 00:21:12.276 Data Area 4 for Telemetry Log: Not Supported 00:21:12.276 Error Log Page Entries Supported: 1 00:21:12.276 Keep Alive: Not Supported 00:21:12.276 00:21:12.276 NVM Command Set Attributes 00:21:12.276 ========================== 00:21:12.276 Submission Queue Entry Size 00:21:12.276 Max: 64 00:21:12.277 Min: 64 00:21:12.277 Completion Queue Entry Size 00:21:12.277 Max: 16 00:21:12.277 Min: 16 00:21:12.277 Number of Namespaces: 256 00:21:12.277 Compare Command: Supported 00:21:12.277 Write Uncorrectable Command: Not Supported 00:21:12.277 Dataset Management Command: Supported 00:21:12.277 Write Zeroes Command: Supported 00:21:12.277 Set Features Save Field: Supported 00:21:12.277 Reservations: Not Supported 00:21:12.277 Timestamp: Supported 00:21:12.277 Copy: Supported 00:21:12.277 Volatile Write Cache: Present 00:21:12.277 Atomic Write Unit (Normal): 1 00:21:12.277 Atomic Write Unit (PFail): 1 00:21:12.277 Atomic Compare & Write Unit: 1 00:21:12.277 Fused Compare & Write: Not Supported 00:21:12.277 Scatter-Gather List 00:21:12.277 SGL Command Set: Supported 00:21:12.277 SGL Keyed: Not Supported 00:21:12.277 SGL Bit Bucket Descriptor: Not Supported 00:21:12.277 SGL Metadata Pointer: Not Supported 00:21:12.277 Oversized SGL: Not Supported 00:21:12.277 SGL Metadata Address: Not Supported 00:21:12.277 SGL Offset: Not Supported 00:21:12.277 Transport SGL Data Block: Not Supported 00:21:12.277 Replay Protected Memory Block: Not Supported 00:21:12.277 00:21:12.277 Firmware Slot Information 00:21:12.277 ========================= 00:21:12.277 Active slot: 1 00:21:12.277 Slot 1 Firmware Revision: 1.0 00:21:12.277 00:21:12.277 00:21:12.277 Commands Supported and Effects 00:21:12.277 ============================== 00:21:12.277 Admin Commands 00:21:12.277 -------------- 00:21:12.277 Delete I/O Submission Queue (00h): Supported 00:21:12.277 Create I/O Submission Queue (01h): Supported 00:21:12.277 Get Log Page (02h): Supported 00:21:12.277 Delete I/O Completion Queue (04h): Supported 00:21:12.277 Create I/O Completion Queue (05h): Supported 00:21:12.277 Identify (06h): Supported 00:21:12.277 Abort (08h): Supported 00:21:12.277 Set Features (09h): Supported 00:21:12.277 Get Features (0Ah): Supported 00:21:12.277 Asynchronous Event Request (0Ch): Supported 00:21:12.277 Namespace Attachment (15h): Supported NS-Inventory-Change 00:21:12.277 Directive Send (19h): Supported 00:21:12.277 Directive Receive (1Ah): Supported 00:21:12.277 Virtualization Management (1Ch): Supported 00:21:12.277 Doorbell Buffer Config (7Ch): Supported 00:21:12.277 Format NVM (80h): Supported LBA-Change 00:21:12.277 I/O Commands 00:21:12.277 ------------ 00:21:12.277 Flush (00h): Supported LBA-Change 00:21:12.277 Write (01h): Supported LBA-Change 00:21:12.277 Read (02h): Supported 00:21:12.277 Compare (05h): Supported 00:21:12.277 Write Zeroes (08h): Supported LBA-Change 00:21:12.277 Dataset Management (09h): Supported LBA-Change 00:21:12.277 Unknown (0Ch): Supported 00:21:12.277 Unknown (12h): Supported 00:21:12.277 Copy
(19h): Supported LBA-Change 00:21:12.277 Unknown (1Dh): Supported LBA-Change 00:21:12.277 00:21:12.277 Error Log 00:21:12.277 ========= 00:21:12.277 00:21:12.277 Arbitration 00:21:12.277 =========== 00:21:12.277 Arbitration Burst: no limit 00:21:12.277 00:21:12.277 Power Management 00:21:12.277 ================ 00:21:12.277 Number of Power States: 1 00:21:12.277 Current Power State: Power State #0 00:21:12.277 Power State #0: 00:21:12.277 Max Power: 25.00 W 00:21:12.277 Non-Operational State: Operational 00:21:12.277 Entry Latency: 16 microseconds 00:21:12.277 Exit Latency: 4 microseconds 00:21:12.277 Relative Read Throughput: 0 00:21:12.277 Relative Read Latency: 0 00:21:12.277 Relative Write Throughput: 0 00:21:12.277 Relative Write Latency: 0 00:21:12.277 Idle Power: Not Reported 00:21:12.277 Active Power: Not Reported 00:21:12.277 Non-Operational Permissive Mode: Not Supported 00:21:12.277 00:21:12.277 Health Information 00:21:12.277 ================== 00:21:12.277 Critical Warnings: 00:21:12.277 Available Spare Space: OK 00:21:12.277 Temperature: OK 00:21:12.277 Device Reliability: OK 00:21:12.277 Read Only: No 00:21:12.277 Volatile Memory Backup: OK 00:21:12.277 Current Temperature: 323 Kelvin (50 Celsius) 00:21:12.277 Temperature Threshold: 343 Kelvin (70 Celsius) 00:21:12.277 Available Spare: 0% 00:21:12.277 Available Spare Threshold: 0% 00:21:12.277 Life Percentage Used: 0% 00:21:12.277 Data Units Read: 880 00:21:12.277 Data Units Written: 809 00:21:12.277 Host Read Commands: 41809 00:21:12.277 Host Write Commands: 41233 00:21:12.277 Controller Busy Time: 0 minutes 00:21:12.277 Power Cycles: 0 00:21:12.277 Power On Hours: 0 hours 00:21:12.277 Unsafe Shutdowns: 0 00:21:12.277 Unrecoverable Media Errors: 0 00:21:12.277 Lifetime Error Log Entries: 0 00:21:12.277 Warning Temperature Time: 0 minutes 00:21:12.277 Critical Temperature Time: 0 minutes 00:21:12.277 00:21:12.277 Number of Queues 00:21:12.277 ================ 00:21:12.277 Number of I/O Submission Queues: 64 00:21:12.277 Number of I/O Completion Queues: 64 00:21:12.277 00:21:12.277 ZNS Specific Controller Data 00:21:12.277 ============================ 00:21:12.277 Zone Append Size Limit: 0 00:21:12.277 00:21:12.277 00:21:12.277 Active Namespaces 00:21:12.277 ================= 00:21:12.277 Namespace ID:1 00:21:12.277 Error Recovery Timeout: Unlimited 00:21:12.277 Command Set Identifier: NVM (00h) 00:21:12.277 Deallocate: Supported 00:21:12.277 Deallocated/Unwritten Error: Supported 00:21:12.277 Deallocated Read Value: All 0x00 00:21:12.277 Deallocate in Write Zeroes: Not Supported 00:21:12.277 Deallocated Guard Field: 0xFFFF 00:21:12.277 Flush: Supported 00:21:12.277 Reservation: Not Supported 00:21:12.277 Namespace Sharing Capabilities: Multiple Controllers 00:21:12.277 Size (in LBAs): 262144 (1GiB) 00:21:12.277 Capacity (in LBAs): 262144 (1GiB) 00:21:12.277 Utilization (in LBAs): 262144 (1GiB) 00:21:12.277 Thin Provisioning: Not Supported 00:21:12.277 Per-NS Atomic Units: No 00:21:12.277 Maximum Single Source Range Length: 128 00:21:12.277 Maximum Copy Length: 128 00:21:12.277 Maximum Source Range Count: 128 00:21:12.277 NGUID/EUI64 Never Reused: No 00:21:12.277 Namespace Write Protected: No 00:21:12.277 Endurance group ID: 1 00:21:12.277 Number of LBA Formats: 8 00:21:12.277 Current LBA Format: LBA Format #04 00:21:12.277 LBA Format #00: Data Size: 512 Metadata Size: 0 00:21:12.277 LBA Format #01: Data Size: 512 Metadata Size: 8 00:21:12.277 LBA Format #02: Data Size: 512 Metadata Size: 16 00:21:12.277 LBA Format #03: Data 
Size: 512 Metadata Size: 64 00:21:12.277 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:21:12.277 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:21:12.277 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:21:12.277 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:21:12.277 00:21:12.277 Get Feature FDP: 00:21:12.277 ================ 00:21:12.277 Enabled: Yes 00:21:12.277 FDP configuration index: 0 00:21:12.277 00:21:12.277 FDP configurations log page 00:21:12.277 =========================== 00:21:12.277 Number of FDP configurations: 1 00:21:12.277 Version: 0 00:21:12.277 Size: 112 00:21:12.277 FDP Configuration Descriptor: 0 00:21:12.277 Descriptor Size: 96 00:21:12.277 Reclaim Group Identifier format: 2 00:21:12.277 FDP Volatile Write Cache: Not Present 00:21:12.277 FDP Configuration: Valid 00:21:12.277 Vendor Specific Size: 0 00:21:12.277 Number of Reclaim Groups: 2 00:21:12.277 Number of Reclaim Unit Handles: 8 00:21:12.277 Max Placement Identifiers: 128 00:21:12.277 Number of Namespaces Supported: 256 00:21:12.277 Reclaim Unit Nominal Size: 6000000 bytes 00:21:12.277 Estimated Reclaim Unit Time Limit: Not Reported 00:21:12.277 RUH Desc #000: RUH Type: Initially Isolated 00:21:12.277 RUH Desc #001: RUH Type: Initially Isolated 00:21:12.277 RUH Desc #002: RUH Type: Initially Isolated 00:21:12.277 RUH Desc #003: RUH Type: Initially Isolated 00:21:12.277 RUH Desc #004: RUH Type: Initially Isolated 00:21:12.277 RUH Desc #005: RUH Type: Initially Isolated 00:21:12.277 RUH Desc #006: RUH Type: Initially Isolated 00:21:12.277 RUH Desc #007: RUH Type: Initially Isolated 00:21:12.277 00:21:12.277 FDP reclaim unit handle usage log page 00:21:12.277 ====================================== 00:21:12.277 Number of Reclaim Unit Handles: 8 00:21:12.277 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:21:12.277 RUH Usage Desc #001: RUH Attributes: Unused 00:21:12.277 RUH Usage Desc #002: RUH Attributes: Unused 00:21:12.277 RUH Usage Desc #003: RUH Attributes: Unused 00:21:12.277 RUH Usage Desc #004: RUH Attributes: Unused 00:21:12.277 RUH Usage Desc #005: RUH Attributes: Unused 00:21:12.277 RUH Usage Desc #006: RUH Attributes: Unused 00:21:12.277 RUH Usage Desc #007: RUH Attributes: Unused 00:21:12.277 00:21:12.277 FDP statistics log page 00:21:12.277 ======================= 00:21:12.277 Host bytes with metadata written: 513646592 00:21:12.277 Media bytes with metadata written: 513703936 00:21:12.277 Media bytes erased: 0 00:21:12.277 00:21:12.277 FDP events log page 00:21:12.277 =================== 00:21:12.277 Number of FDP events: 0 00:21:12.277 00:21:12.277 NVM Specific Namespace Data 00:21:12.277 =========================== 00:21:12.278 Logical Block Storage Tag Mask: 0 00:21:12.278 Protection Information Capabilities: 00:21:12.278 16b Guard Protection Information Storage Tag Support: No 00:21:12.278 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:21:12.278 Storage Tag Check Read Support: No 00:21:12.278 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:21:12.278 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:21:12.278 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:21:12.278 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:21:12.278 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:21:12.278 Extended LBA Format #05:
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:21:12.278 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:21:12.278 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:21:12.278 00:21:12.278 real 0m1.134s 00:21:12.278 user 0m0.390s 00:21:12.278 sys 0m0.511s 00:21:12.278 20:19:07 nvme.nvme_identify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:12.278 20:19:07 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:21:12.278 ************************************ 00:21:12.278 END TEST nvme_identify 00:21:12.278 ************************************ 00:21:12.278 20:19:07 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:21:12.278 20:19:07 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:21:12.278 20:19:07 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:12.278 20:19:07 nvme -- common/autotest_common.sh@10 -- # set +x 00:21:12.278 ************************************ 00:21:12.278 START TEST nvme_perf 00:21:12.278 ************************************ 00:21:12.278 20:19:07 nvme.nvme_perf -- common/autotest_common.sh@1125 -- # nvme_perf 00:21:12.278 20:19:07 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:21:13.652 Initializing NVMe Controllers 00:21:13.652 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:21:13.652 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:21:13.652 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:21:13.652 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:21:13.652 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:21:13.652 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:21:13.652 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:21:13.652 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:21:13.652 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:21:13.652 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:21:13.652 Initialization complete. Launching workers. 
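In the results table that follows, the columns are internally consistent: with the 12288-byte I/O size passed to spdk_nvme_perf, MiB/s equals IOPS * 12288 / 2^20, and with queue depth 128, Little's law (average latency = queue depth / IOPS) predicts the reported average to within the small submission overhead. A minimal cross-check of the first device's row; the awk helper is illustrative only and not part of the autotest scripts:

# Illustrative cross-check of the PCIE (0000:00:10.0) NSID 1 row in the table below.
awk 'BEGIN {
    iops = 17810.67
    printf "throughput: %.2f MiB/s\n", iops * 12288 / 2^20   # table reports 208.72 MiB/s
    printf "avg latency: %.0f us\n", 128 / iops * 1e6        # ~7187 us; table reports 7197.28
}'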
00:21:13.652 ======================================================== 00:21:13.652 Latency(us) 00:21:13.652 Device Information : IOPS MiB/s Average min max 00:21:13.652 PCIE (0000:00:10.0) NSID 1 from core 0: 17810.67 208.72 7197.28 5864.21 24821.01 00:21:13.652 PCIE (0000:00:11.0) NSID 1 from core 0: 17810.67 208.72 7190.22 5858.66 23698.12 00:21:13.652 PCIE (0000:00:13.0) NSID 1 from core 0: 17810.67 208.72 7181.36 5899.98 23167.40 00:21:13.652 PCIE (0000:00:12.0) NSID 1 from core 0: 17810.67 208.72 7171.18 5876.41 21791.26 00:21:13.652 PCIE (0000:00:12.0) NSID 2 from core 0: 17810.67 208.72 7161.23 5907.30 20342.85 00:21:13.652 PCIE (0000:00:12.0) NSID 3 from core 0: 17810.67 208.72 7151.82 5925.38 18950.60 00:21:13.652 ======================================================== 00:21:13.652 Total : 106864.03 1252.31 7175.51 5858.66 24821.01 00:21:13.652 00:21:13.652 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:21:13.652 ================================================================================= 00:21:13.652 1.00000% : 6125.095us 00:21:13.652 10.00000% : 6326.745us 00:21:13.652 25.00000% : 6553.600us 00:21:13.652 50.00000% : 6906.486us 00:21:13.652 75.00000% : 7259.372us 00:21:13.652 90.00000% : 8116.382us 00:21:13.652 95.00000% : 9628.751us 00:21:13.652 98.00000% : 10939.471us 00:21:13.652 99.00000% : 11695.655us 00:21:13.652 99.50000% : 16736.886us 00:21:13.652 99.90000% : 24399.557us 00:21:13.652 99.99000% : 24802.855us 00:21:13.652 99.99900% : 24903.680us 00:21:13.652 99.99990% : 24903.680us 00:21:13.652 99.99999% : 24903.680us 00:21:13.652 00:21:13.652 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:21:13.652 ================================================================================= 00:21:13.652 1.00000% : 6175.508us 00:21:13.652 10.00000% : 6377.157us 00:21:13.652 25.00000% : 6604.012us 00:21:13.652 50.00000% : 6906.486us 00:21:13.652 75.00000% : 7259.372us 00:21:13.652 90.00000% : 8116.382us 00:21:13.652 95.00000% : 9628.751us 00:21:13.652 98.00000% : 10838.646us 00:21:13.652 99.00000% : 11594.831us 00:21:13.652 99.50000% : 16736.886us 00:21:13.652 99.90000% : 23391.311us 00:21:13.652 99.99000% : 23693.785us 00:21:13.652 99.99900% : 23794.609us 00:21:13.652 99.99990% : 23794.609us 00:21:13.652 99.99999% : 23794.609us 00:21:13.652 00:21:13.652 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:21:13.652 ================================================================================= 00:21:13.652 1.00000% : 6150.302us 00:21:13.652 10.00000% : 6351.951us 00:21:13.652 25.00000% : 6553.600us 00:21:13.652 50.00000% : 6906.486us 00:21:13.652 75.00000% : 7259.372us 00:21:13.652 90.00000% : 8166.794us 00:21:13.652 95.00000% : 9477.514us 00:21:13.652 98.00000% : 10838.646us 00:21:13.652 99.00000% : 11594.831us 00:21:13.652 99.50000% : 16333.588us 00:21:13.652 99.90000% : 22786.363us 00:21:13.652 99.99000% : 23189.662us 00:21:13.652 99.99900% : 23189.662us 00:21:13.652 99.99990% : 23189.662us 00:21:13.652 99.99999% : 23189.662us 00:21:13.652 00:21:13.652 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:21:13.652 ================================================================================= 00:21:13.652 1.00000% : 6175.508us 00:21:13.652 10.00000% : 6351.951us 00:21:13.652 25.00000% : 6553.600us 00:21:13.652 50.00000% : 6906.486us 00:21:13.652 75.00000% : 7259.372us 00:21:13.652 90.00000% : 8116.382us 00:21:13.652 95.00000% : 9477.514us 00:21:13.652 98.00000% : 10788.234us 00:21:13.652 99.00000% : 
11393.182us 00:21:13.652 99.50000% : 14922.043us 00:21:13.652 99.90000% : 21374.818us 00:21:13.652 99.99000% : 21778.117us 00:21:13.652 99.99900% : 21878.942us 00:21:13.652 99.99990% : 21878.942us 00:21:13.652 99.99999% : 21878.942us 00:21:13.652 00:21:13.652 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:21:13.652 ================================================================================= 00:21:13.652 1.00000% : 6150.302us 00:21:13.652 10.00000% : 6351.951us 00:21:13.652 25.00000% : 6604.012us 00:21:13.652 50.00000% : 6906.486us 00:21:13.652 75.00000% : 7309.785us 00:21:13.652 90.00000% : 8116.382us 00:21:13.652 95.00000% : 9376.689us 00:21:13.652 98.00000% : 10737.822us 00:21:13.652 99.00000% : 11241.945us 00:21:13.652 99.50000% : 13611.323us 00:21:13.652 99.90000% : 19963.274us 00:21:13.652 99.99000% : 20366.572us 00:21:13.652 99.99900% : 20366.572us 00:21:13.652 99.99990% : 20366.572us 00:21:13.652 99.99999% : 20366.572us 00:21:13.652 00:21:13.652 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:21:13.652 ================================================================================= 00:21:13.652 1.00000% : 6175.508us 00:21:13.652 10.00000% : 6351.951us 00:21:13.652 25.00000% : 6553.600us 00:21:13.652 50.00000% : 6906.486us 00:21:13.652 75.00000% : 7309.785us 00:21:13.652 90.00000% : 8116.382us 00:21:13.652 95.00000% : 9427.102us 00:21:13.653 98.00000% : 10687.409us 00:21:13.653 99.00000% : 11393.182us 00:21:13.653 99.50000% : 12351.015us 00:21:13.653 99.90000% : 18551.729us 00:21:13.653 99.99000% : 18955.028us 00:21:13.653 99.99900% : 18955.028us 00:21:13.653 99.99990% : 18955.028us 00:21:13.653 99.99999% : 18955.028us 00:21:13.653 00:21:13.653 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:21:13.653 ============================================================================== 00:21:13.653 Range in us Cumulative IO count 00:21:13.653 5847.828 - 5873.034: 0.0112% ( 2) 00:21:13.653 5873.034 - 5898.240: 0.0336% ( 4) 00:21:13.653 5898.240 - 5923.446: 0.0672% ( 6) 00:21:13.653 5923.446 - 5948.652: 0.1176% ( 9) 00:21:13.653 5948.652 - 5973.858: 0.1848% ( 12) 00:21:13.653 5973.858 - 5999.065: 0.2464% ( 11) 00:21:13.653 5999.065 - 6024.271: 0.3248% ( 14) 00:21:13.653 6024.271 - 6049.477: 0.4088% ( 15) 00:21:13.653 6049.477 - 6074.683: 0.5656% ( 28) 00:21:13.653 6074.683 - 6099.889: 0.8849% ( 57) 00:21:13.653 6099.889 - 6125.095: 1.2825% ( 71) 00:21:13.653 6125.095 - 6150.302: 2.0049% ( 129) 00:21:13.653 6150.302 - 6175.508: 2.9178% ( 163) 00:21:13.653 6175.508 - 6200.714: 4.1667% ( 223) 00:21:13.653 6200.714 - 6225.920: 5.4435% ( 228) 00:21:13.653 6225.920 - 6251.126: 6.8604% ( 253) 00:21:13.653 6251.126 - 6276.332: 8.2213% ( 243) 00:21:13.653 6276.332 - 6301.538: 9.6662% ( 258) 00:21:13.653 6301.538 - 6326.745: 11.1447% ( 264) 00:21:13.653 6326.745 - 6351.951: 12.6904% ( 276) 00:21:13.653 6351.951 - 6377.157: 14.2473% ( 278) 00:21:13.653 6377.157 - 6402.363: 15.8490% ( 286) 00:21:13.653 6402.363 - 6427.569: 17.4283% ( 282) 00:21:13.653 6427.569 - 6452.775: 18.9180% ( 266) 00:21:13.653 6452.775 - 6503.188: 22.1158% ( 571) 00:21:13.653 6503.188 - 6553.600: 25.5208% ( 608) 00:21:13.653 6553.600 - 6604.012: 28.9315% ( 609) 00:21:13.653 6604.012 - 6654.425: 32.6949% ( 672) 00:21:13.653 6654.425 - 6704.837: 36.4191% ( 665) 00:21:13.653 6704.837 - 6755.249: 40.2498% ( 684) 00:21:13.653 6755.249 - 6805.662: 44.0188% ( 673) 00:21:13.653 6805.662 - 6856.074: 47.8999% ( 693) 00:21:13.653 6856.074 - 6906.486: 51.7641% ( 690) 00:21:13.653 
6906.486 - 6956.898: 55.8804% ( 735) 00:21:13.653 6956.898 - 7007.311: 59.7614% ( 693) 00:21:13.653 7007.311 - 7057.723: 63.7545% ( 713) 00:21:13.653 7057.723 - 7108.135: 67.4115% ( 653) 00:21:13.653 7108.135 - 7158.548: 70.6373% ( 576) 00:21:13.653 7158.548 - 7208.960: 73.0959% ( 439) 00:21:13.653 7208.960 - 7259.372: 75.1064% ( 359) 00:21:13.653 7259.372 - 7309.785: 76.7585% ( 295) 00:21:13.653 7309.785 - 7360.197: 78.2482% ( 266) 00:21:13.653 7360.197 - 7410.609: 79.6819% ( 256) 00:21:13.653 7410.609 - 7461.022: 81.0596% ( 246) 00:21:13.653 7461.022 - 7511.434: 82.2861% ( 219) 00:21:13.653 7511.434 - 7561.846: 83.4229% ( 203) 00:21:13.653 7561.846 - 7612.258: 84.3582% ( 167) 00:21:13.653 7612.258 - 7662.671: 85.2711% ( 163) 00:21:13.653 7662.671 - 7713.083: 86.0159% ( 133) 00:21:13.653 7713.083 - 7763.495: 86.7552% ( 132) 00:21:13.653 7763.495 - 7813.908: 87.4104% ( 117) 00:21:13.653 7813.908 - 7864.320: 87.9984% ( 105) 00:21:13.653 7864.320 - 7914.732: 88.5641% ( 101) 00:21:13.653 7914.732 - 7965.145: 89.0569% ( 88) 00:21:13.653 7965.145 - 8015.557: 89.4321% ( 67) 00:21:13.653 8015.557 - 8065.969: 89.7345% ( 54) 00:21:13.653 8065.969 - 8116.382: 90.0370% ( 54) 00:21:13.653 8116.382 - 8166.794: 90.2946% ( 46) 00:21:13.653 8166.794 - 8217.206: 90.5354% ( 43) 00:21:13.653 8217.206 - 8267.618: 90.7650% ( 41) 00:21:13.653 8267.618 - 8318.031: 90.9554% ( 34) 00:21:13.653 8318.031 - 8368.443: 91.1738% ( 39) 00:21:13.653 8368.443 - 8418.855: 91.3418% ( 30) 00:21:13.653 8418.855 - 8469.268: 91.5435% ( 36) 00:21:13.653 8469.268 - 8519.680: 91.7115% ( 30) 00:21:13.653 8519.680 - 8570.092: 91.8907% ( 32) 00:21:13.653 8570.092 - 8620.505: 92.0923% ( 36) 00:21:13.653 8620.505 - 8670.917: 92.2491% ( 28) 00:21:13.653 8670.917 - 8721.329: 92.4171% ( 30) 00:21:13.653 8721.329 - 8771.742: 92.6131% ( 35) 00:21:13.653 8771.742 - 8822.154: 92.7531% ( 25) 00:21:13.653 8822.154 - 8872.566: 92.9491% ( 35) 00:21:13.653 8872.566 - 8922.978: 93.1284% ( 32) 00:21:13.653 8922.978 - 8973.391: 93.2852% ( 28) 00:21:13.653 8973.391 - 9023.803: 93.4644% ( 32) 00:21:13.653 9023.803 - 9074.215: 93.6044% ( 25) 00:21:13.653 9074.215 - 9124.628: 93.7780% ( 31) 00:21:13.653 9124.628 - 9175.040: 93.9068% ( 23) 00:21:13.653 9175.040 - 9225.452: 94.0748% ( 30) 00:21:13.653 9225.452 - 9275.865: 94.2316% ( 28) 00:21:13.653 9275.865 - 9326.277: 94.3548% ( 22) 00:21:13.653 9326.277 - 9376.689: 94.5172% ( 29) 00:21:13.653 9376.689 - 9427.102: 94.6349% ( 21) 00:21:13.653 9427.102 - 9477.514: 94.7581% ( 22) 00:21:13.653 9477.514 - 9527.926: 94.8701% ( 20) 00:21:13.653 9527.926 - 9578.338: 94.9821% ( 20) 00:21:13.653 9578.338 - 9628.751: 95.0829% ( 18) 00:21:13.653 9628.751 - 9679.163: 95.2061% ( 22) 00:21:13.653 9679.163 - 9729.575: 95.3069% ( 18) 00:21:13.653 9729.575 - 9779.988: 95.4413% ( 24) 00:21:13.653 9779.988 - 9830.400: 95.5701% ( 23) 00:21:13.653 9830.400 - 9880.812: 95.7045% ( 24) 00:21:13.653 9880.812 - 9931.225: 95.8389% ( 24) 00:21:13.653 9931.225 - 9981.637: 95.9341% ( 17) 00:21:13.653 9981.637 - 10032.049: 96.0797% ( 26) 00:21:13.653 10032.049 - 10082.462: 96.2310% ( 27) 00:21:13.653 10082.462 - 10132.874: 96.3710% ( 25) 00:21:13.653 10132.874 - 10183.286: 96.4998% ( 23) 00:21:13.653 10183.286 - 10233.698: 96.6286% ( 23) 00:21:13.653 10233.698 - 10284.111: 96.7518% ( 22) 00:21:13.653 10284.111 - 10334.523: 96.8582% ( 19) 00:21:13.653 10334.523 - 10384.935: 96.9870% ( 23) 00:21:13.653 10384.935 - 10435.348: 97.1102% ( 22) 00:21:13.653 10435.348 - 10485.760: 97.2390% ( 23) 00:21:13.653 10485.760 - 10536.172: 97.3230% ( 
15) 00:21:13.653 10536.172 - 10586.585: 97.4014% ( 14) 00:21:13.653 10586.585 - 10636.997: 97.5022% ( 18) 00:21:13.653 10636.997 - 10687.409: 97.5918% ( 16) 00:21:13.653 10687.409 - 10737.822: 97.6871% ( 17) 00:21:13.653 10737.822 - 10788.234: 97.7767% ( 16) 00:21:13.653 10788.234 - 10838.646: 97.8607% ( 15) 00:21:13.653 10838.646 - 10889.058: 97.9615% ( 18) 00:21:13.653 10889.058 - 10939.471: 98.0287% ( 12) 00:21:13.653 10939.471 - 10989.883: 98.1239% ( 17) 00:21:13.653 10989.883 - 11040.295: 98.1967% ( 13) 00:21:13.653 11040.295 - 11090.708: 98.2751% ( 14) 00:21:13.653 11090.708 - 11141.120: 98.3535% ( 14) 00:21:13.653 11141.120 - 11191.532: 98.4151% ( 11) 00:21:13.653 11191.532 - 11241.945: 98.4879% ( 13) 00:21:13.653 11241.945 - 11292.357: 98.5607% ( 13) 00:21:13.653 11292.357 - 11342.769: 98.6223% ( 11) 00:21:13.653 11342.769 - 11393.182: 98.6951% ( 13) 00:21:13.653 11393.182 - 11443.594: 98.7511% ( 10) 00:21:13.653 11443.594 - 11494.006: 98.8295% ( 14) 00:21:13.653 11494.006 - 11544.418: 98.8799% ( 9) 00:21:13.653 11544.418 - 11594.831: 98.9471% ( 12) 00:21:13.653 11594.831 - 11645.243: 98.9975% ( 9) 00:21:13.653 11645.243 - 11695.655: 99.0423% ( 8) 00:21:13.653 11695.655 - 11746.068: 99.0927% ( 9) 00:21:13.653 11746.068 - 11796.480: 99.1375% ( 8) 00:21:13.653 11796.480 - 11846.892: 99.1711% ( 6) 00:21:13.653 11846.892 - 11897.305: 99.1879% ( 3) 00:21:13.653 11897.305 - 11947.717: 99.2047% ( 3) 00:21:13.653 11947.717 - 11998.129: 99.2216% ( 3) 00:21:13.653 11998.129 - 12048.542: 99.2328% ( 2) 00:21:13.653 12048.542 - 12098.954: 99.2496% ( 3) 00:21:13.653 12098.954 - 12149.366: 99.2552% ( 1) 00:21:13.653 12149.366 - 12199.778: 99.2608% ( 1) 00:21:13.653 12199.778 - 12250.191: 99.2776% ( 3) 00:21:13.653 12300.603 - 12351.015: 99.2832% ( 1) 00:21:13.653 15627.815 - 15728.640: 99.2888% ( 1) 00:21:13.653 15728.640 - 15829.465: 99.3000% ( 2) 00:21:13.653 15829.465 - 15930.289: 99.3168% ( 3) 00:21:13.653 15930.289 - 16031.114: 99.3392% ( 4) 00:21:13.653 16031.114 - 16131.938: 99.3728% ( 6) 00:21:13.653 16131.938 - 16232.763: 99.3952% ( 4) 00:21:13.653 16232.763 - 16333.588: 99.4232% ( 5) 00:21:13.653 16333.588 - 16434.412: 99.4400% ( 3) 00:21:13.653 16434.412 - 16535.237: 99.4680% ( 5) 00:21:13.653 16535.237 - 16636.062: 99.4960% ( 5) 00:21:13.653 16636.062 - 16736.886: 99.5240% ( 5) 00:21:13.653 16736.886 - 16837.711: 99.5520% ( 5) 00:21:13.653 16837.711 - 16938.535: 99.5800% ( 5) 00:21:13.653 16938.535 - 17039.360: 99.6024% ( 4) 00:21:13.653 17039.360 - 17140.185: 99.6360% ( 6) 00:21:13.653 17140.185 - 17241.009: 99.6416% ( 1) 00:21:13.653 23088.837 - 23189.662: 99.6584% ( 3) 00:21:13.653 23189.662 - 23290.486: 99.6752% ( 3) 00:21:13.653 23290.486 - 23391.311: 99.6976% ( 4) 00:21:13.653 23391.311 - 23492.135: 99.7200% ( 4) 00:21:13.653 23492.135 - 23592.960: 99.7368% ( 3) 00:21:13.653 23592.960 - 23693.785: 99.7592% ( 4) 00:21:13.653 23693.785 - 23794.609: 99.7760% ( 3) 00:21:13.653 23794.609 - 23895.434: 99.7984% ( 4) 00:21:13.653 23895.434 - 23996.258: 99.8208% ( 4) 00:21:13.653 23996.258 - 24097.083: 99.8488% ( 5) 00:21:13.653 24097.083 - 24197.908: 99.8656% ( 3) 00:21:13.653 24197.908 - 24298.732: 99.8880% ( 4) 00:21:13.653 24298.732 - 24399.557: 99.9048% ( 3) 00:21:13.653 24399.557 - 24500.382: 99.9272% ( 4) 00:21:13.653 24500.382 - 24601.206: 99.9496% ( 4) 00:21:13.653 24601.206 - 24702.031: 99.9664% ( 3) 00:21:13.653 24702.031 - 24802.855: 99.9944% ( 5) 00:21:13.653 24802.855 - 24903.680: 100.0000% ( 1) 00:21:13.653 00:21:13.653 Latency histogram for PCIE (0000:00:11.0) NSID 1 from 
core 0: 00:21:13.653 ============================================================================== 00:21:13.653 Range in us Cumulative IO count 00:21:13.654 5847.828 - 5873.034: 0.0224% ( 4) 00:21:13.654 5873.034 - 5898.240: 0.0392% ( 3) 00:21:13.654 5898.240 - 5923.446: 0.0504% ( 2) 00:21:13.654 5923.446 - 5948.652: 0.0672% ( 3) 00:21:13.654 5948.652 - 5973.858: 0.1008% ( 6) 00:21:13.654 5973.858 - 5999.065: 0.1680% ( 12) 00:21:13.654 5999.065 - 6024.271: 0.2408% ( 13) 00:21:13.654 6024.271 - 6049.477: 0.3080% ( 12) 00:21:13.654 6049.477 - 6074.683: 0.3920% ( 15) 00:21:13.654 6074.683 - 6099.889: 0.4704% ( 14) 00:21:13.654 6099.889 - 6125.095: 0.5880% ( 21) 00:21:13.654 6125.095 - 6150.302: 0.7560% ( 30) 00:21:13.654 6150.302 - 6175.508: 1.0977% ( 61) 00:21:13.654 6175.508 - 6200.714: 1.6353% ( 96) 00:21:13.654 6200.714 - 6225.920: 2.4194% ( 140) 00:21:13.654 6225.920 - 6251.126: 3.3770% ( 171) 00:21:13.654 6251.126 - 6276.332: 4.6819% ( 233) 00:21:13.654 6276.332 - 6301.538: 6.0148% ( 238) 00:21:13.654 6301.538 - 6326.745: 7.7229% ( 305) 00:21:13.654 6326.745 - 6351.951: 9.4030% ( 300) 00:21:13.654 6351.951 - 6377.157: 11.0327% ( 291) 00:21:13.654 6377.157 - 6402.363: 12.9368% ( 340) 00:21:13.654 6402.363 - 6427.569: 14.6785% ( 311) 00:21:13.654 6427.569 - 6452.775: 16.5603% ( 336) 00:21:13.654 6452.775 - 6503.188: 20.2397% ( 657) 00:21:13.654 6503.188 - 6553.600: 24.0479% ( 680) 00:21:13.654 6553.600 - 6604.012: 27.9626% ( 699) 00:21:13.654 6604.012 - 6654.425: 32.0957% ( 738) 00:21:13.654 6654.425 - 6704.837: 36.2903% ( 749) 00:21:13.654 6704.837 - 6755.249: 40.6418% ( 777) 00:21:13.654 6755.249 - 6805.662: 45.0157% ( 781) 00:21:13.654 6805.662 - 6856.074: 49.4120% ( 785) 00:21:13.654 6856.074 - 6906.486: 53.8194% ( 787) 00:21:13.654 6906.486 - 6956.898: 58.3669% ( 812) 00:21:13.654 6956.898 - 7007.311: 62.7072% ( 775) 00:21:13.654 7007.311 - 7057.723: 66.5043% ( 678) 00:21:13.654 7057.723 - 7108.135: 69.4500% ( 526) 00:21:13.654 7108.135 - 7158.548: 71.9814% ( 452) 00:21:13.654 7158.548 - 7208.960: 74.0479% ( 369) 00:21:13.654 7208.960 - 7259.372: 75.8905% ( 329) 00:21:13.654 7259.372 - 7309.785: 77.4754% ( 283) 00:21:13.654 7309.785 - 7360.197: 79.0155% ( 275) 00:21:13.654 7360.197 - 7410.609: 80.5444% ( 273) 00:21:13.654 7410.609 - 7461.022: 81.9836% ( 257) 00:21:13.654 7461.022 - 7511.434: 83.1653% ( 211) 00:21:13.654 7511.434 - 7561.846: 84.2518% ( 194) 00:21:13.654 7561.846 - 7612.258: 85.1478% ( 160) 00:21:13.654 7612.258 - 7662.671: 85.9375% ( 141) 00:21:13.654 7662.671 - 7713.083: 86.7552% ( 146) 00:21:13.654 7713.083 - 7763.495: 87.4440% ( 123) 00:21:13.654 7763.495 - 7813.908: 88.0208% ( 103) 00:21:13.654 7813.908 - 7864.320: 88.5249% ( 90) 00:21:13.654 7864.320 - 7914.732: 88.9785% ( 81) 00:21:13.654 7914.732 - 7965.145: 89.3257% ( 62) 00:21:13.654 7965.145 - 8015.557: 89.5609% ( 42) 00:21:13.654 8015.557 - 8065.969: 89.7961% ( 42) 00:21:13.654 8065.969 - 8116.382: 90.0202% ( 40) 00:21:13.654 8116.382 - 8166.794: 90.2274% ( 37) 00:21:13.654 8166.794 - 8217.206: 90.4458% ( 39) 00:21:13.654 8217.206 - 8267.618: 90.6810% ( 42) 00:21:13.654 8267.618 - 8318.031: 90.8602% ( 32) 00:21:13.654 8318.031 - 8368.443: 91.0506% ( 34) 00:21:13.654 8368.443 - 8418.855: 91.2074% ( 28) 00:21:13.654 8418.855 - 8469.268: 91.3698% ( 29) 00:21:13.654 8469.268 - 8519.680: 91.5323% ( 29) 00:21:13.654 8519.680 - 8570.092: 91.7339% ( 36) 00:21:13.654 8570.092 - 8620.505: 91.9691% ( 42) 00:21:13.654 8620.505 - 8670.917: 92.1875% ( 39) 00:21:13.654 8670.917 - 8721.329: 92.4227% ( 42) 00:21:13.654 
8721.329 - 8771.742: 92.6411% ( 39) 00:21:13.654 8771.742 - 8822.154: 92.8763% ( 42) 00:21:13.654 8822.154 - 8872.566: 93.1004% ( 40) 00:21:13.654 8872.566 - 8922.978: 93.3244% ( 40) 00:21:13.654 8922.978 - 8973.391: 93.5148% ( 34) 00:21:13.654 8973.391 - 9023.803: 93.6996% ( 33) 00:21:13.654 9023.803 - 9074.215: 93.8452% ( 26) 00:21:13.654 9074.215 - 9124.628: 93.9908% ( 26) 00:21:13.654 9124.628 - 9175.040: 94.0972% ( 19) 00:21:13.654 9175.040 - 9225.452: 94.2148% ( 21) 00:21:13.654 9225.452 - 9275.865: 94.3380% ( 22) 00:21:13.654 9275.865 - 9326.277: 94.5004% ( 29) 00:21:13.654 9326.277 - 9376.689: 94.6573% ( 28) 00:21:13.654 9376.689 - 9427.102: 94.7749% ( 21) 00:21:13.654 9427.102 - 9477.514: 94.8365% ( 11) 00:21:13.654 9477.514 - 9527.926: 94.8925% ( 10) 00:21:13.654 9527.926 - 9578.338: 94.9821% ( 16) 00:21:13.654 9578.338 - 9628.751: 95.0773% ( 17) 00:21:13.654 9628.751 - 9679.163: 95.1557% ( 14) 00:21:13.654 9679.163 - 9729.575: 95.2453% ( 16) 00:21:13.654 9729.575 - 9779.988: 95.3461% ( 18) 00:21:13.654 9779.988 - 9830.400: 95.4581% ( 20) 00:21:13.654 9830.400 - 9880.812: 95.5757% ( 21) 00:21:13.654 9880.812 - 9931.225: 95.6989% ( 22) 00:21:13.654 9931.225 - 9981.637: 95.8277% ( 23) 00:21:13.654 9981.637 - 10032.049: 95.9565% ( 23) 00:21:13.654 10032.049 - 10082.462: 96.1246% ( 30) 00:21:13.654 10082.462 - 10132.874: 96.2646% ( 25) 00:21:13.654 10132.874 - 10183.286: 96.4326% ( 30) 00:21:13.654 10183.286 - 10233.698: 96.5670% ( 24) 00:21:13.654 10233.698 - 10284.111: 96.7126% ( 26) 00:21:13.654 10284.111 - 10334.523: 96.8470% ( 24) 00:21:13.654 10334.523 - 10384.935: 96.9870% ( 25) 00:21:13.654 10384.935 - 10435.348: 97.1214% ( 24) 00:21:13.654 10435.348 - 10485.760: 97.2334% ( 20) 00:21:13.654 10485.760 - 10536.172: 97.3510% ( 21) 00:21:13.654 10536.172 - 10586.585: 97.4686% ( 21) 00:21:13.654 10586.585 - 10636.997: 97.5862% ( 21) 00:21:13.654 10636.997 - 10687.409: 97.6983% ( 20) 00:21:13.654 10687.409 - 10737.822: 97.8215% ( 22) 00:21:13.654 10737.822 - 10788.234: 97.9503% ( 23) 00:21:13.654 10788.234 - 10838.646: 98.0511% ( 18) 00:21:13.654 10838.646 - 10889.058: 98.1519% ( 18) 00:21:13.654 10889.058 - 10939.471: 98.2415% ( 16) 00:21:13.654 10939.471 - 10989.883: 98.3143% ( 13) 00:21:13.654 10989.883 - 11040.295: 98.3703% ( 10) 00:21:13.654 11040.295 - 11090.708: 98.4431% ( 13) 00:21:13.654 11090.708 - 11141.120: 98.5215% ( 14) 00:21:13.654 11141.120 - 11191.532: 98.5719% ( 9) 00:21:13.654 11191.532 - 11241.945: 98.6279% ( 10) 00:21:13.654 11241.945 - 11292.357: 98.6783% ( 9) 00:21:13.654 11292.357 - 11342.769: 98.7343% ( 10) 00:21:13.654 11342.769 - 11393.182: 98.7903% ( 10) 00:21:13.654 11393.182 - 11443.594: 98.8519% ( 11) 00:21:13.654 11443.594 - 11494.006: 98.9079% ( 10) 00:21:13.654 11494.006 - 11544.418: 98.9639% ( 10) 00:21:13.654 11544.418 - 11594.831: 99.0255% ( 11) 00:21:13.654 11594.831 - 11645.243: 99.0703% ( 8) 00:21:13.654 11645.243 - 11695.655: 99.1319% ( 11) 00:21:13.654 11695.655 - 11746.068: 99.1935% ( 11) 00:21:13.654 11746.068 - 11796.480: 99.2328% ( 7) 00:21:13.654 11796.480 - 11846.892: 99.2720% ( 7) 00:21:13.654 11846.892 - 11897.305: 99.2832% ( 2) 00:21:13.654 15728.640 - 15829.465: 99.2888% ( 1) 00:21:13.654 15829.465 - 15930.289: 99.3112% ( 4) 00:21:13.654 15930.289 - 16031.114: 99.3336% ( 4) 00:21:13.654 16031.114 - 16131.938: 99.3504% ( 3) 00:21:13.654 16131.938 - 16232.763: 99.3784% ( 5) 00:21:13.654 16232.763 - 16333.588: 99.4120% ( 6) 00:21:13.654 16333.588 - 16434.412: 99.4400% ( 5) 00:21:13.654 16434.412 - 16535.237: 99.4680% ( 5) 
00:21:13.654 16535.237 - 16636.062: 99.4960% ( 5) 00:21:13.654 16636.062 - 16736.886: 99.5240% ( 5) 00:21:13.654 16736.886 - 16837.711: 99.5464% ( 4) 00:21:13.654 16837.711 - 16938.535: 99.5744% ( 5) 00:21:13.654 16938.535 - 17039.360: 99.6080% ( 6) 00:21:13.654 17039.360 - 17140.185: 99.6360% ( 5) 00:21:13.654 17140.185 - 17241.009: 99.6416% ( 1) 00:21:13.654 22080.591 - 22181.415: 99.6528% ( 2) 00:21:13.654 22181.415 - 22282.240: 99.6752% ( 4) 00:21:13.654 22282.240 - 22383.065: 99.6976% ( 4) 00:21:13.654 22383.065 - 22483.889: 99.7144% ( 3) 00:21:13.654 22483.889 - 22584.714: 99.7368% ( 4) 00:21:13.654 22584.714 - 22685.538: 99.7592% ( 4) 00:21:13.654 22685.538 - 22786.363: 99.7872% ( 5) 00:21:13.654 22786.363 - 22887.188: 99.8096% ( 4) 00:21:13.654 22887.188 - 22988.012: 99.8320% ( 4) 00:21:13.654 22988.012 - 23088.837: 99.8544% ( 4) 00:21:13.654 23088.837 - 23189.662: 99.8768% ( 4) 00:21:13.654 23189.662 - 23290.486: 99.8992% ( 4) 00:21:13.654 23290.486 - 23391.311: 99.9272% ( 5) 00:21:13.654 23391.311 - 23492.135: 99.9496% ( 4) 00:21:13.654 23492.135 - 23592.960: 99.9720% ( 4) 00:21:13.654 23592.960 - 23693.785: 99.9944% ( 4) 00:21:13.654 23693.785 - 23794.609: 100.0000% ( 1) 00:21:13.654 00:21:13.654 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:21:13.654 ============================================================================== 00:21:13.654 Range in us Cumulative IO count 00:21:13.654 5898.240 - 5923.446: 0.0280% ( 5) 00:21:13.654 5923.446 - 5948.652: 0.0560% ( 5) 00:21:13.654 5948.652 - 5973.858: 0.0840% ( 5) 00:21:13.654 5973.858 - 5999.065: 0.2072% ( 22) 00:21:13.654 5999.065 - 6024.271: 0.2576% ( 9) 00:21:13.654 6024.271 - 6049.477: 0.3136% ( 10) 00:21:13.654 6049.477 - 6074.683: 0.3976% ( 15) 00:21:13.654 6074.683 - 6099.889: 0.4872% ( 16) 00:21:13.654 6099.889 - 6125.095: 0.7056% ( 39) 00:21:13.654 6125.095 - 6150.302: 1.2433% ( 96) 00:21:13.654 6150.302 - 6175.508: 1.9377% ( 124) 00:21:13.654 6175.508 - 6200.714: 2.8842% ( 169) 00:21:13.654 6200.714 - 6225.920: 4.0603% ( 210) 00:21:13.654 6225.920 - 6251.126: 5.4043% ( 240) 00:21:13.654 6251.126 - 6276.332: 6.7372% ( 238) 00:21:13.654 6276.332 - 6301.538: 8.2493% ( 270) 00:21:13.654 6301.538 - 6326.745: 9.8398% ( 284) 00:21:13.654 6326.745 - 6351.951: 11.5255% ( 301) 00:21:13.654 6351.951 - 6377.157: 13.2168% ( 302) 00:21:13.654 6377.157 - 6402.363: 14.9250% ( 305) 00:21:13.654 6402.363 - 6427.569: 16.6051% ( 300) 00:21:13.654 6427.569 - 6452.775: 18.2908% ( 301) 00:21:13.655 6452.775 - 6503.188: 21.8918% ( 643) 00:21:13.655 6503.188 - 6553.600: 25.3808% ( 623) 00:21:13.655 6553.600 - 6604.012: 29.1387% ( 671) 00:21:13.655 6604.012 - 6654.425: 32.8461% ( 662) 00:21:13.655 6654.425 - 6704.837: 36.7552% ( 698) 00:21:13.655 6704.837 - 6755.249: 40.7034% ( 705) 00:21:13.655 6755.249 - 6805.662: 44.6797% ( 710) 00:21:13.655 6805.662 - 6856.074: 48.8295% ( 741) 00:21:13.655 6856.074 - 6906.486: 52.9794% ( 741) 00:21:13.655 6906.486 - 6956.898: 57.1125% ( 738) 00:21:13.655 6956.898 - 7007.311: 61.3239% ( 752) 00:21:13.655 7007.311 - 7057.723: 65.3226% ( 714) 00:21:13.655 7057.723 - 7108.135: 68.9180% ( 642) 00:21:13.655 7108.135 - 7158.548: 71.6846% ( 494) 00:21:13.655 7158.548 - 7208.960: 73.8463% ( 386) 00:21:13.655 7208.960 - 7259.372: 75.5096% ( 297) 00:21:13.655 7259.372 - 7309.785: 76.8873% ( 246) 00:21:13.655 7309.785 - 7360.197: 78.2370% ( 241) 00:21:13.655 7360.197 - 7410.609: 79.5755% ( 239) 00:21:13.655 7410.609 - 7461.022: 80.8188% ( 222) 00:21:13.655 7461.022 - 7511.434: 82.0228% ( 215) 
00:21:13.655 7511.434 - 7561.846: 83.1821% ( 207) 00:21:13.655 7561.846 - 7612.258: 84.2182% ( 185) 00:21:13.655 7612.258 - 7662.671: 85.0918% ( 156) 00:21:13.655 7662.671 - 7713.083: 85.8591% ( 137) 00:21:13.655 7713.083 - 7763.495: 86.6375% ( 139) 00:21:13.655 7763.495 - 7813.908: 87.2928% ( 117) 00:21:13.655 7813.908 - 7864.320: 87.8192% ( 94) 00:21:13.655 7864.320 - 7914.732: 88.3009% ( 86) 00:21:13.655 7914.732 - 7965.145: 88.7153% ( 74) 00:21:13.655 7965.145 - 8015.557: 89.0737% ( 64) 00:21:13.655 8015.557 - 8065.969: 89.4825% ( 73) 00:21:13.655 8065.969 - 8116.382: 89.8129% ( 59) 00:21:13.655 8116.382 - 8166.794: 90.1546% ( 61) 00:21:13.655 8166.794 - 8217.206: 90.5074% ( 63) 00:21:13.655 8217.206 - 8267.618: 90.7986% ( 52) 00:21:13.655 8267.618 - 8318.031: 91.0562% ( 46) 00:21:13.655 8318.031 - 8368.443: 91.2802% ( 40) 00:21:13.655 8368.443 - 8418.855: 91.4819% ( 36) 00:21:13.655 8418.855 - 8469.268: 91.7003% ( 39) 00:21:13.655 8469.268 - 8519.680: 91.8795% ( 32) 00:21:13.655 8519.680 - 8570.092: 92.0587% ( 32) 00:21:13.655 8570.092 - 8620.505: 92.2547% ( 35) 00:21:13.655 8620.505 - 8670.917: 92.4395% ( 33) 00:21:13.655 8670.917 - 8721.329: 92.6355% ( 35) 00:21:13.655 8721.329 - 8771.742: 92.8091% ( 31) 00:21:13.655 8771.742 - 8822.154: 92.9828% ( 31) 00:21:13.655 8822.154 - 8872.566: 93.1676% ( 33) 00:21:13.655 8872.566 - 8922.978: 93.3356% ( 30) 00:21:13.655 8922.978 - 8973.391: 93.5204% ( 33) 00:21:13.655 8973.391 - 9023.803: 93.6996% ( 32) 00:21:13.655 9023.803 - 9074.215: 93.8732% ( 31) 00:21:13.655 9074.215 - 9124.628: 94.0356% ( 29) 00:21:13.655 9124.628 - 9175.040: 94.1476% ( 20) 00:21:13.655 9175.040 - 9225.452: 94.2876% ( 25) 00:21:13.655 9225.452 - 9275.865: 94.4444% ( 28) 00:21:13.655 9275.865 - 9326.277: 94.6069% ( 29) 00:21:13.655 9326.277 - 9376.689: 94.7861% ( 32) 00:21:13.655 9376.689 - 9427.102: 94.9317% ( 26) 00:21:13.655 9427.102 - 9477.514: 95.0493% ( 21) 00:21:13.655 9477.514 - 9527.926: 95.1893% ( 25) 00:21:13.655 9527.926 - 9578.338: 95.3069% ( 21) 00:21:13.655 9578.338 - 9628.751: 95.4357% ( 23) 00:21:13.655 9628.751 - 9679.163: 95.5589% ( 22) 00:21:13.655 9679.163 - 9729.575: 95.6933% ( 24) 00:21:13.655 9729.575 - 9779.988: 95.8109% ( 21) 00:21:13.655 9779.988 - 9830.400: 95.9173% ( 19) 00:21:13.655 9830.400 - 9880.812: 96.0349% ( 21) 00:21:13.655 9880.812 - 9931.225: 96.1806% ( 26) 00:21:13.655 9931.225 - 9981.637: 96.3318% ( 27) 00:21:13.655 9981.637 - 10032.049: 96.4774% ( 26) 00:21:13.655 10032.049 - 10082.462: 96.6118% ( 24) 00:21:13.655 10082.462 - 10132.874: 96.7406% ( 23) 00:21:13.655 10132.874 - 10183.286: 96.8918% ( 27) 00:21:13.655 10183.286 - 10233.698: 97.0206% ( 23) 00:21:13.655 10233.698 - 10284.111: 97.1606% ( 25) 00:21:13.655 10284.111 - 10334.523: 97.2670% ( 19) 00:21:13.655 10334.523 - 10384.935: 97.3790% ( 20) 00:21:13.655 10384.935 - 10435.348: 97.4798% ( 18) 00:21:13.655 10435.348 - 10485.760: 97.5582% ( 14) 00:21:13.655 10485.760 - 10536.172: 97.6422% ( 15) 00:21:13.655 10536.172 - 10586.585: 97.7487% ( 19) 00:21:13.655 10586.585 - 10636.997: 97.8215% ( 13) 00:21:13.655 10636.997 - 10687.409: 97.8719% ( 9) 00:21:13.655 10687.409 - 10737.822: 97.9335% ( 11) 00:21:13.655 10737.822 - 10788.234: 97.9839% ( 9) 00:21:13.655 10788.234 - 10838.646: 98.0511% ( 12) 00:21:13.655 10838.646 - 10889.058: 98.1071% ( 10) 00:21:13.655 10889.058 - 10939.471: 98.1799% ( 13) 00:21:13.655 10939.471 - 10989.883: 98.2471% ( 12) 00:21:13.655 10989.883 - 11040.295: 98.3143% ( 12) 00:21:13.655 11040.295 - 11090.708: 98.3759% ( 11) 00:21:13.655 11090.708 - 
00:21:13.655 [bucket table concludes: 11141.120us through 23189.662us, cumulative 98.4487% rising to 100.0000%]
00:21:13.655 
00:21:13.655 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0:
00:21:13.655 ==============================================================================
00:21:13.655        Range in us     Cumulative    IO count
00:21:13.656 [bucket table: 5873.034us through 21878.942us, cumulative 0.0224% rising to 100.0000%]
00:21:13.656 
00:21:13.656 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0:
00:21:13.656 ==============================================================================
00:21:13.656        Range in us     Cumulative    IO count
00:21:13.657 [bucket table: 5898.240us through 20366.572us, cumulative 0.0112% rising to 100.0000%]
00:21:13.657 
00:21:13.657 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0:
00:21:13.657 ==============================================================================
00:21:13.657        Range in us     Cumulative    IO count
00:21:13.658 [bucket table: 5923.446us through 18955.028us, cumulative 0.0224% rising to 100.0000%]
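Each bucket row elided above reads `lower_us - upper_us: cumulative% ( count )`. For example, the row `6856.074 - 6906.486: 52.3802% ( 741)` from the NSID 1 table records 741 I/Os whose completion latency fell between 6856.074us and 6906.486us, with 52.3802% of all I/Os having completed within 6906.486us.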
00:21:13.658 
00:21:13.658 20:19:08 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0
00:21:15.030 Initializing NVMe Controllers
00:21:15.030 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:21:15.030 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:21:15.030 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:21:15.030 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:21:15.030 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0
00:21:15.030 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0
00:21:15.030 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0
00:21:15.030 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0
00:21:15.030 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0
00:21:15.030 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0
00:21:15.030 Initialization complete. Launching workers.
00:21:15.030 ========================================================
00:21:15.030                                                                             Latency(us)
00:21:15.030 Device Information                         :       IOPS      MiB/s    Average        min        max
00:21:15.030 PCIE (0000:00:10.0) NSID 1 from core  0:    14775.58     173.15    8686.23    6182.54   36641.16
00:21:15.030 PCIE (0000:00:11.0) NSID 1 from core  0:    14775.58     173.15    8675.86    6477.45   35444.10
00:21:15.030 PCIE (0000:00:13.0) NSID 1 from core  0:    14775.58     173.15    8663.23    6185.15   33860.73
00:21:15.030 PCIE (0000:00:12.0) NSID 1 from core  0:    14775.58     173.15    8649.86    6305.49   31432.29
00:21:15.030 PCIE (0000:00:12.0) NSID 2 from core  0:    14775.58     173.15    8637.70    6284.38   29083.30
00:21:15.030 PCIE (0000:00:12.0) NSID 3 from core  0:    14775.58     173.15    8626.21    6413.21   26598.76
00:21:15.030 ========================================================
00:21:15.030 Total                                  :    88653.47    1038.91    8656.51    6182.54   36641.16
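A quick sanity check on the table above: by Little's law, IOPS x mean latency equals the number of outstanding I/Os, which should land near the configured queue depth. A back-of-envelope check with the numbers from the table:

    $ awk 'BEGIN { printf "%.1f\n", 14775.58 * 8686.23e-6 }'   # one namespace: IOPS x avg latency
    128.3
    $ awk 'BEGIN { printf "%.1f\n", 88653.47 * 8656.51e-6 }'   # all six namespaces: ~6 x 128 = 768
    767.4

Both come back to the queue depth of 128 per namespace, so the reported averages and IOPS are mutually consistent.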
00:21:15.030 
00:21:15.030 Summary latency data from core 0, all six namespaces (values in us; columns are the PCIE devices listed above):
00:21:15.030 =================================================================================
00:21:15.030  Percentile : 10.0 NSID1  11.0 NSID1  13.0 NSID1  12.0 NSID1  12.0 NSID2  12.0 NSID3
00:21:15.030   1.00000% :   6704.837    6704.837    6755.249    6755.249    6755.249    6755.249
00:21:15.030  10.00000% :   7108.135    7158.548    7158.548    7158.548    7158.548    7158.548
00:21:15.030  25.00000% :   7511.434    7511.434    7511.434    7511.434    7511.434    7511.434
00:21:15.030  50.00000% :   8469.268    8519.680    8469.268    8519.680    8519.680    8469.268
00:21:15.030  75.00000% :   9175.040    9124.628    9124.628    9124.628    9175.040    9124.628
00:21:15.030  90.00000% :  10032.049    9981.637    9981.637   10082.462    9981.637   10032.049
00:21:15.030  95.00000% :  11292.357   11292.357   11090.708   10939.471   10788.234   11040.295
00:21:15.030  98.00000% :  12502.252   12401.428   13006.375   12855.138   12703.902   12603.077
00:21:15.030  99.00000% :  13913.797   13913.797   13913.797   13913.797   14216.271   13913.797
00:21:15.030  99.50000% :  28230.892   27222.646   25811.102   25407.803   23996.258   22282.240
00:21:15.030  99.90000% :  36296.862   35086.966   33473.772   30247.385   29037.489   26617.698
00:21:15.030  99.99000% :  36700.160   35490.265   33877.071   31457.280   29037.489   26617.698
00:21:15.030  99.99900% :  36700.160   35490.265   33877.071   31457.280   29239.138   26617.698
00:21:15.030  (the 99.99990% and 99.99999% rows repeat each namespace's 99.99900% value)
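For reference, the report above comes from the spdk_nvme_perf invocation logged at 20:19:08. A sketch of the same run follows; the flag glosses are my reading of the tool's usage text, not taken from this log:

    # -q 128   : 128 outstanding I/Os (queue depth) per namespace
    # -w write : sequential-write workload
    # -o 12288 : 12 KiB I/O size (12288 bytes)
    # -t 1     : run for 1 second
    # -LL      : software latency tracking; the doubled L adds the detailed histograms
    # -i 0     : shared-memory group ID 0
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0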
00:21:15.030 
00:21:15.030 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0:
00:21:15.030 ==============================================================================
00:21:15.030        Range in us     Cumulative    IO count
00:21:15.031 [bucket table: 6175.508us through 36700.160us, cumulative 0.0068% rising to 100.0000%]
00:21:15.031 
00:21:15.031 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0:
00:21:15.031 ==============================================================================
00:21:15.031        Range in us     Cumulative    IO count
00:21:15.031 [bucket table: 6452.775us through 35490.265us, cumulative 0.0203% rising to 100.0000%]
00:21:15.031 
00:21:15.032 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0:
00:21:15.032 ==============================================================================
00:21:15.032        Range in us     Cumulative    IO count
00:21:15.032 [bucket table: 6175.508us through 33877.071us, cumulative 0.0068% rising to 100.0000%]
00:21:15.032 
00:21:15.032 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0:
00:21:15.032 ==============================================================================
00:21:15.032        Range in us     Cumulative    IO count
00:21:15.033 [bucket table: 6301.538us through 31457.280us, cumulative 0.0068% rising to 100.0000%]
00:21:15.033 
00:21:15.033 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0:
00:21:15.033 ==============================================================================
00:21:15.033        Range in us     Cumulative    IO count
00:21:15.033 [bucket table: 6276.332us onward ...]
291) 00:21:15.033 7208.960 - 7259.372: 16.0106% ( 302) 00:21:15.033 7259.372 - 7309.785: 17.8369% ( 270) 00:21:15.033 7309.785 - 7360.197: 19.7511% ( 283) 00:21:15.033 7360.197 - 7410.609: 22.0373% ( 338) 00:21:15.033 7410.609 - 7461.022: 24.5942% ( 378) 00:21:15.033 7461.022 - 7511.434: 26.5354% ( 287) 00:21:15.033 7511.434 - 7561.846: 28.6391% ( 311) 00:21:15.033 7561.846 - 7612.258: 30.3233% ( 249) 00:21:15.033 7612.258 - 7662.671: 31.7911% ( 217) 00:21:15.033 7662.671 - 7713.083: 33.1034% ( 194) 00:21:15.033 7713.083 - 7763.495: 34.2803% ( 174) 00:21:15.033 7763.495 - 7813.908: 35.3220% ( 154) 00:21:15.033 7813.908 - 7864.320: 36.6748% ( 200) 00:21:15.033 7864.320 - 7914.732: 37.7773% ( 163) 00:21:15.033 7914.732 - 7965.145: 39.1978% ( 210) 00:21:15.033 7965.145 - 8015.557: 40.5777% ( 204) 00:21:15.033 8015.557 - 8065.969: 41.9981% ( 210) 00:21:15.033 8065.969 - 8116.382: 43.1548% ( 171) 00:21:15.033 8116.382 - 8166.794: 44.7308% ( 233) 00:21:15.033 8166.794 - 8217.206: 45.7522% ( 151) 00:21:15.033 8217.206 - 8267.618: 46.5706% ( 121) 00:21:15.033 8267.618 - 8318.031: 47.1861% ( 91) 00:21:15.033 8318.031 - 8368.443: 47.8626% ( 100) 00:21:15.033 8368.443 - 8418.855: 48.6675% ( 119) 00:21:15.033 8418.855 - 8469.268: 49.7565% ( 161) 00:21:15.033 8469.268 - 8519.680: 51.0958% ( 198) 00:21:15.033 8519.680 - 8570.092: 52.1307% ( 153) 00:21:15.033 8570.092 - 8620.505: 53.9299% ( 266) 00:21:15.033 8620.505 - 8670.917: 55.9186% ( 294) 00:21:15.033 8670.917 - 8721.329: 57.9478% ( 300) 00:21:15.033 8721.329 - 8771.742: 60.2881% ( 346) 00:21:15.033 8771.742 - 8822.154: 63.5823% ( 487) 00:21:15.033 8822.154 - 8872.566: 65.7535% ( 321) 00:21:15.033 8872.566 - 8922.978: 68.1006% ( 347) 00:21:15.033 8922.978 - 8973.391: 70.5628% ( 364) 00:21:15.033 8973.391 - 9023.803: 72.2335% ( 247) 00:21:15.033 9023.803 - 9074.215: 73.5457% ( 194) 00:21:15.033 9074.215 - 9124.628: 74.9865% ( 213) 00:21:15.033 9124.628 - 9175.040: 76.3731% ( 205) 00:21:15.033 9175.040 - 9225.452: 78.0303% ( 245) 00:21:15.033 9225.452 - 9275.865: 79.0584% ( 152) 00:21:15.033 9275.865 - 9326.277: 80.0595% ( 148) 00:21:15.033 9326.277 - 9376.689: 81.0741% ( 150) 00:21:15.033 9376.689 - 9427.102: 82.1970% ( 166) 00:21:15.033 9427.102 - 9477.514: 82.9275% ( 108) 00:21:15.033 9477.514 - 9527.926: 83.5971% ( 99) 00:21:15.033 9527.926 - 9578.338: 84.2803% ( 101) 00:21:15.033 9578.338 - 9628.751: 85.1393% ( 127) 00:21:15.033 9628.751 - 9679.163: 86.0457% ( 134) 00:21:15.033 9679.163 - 9729.575: 86.8371% ( 117) 00:21:15.033 9729.575 - 9779.988: 87.5203% ( 101) 00:21:15.033 9779.988 - 9830.400: 88.1494% ( 93) 00:21:15.033 9830.400 - 9880.812: 88.7987% ( 96) 00:21:15.033 9880.812 - 9931.225: 89.4954% ( 103) 00:21:15.033 9931.225 - 9981.637: 90.1448% ( 96) 00:21:15.033 9981.637 - 10032.049: 90.4897% ( 51) 00:21:15.033 10032.049 - 10082.462: 90.7535% ( 39) 00:21:15.033 10082.462 - 10132.874: 91.3961% ( 95) 00:21:15.033 10132.874 - 10183.286: 91.6396% ( 36) 00:21:15.033 10183.286 - 10233.698: 91.8899% ( 37) 00:21:15.033 10233.698 - 10284.111: 92.1402% ( 37) 00:21:15.033 10284.111 - 10334.523: 92.3972% ( 38) 00:21:15.033 10334.523 - 10384.935: 92.7151% ( 47) 00:21:15.033 10384.935 - 10435.348: 93.1480% ( 64) 00:21:15.033 10435.348 - 10485.760: 93.5335% ( 57) 00:21:15.033 10485.760 - 10536.172: 93.8515% ( 47) 00:21:15.033 10536.172 - 10586.585: 94.1964% ( 51) 00:21:15.033 10586.585 - 10636.997: 94.4602% ( 39) 00:21:15.033 10636.997 - 10687.409: 94.5887% ( 19) 00:21:15.033 10687.409 - 10737.822: 94.8458% ( 38) 00:21:15.033 10737.822 - 10788.234: 
95.0149% ( 25) 00:21:15.033 10788.234 - 10838.646: 95.0893% ( 11) 00:21:15.033 10838.646 - 10889.058: 95.1705% ( 12) 00:21:15.033 10889.058 - 10939.471: 95.2381% ( 10) 00:21:15.033 10939.471 - 10989.883: 95.3125% ( 11) 00:21:15.033 10989.883 - 11040.295: 95.4613% ( 22) 00:21:15.033 11040.295 - 11090.708: 95.5425% ( 12) 00:21:15.033 11090.708 - 11141.120: 95.5628% ( 3) 00:21:15.033 11141.120 - 11191.532: 95.5831% ( 3) 00:21:15.033 11191.532 - 11241.945: 95.6034% ( 3) 00:21:15.033 11241.945 - 11292.357: 95.6372% ( 5) 00:21:15.033 11292.357 - 11342.769: 95.6575% ( 3) 00:21:15.033 11342.769 - 11393.182: 95.6710% ( 2) 00:21:15.033 11393.182 - 11443.594: 95.7454% ( 11) 00:21:15.033 11443.594 - 11494.006: 95.8604% ( 17) 00:21:15.033 11494.006 - 11544.418: 95.9077% ( 7) 00:21:15.033 11544.418 - 11594.831: 95.9483% ( 6) 00:21:15.033 11594.831 - 11645.243: 96.0092% ( 9) 00:21:15.033 11645.243 - 11695.655: 96.0768% ( 10) 00:21:15.033 11695.655 - 11746.068: 96.1310% ( 8) 00:21:15.033 11746.068 - 11796.480: 96.1918% ( 9) 00:21:15.033 11796.480 - 11846.892: 96.2865% ( 14) 00:21:15.033 11846.892 - 11897.305: 96.3812% ( 14) 00:21:15.033 11897.305 - 11947.717: 96.4962% ( 17) 00:21:15.033 11947.717 - 11998.129: 96.6112% ( 17) 00:21:15.033 11998.129 - 12048.542: 96.7803% ( 25) 00:21:15.033 12048.542 - 12098.954: 96.9494% ( 25) 00:21:15.033 12098.954 - 12149.366: 97.1117% ( 24) 00:21:15.033 12149.366 - 12199.778: 97.3147% ( 30) 00:21:15.033 12199.778 - 12250.191: 97.4499% ( 20) 00:21:15.033 12250.191 - 12300.603: 97.5852% ( 20) 00:21:15.033 12300.603 - 12351.015: 97.6799% ( 14) 00:21:15.033 12351.015 - 12401.428: 97.7408% ( 9) 00:21:15.033 12401.428 - 12451.840: 97.7814% ( 6) 00:21:15.033 12451.840 - 12502.252: 97.8287% ( 7) 00:21:15.033 12502.252 - 12552.665: 97.8693% ( 6) 00:21:15.033 12552.665 - 12603.077: 97.9167% ( 7) 00:21:15.033 12603.077 - 12653.489: 97.9708% ( 8) 00:21:15.033 12653.489 - 12703.902: 98.0181% ( 7) 00:21:15.033 12703.902 - 12754.314: 98.0790% ( 9) 00:21:15.033 12754.314 - 12804.726: 98.1399% ( 9) 00:21:15.033 12804.726 - 12855.138: 98.1940% ( 8) 00:21:15.033 12855.138 - 12905.551: 98.2278% ( 5) 00:21:15.033 12905.551 - 13006.375: 98.3699% ( 21) 00:21:15.033 13006.375 - 13107.200: 98.3902% ( 3) 00:21:15.033 13107.200 - 13208.025: 98.4307% ( 6) 00:21:15.033 13208.025 - 13308.849: 98.4713% ( 6) 00:21:15.033 13308.849 - 13409.674: 98.5119% ( 6) 00:21:15.033 13409.674 - 13510.498: 98.5931% ( 12) 00:21:15.033 13510.498 - 13611.323: 98.6742% ( 12) 00:21:15.033 13611.323 - 13712.148: 98.7419% ( 10) 00:21:15.033 13712.148 - 13812.972: 98.8231% ( 12) 00:21:15.033 13812.972 - 13913.797: 98.8975% ( 11) 00:21:15.033 13913.797 - 14014.622: 98.9448% ( 7) 00:21:15.033 14014.622 - 14115.446: 98.9922% ( 7) 00:21:15.033 14115.446 - 14216.271: 99.0327% ( 6) 00:21:15.033 14216.271 - 14317.095: 99.0801% ( 7) 00:21:15.033 14317.095 - 14417.920: 99.1139% ( 5) 00:21:15.033 14417.920 - 14518.745: 99.1342% ( 3) 00:21:15.033 21778.117 - 21878.942: 99.1410% ( 1) 00:21:15.033 22181.415 - 22282.240: 99.1477% ( 1) 00:21:15.033 22584.714 - 22685.538: 99.1613% ( 2) 00:21:15.033 22685.538 - 22786.363: 99.1883% ( 4) 00:21:15.033 22786.363 - 22887.188: 99.1951% ( 1) 00:21:15.033 22887.188 - 22988.012: 99.2424% ( 7) 00:21:15.033 22988.012 - 23088.837: 99.3439% ( 15) 00:21:15.033 23088.837 - 23189.662: 99.3574% ( 2) 00:21:15.033 23189.662 - 23290.486: 99.3777% ( 3) 00:21:15.033 23290.486 - 23391.311: 99.3912% ( 2) 00:21:15.033 23391.311 - 23492.135: 99.4115% ( 3) 00:21:15.033 23492.135 - 23592.960: 99.4318% ( 3) 00:21:15.033 
23592.960 - 23693.785: 99.4521% ( 3) 00:21:15.033 23693.785 - 23794.609: 99.4724% ( 3) 00:21:15.033 23794.609 - 23895.434: 99.4927% ( 3) 00:21:15.033 23895.434 - 23996.258: 99.5062% ( 2) 00:21:15.033 23996.258 - 24097.083: 99.5265% ( 3) 00:21:15.033 24097.083 - 24197.908: 99.5400% ( 2) 00:21:15.033 24197.908 - 24298.732: 99.5603% ( 3) 00:21:15.033 24298.732 - 24399.557: 99.5671% ( 1) 00:21:15.033 27222.646 - 27424.295: 99.6618% ( 14) 00:21:15.033 27424.295 - 27625.945: 99.7091% ( 7) 00:21:15.033 27625.945 - 27827.594: 99.7430% ( 5) 00:21:15.033 27827.594 - 28029.243: 99.7971% ( 8) 00:21:15.033 28029.243 - 28230.892: 99.8580% ( 9) 00:21:15.033 28230.892 - 28432.542: 99.8715% ( 2) 00:21:15.033 28432.542 - 28634.191: 99.8782% ( 1) 00:21:15.033 28634.191 - 28835.840: 99.8918% ( 2) 00:21:15.033 28835.840 - 29037.489: 99.9932% ( 15) 00:21:15.033 29037.489 - 29239.138: 100.0000% ( 1) 00:21:15.033 00:21:15.033 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:21:15.033 ============================================================================== 00:21:15.033 Range in us Cumulative IO count 00:21:15.033 6402.363 - 6427.569: 0.0068% ( 1) 00:21:15.033 6427.569 - 6452.775: 0.0135% ( 1) 00:21:15.033 6452.775 - 6503.188: 0.0609% ( 7) 00:21:15.033 6503.188 - 6553.600: 0.1623% ( 15) 00:21:15.033 6553.600 - 6604.012: 0.3044% ( 21) 00:21:15.033 6604.012 - 6654.425: 0.6020% ( 44) 00:21:15.033 6654.425 - 6704.837: 0.8387% ( 35) 00:21:15.033 6704.837 - 6755.249: 1.1972% ( 53) 00:21:15.033 6755.249 - 6805.662: 1.6504% ( 67) 00:21:15.033 6805.662 - 6856.074: 2.3268% ( 100) 00:21:15.033 6856.074 - 6906.486: 3.2738% ( 140) 00:21:15.033 6906.486 - 6956.898: 4.4169% ( 169) 00:21:15.033 6956.898 - 7007.311: 5.6615% ( 184) 00:21:15.033 7007.311 - 7057.723: 8.0290% ( 350) 00:21:15.033 7057.723 - 7108.135: 9.8688% ( 272) 00:21:15.033 7108.135 - 7158.548: 11.5192% ( 244) 00:21:15.033 7158.548 - 7208.960: 13.8393% ( 343) 00:21:15.033 7208.960 - 7259.372: 15.7806% ( 287) 00:21:15.033 7259.372 - 7309.785: 17.6948% ( 283) 00:21:15.033 7309.785 - 7360.197: 19.7511% ( 304) 00:21:15.033 7360.197 - 7410.609: 21.8547% ( 311) 00:21:15.033 7410.609 - 7461.022: 24.7430% ( 427) 00:21:15.033 7461.022 - 7511.434: 27.1848% ( 361) 00:21:15.033 7511.434 - 7561.846: 28.4903% ( 193) 00:21:15.033 7561.846 - 7612.258: 29.9851% ( 221) 00:21:15.033 7612.258 - 7662.671: 31.4259% ( 213) 00:21:15.033 7662.671 - 7713.083: 32.6772% ( 185) 00:21:15.033 7713.083 - 7763.495: 33.9489% ( 188) 00:21:15.033 7763.495 - 7813.908: 35.2205% ( 188) 00:21:15.033 7813.908 - 7864.320: 36.7695% ( 229) 00:21:15.033 7864.320 - 7914.732: 37.8720% ( 163) 00:21:15.033 7914.732 - 7965.145: 38.9678% ( 162) 00:21:15.033 7965.145 - 8015.557: 40.2665% ( 192) 00:21:15.033 8015.557 - 8065.969: 41.6734% ( 208) 00:21:15.033 8065.969 - 8116.382: 42.9992% ( 196) 00:21:15.033 8116.382 - 8166.794: 44.3926% ( 206) 00:21:15.033 8166.794 - 8217.206: 45.3463% ( 141) 00:21:15.033 8217.206 - 8267.618: 46.3609% ( 150) 00:21:15.033 8267.618 - 8318.031: 47.0915% ( 108) 00:21:15.033 8318.031 - 8368.443: 47.8896% ( 118) 00:21:15.033 8368.443 - 8418.855: 48.9110% ( 151) 00:21:15.033 8418.855 - 8469.268: 50.3991% ( 220) 00:21:15.033 8469.268 - 8519.680: 51.8128% ( 209) 00:21:15.033 8519.680 - 8570.092: 52.9491% ( 168) 00:21:15.033 8570.092 - 8620.505: 54.7010% ( 259) 00:21:15.033 8620.505 - 8670.917: 56.7235% ( 299) 00:21:15.033 8670.917 - 8721.329: 58.7189% ( 295) 00:21:15.033 8721.329 - 8771.742: 60.5587% ( 272) 00:21:15.033 8771.742 - 8822.154: 62.7638% ( 326) 00:21:15.033 
8822.154 - 8872.566: 65.3747% ( 386) 00:21:15.033 8872.566 - 8922.978: 67.6745% ( 340) 00:21:15.033 8922.978 - 8973.391: 70.0758% ( 355) 00:21:15.033 8973.391 - 9023.803: 71.9223% ( 273) 00:21:15.033 9023.803 - 9074.215: 73.6066% ( 249) 00:21:15.033 9074.215 - 9124.628: 75.3991% ( 265) 00:21:15.033 9124.628 - 9175.040: 76.6843% ( 190) 00:21:15.033 9175.040 - 9225.452: 77.9085% ( 181) 00:21:15.033 9225.452 - 9275.865: 78.8893% ( 145) 00:21:15.033 9275.865 - 9326.277: 80.2219% ( 197) 00:21:15.033 9326.277 - 9376.689: 80.9321% ( 105) 00:21:15.033 9376.689 - 9427.102: 81.6423% ( 105) 00:21:15.033 9427.102 - 9477.514: 82.3458% ( 104) 00:21:15.033 9477.514 - 9527.926: 83.2048% ( 127) 00:21:15.033 9527.926 - 9578.338: 84.0097% ( 119) 00:21:15.033 9578.338 - 9628.751: 84.7200% ( 105) 00:21:15.033 9628.751 - 9679.163: 85.7413% ( 151) 00:21:15.033 9679.163 - 9729.575: 86.3907% ( 96) 00:21:15.034 9729.575 - 9779.988: 86.9656% ( 85) 00:21:15.034 9779.988 - 9830.400: 87.7706% ( 119) 00:21:15.034 9830.400 - 9880.812: 88.4267% ( 97) 00:21:15.034 9880.812 - 9931.225: 89.2045% ( 115) 00:21:15.034 9931.225 - 9981.637: 89.8810% ( 100) 00:21:15.034 9981.637 - 10032.049: 90.4559% ( 85) 00:21:15.034 10032.049 - 10082.462: 91.0173% ( 83) 00:21:15.034 10082.462 - 10132.874: 91.3014% ( 42) 00:21:15.034 10132.874 - 10183.286: 91.9305% ( 93) 00:21:15.034 10183.286 - 10233.698: 92.2890% ( 53) 00:21:15.034 10233.698 - 10284.111: 92.5460% ( 38) 00:21:15.034 10284.111 - 10334.523: 92.9045% ( 53) 00:21:15.034 10334.523 - 10384.935: 93.2089% ( 45) 00:21:15.034 10384.935 - 10435.348: 93.4253% ( 32) 00:21:15.034 10435.348 - 10485.760: 93.7094% ( 42) 00:21:15.034 10485.760 - 10536.172: 93.8379% ( 19) 00:21:15.034 10536.172 - 10586.585: 94.1694% ( 49) 00:21:15.034 10586.585 - 10636.997: 94.2235% ( 8) 00:21:15.034 10636.997 - 10687.409: 94.2708% ( 7) 00:21:15.034 10687.409 - 10737.822: 94.3520% ( 12) 00:21:15.034 10737.822 - 10788.234: 94.4399% ( 13) 00:21:15.034 10788.234 - 10838.646: 94.5143% ( 11) 00:21:15.034 10838.646 - 10889.058: 94.5955% ( 12) 00:21:15.034 10889.058 - 10939.471: 94.6429% ( 7) 00:21:15.034 10939.471 - 10989.883: 94.8864% ( 36) 00:21:15.034 10989.883 - 11040.295: 95.0419% ( 23) 00:21:15.034 11040.295 - 11090.708: 95.1163% ( 11) 00:21:15.034 11090.708 - 11141.120: 95.1772% ( 9) 00:21:15.034 11141.120 - 11191.532: 95.3531% ( 26) 00:21:15.034 11191.532 - 11241.945: 95.4748% ( 18) 00:21:15.034 11241.945 - 11292.357: 95.5357% ( 9) 00:21:15.034 11292.357 - 11342.769: 95.5898% ( 8) 00:21:15.034 11342.769 - 11393.182: 95.6236% ( 5) 00:21:15.034 11393.182 - 11443.594: 95.6642% ( 6) 00:21:15.034 11443.594 - 11494.006: 95.7589% ( 14) 00:21:15.034 11494.006 - 11544.418: 95.8672% ( 16) 00:21:15.034 11544.418 - 11594.831: 95.9280% ( 9) 00:21:15.034 11594.831 - 11645.243: 95.9821% ( 8) 00:21:15.034 11645.243 - 11695.655: 96.0565% ( 11) 00:21:15.034 11695.655 - 11746.068: 96.1107% ( 8) 00:21:15.034 11746.068 - 11796.480: 96.1918% ( 12) 00:21:15.034 11796.480 - 11846.892: 96.3880% ( 29) 00:21:15.034 11846.892 - 11897.305: 96.5368% ( 22) 00:21:15.034 11897.305 - 11947.717: 96.6450% ( 16) 00:21:15.034 11947.717 - 11998.129: 96.8277% ( 27) 00:21:15.034 11998.129 - 12048.542: 96.9900% ( 24) 00:21:15.034 12048.542 - 12098.954: 97.1253% ( 20) 00:21:15.034 12098.954 - 12149.366: 97.2403% ( 17) 00:21:15.034 12149.366 - 12199.778: 97.3755% ( 20) 00:21:15.034 12199.778 - 12250.191: 97.4905% ( 17) 00:21:15.034 12250.191 - 12300.603: 97.6055% ( 17) 00:21:15.034 12300.603 - 12351.015: 97.7002% ( 14) 00:21:15.034 12351.015 - 12401.428: 
97.7611% ( 9) 00:21:15.034 12401.428 - 12451.840: 97.8355% ( 11) 00:21:15.034 12451.840 - 12502.252: 97.9099% ( 11) 00:21:15.034 12502.252 - 12552.665: 97.9708% ( 9) 00:21:15.034 12552.665 - 12603.077: 98.0519% ( 12) 00:21:15.034 12603.077 - 12653.489: 98.1061% ( 8) 00:21:15.034 12653.489 - 12703.902: 98.1534% ( 7) 00:21:15.034 12703.902 - 12754.314: 98.1669% ( 2) 00:21:15.034 12754.314 - 12804.726: 98.1872% ( 3) 00:21:15.034 12804.726 - 12855.138: 98.2549% ( 10) 00:21:15.034 12855.138 - 12905.551: 98.3225% ( 10) 00:21:15.034 12905.551 - 13006.375: 98.3969% ( 11) 00:21:15.034 13006.375 - 13107.200: 98.4240% ( 4) 00:21:15.034 13107.200 - 13208.025: 98.4713% ( 7) 00:21:15.034 13208.025 - 13308.849: 98.5728% ( 15) 00:21:15.034 13308.849 - 13409.674: 98.6404% ( 10) 00:21:15.034 13409.674 - 13510.498: 98.7216% ( 12) 00:21:15.034 13510.498 - 13611.323: 98.8028% ( 12) 00:21:15.034 13611.323 - 13712.148: 98.8839% ( 12) 00:21:15.034 13712.148 - 13812.972: 98.9583% ( 11) 00:21:15.034 13812.972 - 13913.797: 99.0057% ( 7) 00:21:15.034 13913.797 - 14014.622: 99.0463% ( 6) 00:21:15.034 14014.622 - 14115.446: 99.0936% ( 7) 00:21:15.034 14115.446 - 14216.271: 99.1342% ( 6) 00:21:15.034 21677.292 - 21778.117: 99.1545% ( 3) 00:21:15.034 21778.117 - 21878.942: 99.1748% ( 3) 00:21:15.034 21878.942 - 21979.766: 99.1951% ( 3) 00:21:15.034 21979.766 - 22080.591: 99.2898% ( 14) 00:21:15.034 22080.591 - 22181.415: 99.4859% ( 29) 00:21:15.034 22181.415 - 22282.240: 99.5130% ( 4) 00:21:15.034 22282.240 - 22383.065: 99.5333% ( 3) 00:21:15.034 22383.065 - 22483.889: 99.5468% ( 2) 00:21:15.034 22483.889 - 22584.714: 99.5603% ( 2) 00:21:15.034 22584.714 - 22685.538: 99.5671% ( 1) 00:21:15.034 25710.277 - 25811.102: 99.5739% ( 1) 00:21:15.034 25811.102 - 26012.751: 99.6280% ( 8) 00:21:15.034 26012.751 - 26214.400: 99.6889% ( 9) 00:21:15.034 26214.400 - 26416.049: 99.8715% ( 27) 00:21:15.034 26416.049 - 26617.698: 100.0000% ( 19) 00:21:15.034 00:21:15.034 ************************************ 00:21:15.034 END TEST nvme_perf 00:21:15.034 ************************************ 00:21:15.034 20:19:09 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:21:15.034 00:21:15.034 real 0m2.510s 00:21:15.034 user 0m2.192s 00:21:15.034 sys 0m0.207s 00:21:15.034 20:19:09 nvme.nvme_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:15.034 20:19:09 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x 00:21:15.034 20:19:09 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:21:15.034 20:19:09 nvme -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:21:15.034 20:19:09 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:15.034 20:19:09 nvme -- common/autotest_common.sh@10 -- # set +x 00:21:15.034 ************************************ 00:21:15.034 START TEST nvme_hello_world 00:21:15.034 ************************************ 00:21:15.034 20:19:09 nvme.nvme_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:21:15.034 Initializing NVMe Controllers 00:21:15.034 Attached to 0000:00:10.0 00:21:15.034 Namespace ID: 1 size: 6GB 00:21:15.034 Attached to 0000:00:11.0 00:21:15.034 Namespace ID: 1 size: 5GB 00:21:15.034 Attached to 0000:00:13.0 00:21:15.034 Namespace ID: 1 size: 1GB 00:21:15.034 Attached to 0000:00:12.0 00:21:15.034 Namespace ID: 1 size: 4GB 00:21:15.034 Namespace ID: 2 size: 4GB 00:21:15.034 Namespace ID: 3 size: 4GB 00:21:15.034 Initialization complete. 
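The histograms above all use one bucket format, lines of the form "<lo> - <hi>: <cum%> ( <count> )", so they lend themselves to simple text processing. A minimal awk sketch for finding the first bucket that crosses a target cumulative percentage (histogram.txt is a hypothetical saved copy of the dump, with the leading timestamp column stripped):

    # Print the first bucket whose cumulative percentage reaches the target.
    awk -v target=99.0 '
        $2 == "-" && $4 ~ /%$/ {
            sub(/:$/, "", $3)          # drop the colon after the bucket upper bound
            if ($4 + 0 >= target) {    # "+ 0" coerces "6.9602%" to a number
                printf "first bucket reaching %s%%: %s - %s us\n", target, $1, $3
                exit
            }
        }' histogram.txt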
00:21:15.034 20:19:09 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:21:15.034 20:19:09 nvme -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']'
00:21:15.034 20:19:09 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable
00:21:15.034 20:19:09 nvme -- common/autotest_common.sh@10 -- # set +x
00:21:15.034 ************************************
00:21:15.034 START TEST nvme_hello_world
00:21:15.034 ************************************
00:21:15.034 20:19:09 nvme.nvme_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:21:15.034 Initializing NVMe Controllers
00:21:15.034 Attached to 0000:00:10.0
00:21:15.034 Namespace ID: 1 size: 6GB
00:21:15.034 Attached to 0000:00:11.0
00:21:15.034 Namespace ID: 1 size: 5GB
00:21:15.034 Attached to 0000:00:13.0
00:21:15.034 Namespace ID: 1 size: 1GB
00:21:15.034 Attached to 0000:00:12.0
00:21:15.034 Namespace ID: 1 size: 4GB
00:21:15.034 Namespace ID: 2 size: 4GB
00:21:15.034 Namespace ID: 3 size: 4GB
00:21:15.034 Initialization complete.
00:21:15.034 INFO: using host memory buffer for IO
00:21:15.034 Hello world!
00:21:15.034 INFO: using host memory buffer for IO
00:21:15.034 Hello world!
00:21:15.034 INFO: using host memory buffer for IO
00:21:15.034 Hello world!
00:21:15.034 INFO: using host memory buffer for IO
00:21:15.034 Hello world!
00:21:15.034 INFO: using host memory buffer for IO
00:21:15.034 Hello world!
00:21:15.034 INFO: using host memory buffer for IO
00:21:15.034 Hello world!
00:21:15.034 
00:21:15.034 real 0m0.225s
00:21:15.034 user 0m0.071s
00:21:15.034 sys 0m0.101s
00:21:15.034 ************************************
00:21:15.034 END TEST nvme_hello_world
00:21:15.034 ************************************
00:21:15.034 20:19:10 nvme.nvme_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable
00:21:15.034 20:19:10 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x
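Every test in this run is driven through the same wrapper, whose banners and timing lines recur throughout the log. A simplified sketch of that run_test-style behavior (the real helper in autotest_common.sh also manages xtrace and error propagation; this only approximates what is visible above):

    run_test_sketch() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"             # bash keyword: produces the real/user/sys lines seen above
        local rc=$?           # exit status of the wrapped command
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return $rc
    }

    # e.g. run_test_sketch nvme_hello_world "$rootdir/build/examples/hello_world" -i 0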
00:21:15.034 20:19:10 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
00:21:15.034 20:19:10 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:21:15.034 20:19:10 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable
00:21:15.034 20:19:10 nvme -- common/autotest_common.sh@10 -- # set +x
00:21:15.034 ************************************
00:21:15.034 START TEST nvme_sgl
00:21:15.034 ************************************
00:21:15.034 20:19:10 nvme.nvme_sgl -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
00:21:15.291 [expected-failure lines condensed: build_io_request_{0,1,3,8,9,11} on 0000:00:10.0 and 0000:00:11.0, and build_io_request_0 through build_io_request_11 on 0000:00:13.0 and 0000:00:12.0, each reported "Invalid IO length parameter"]
00:21:15.291 NVMe Readv/Writev Request test
00:21:15.291 Attached to 0000:00:10.0
00:21:15.291 Attached to 0000:00:11.0
00:21:15.291 Attached to 0000:00:13.0
00:21:15.291 Attached to 0000:00:12.0
00:21:15.291 0000:00:10.0: build_io_request_2 test passed
00:21:15.291 0000:00:10.0: build_io_request_4 test passed
00:21:15.291 0000:00:10.0: build_io_request_5 test passed
00:21:15.291 0000:00:10.0: build_io_request_6 test passed
00:21:15.291 0000:00:10.0: build_io_request_7 test passed
00:21:15.291 0000:00:10.0: build_io_request_10 test passed
00:21:15.291 0000:00:11.0: build_io_request_2 test passed
00:21:15.291 0000:00:11.0: build_io_request_4 test passed
00:21:15.291 0000:00:11.0: build_io_request_5 test passed
00:21:15.291 0000:00:11.0: build_io_request_6 test passed
00:21:15.291 0000:00:11.0: build_io_request_7 test passed
00:21:15.291 0000:00:11.0: build_io_request_10 test passed
00:21:15.291 Cleaning up...
00:21:15.291 
00:21:15.291 real 0m0.289s
00:21:15.291 user 0m0.152s
00:21:15.291 sys 0m0.092s
00:21:15.291 20:19:10 nvme.nvme_sgl -- common/autotest_common.sh@1126 -- # xtrace_disable
00:21:15.291 20:19:10 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x
00:21:15.548 ************************************
00:21:15.548 END TEST nvme_sgl
00:21:15.548 ************************************
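The expected-invalid and passed SGL cases above can be tallied per controller from a saved copy of the output (sgl.log is a hypothetical file name; timestamps assumed stripped):

    awk '/build_io_request_[0-9]+/ {
             dev = $1; sub(/:$/, "", dev); seen[dev] = 1
             if ($0 ~ /test passed/) pass[dev]++; else inval[dev]++
         }
         END {
             for (d in seen)
                 printf "%s: %d passed, %d expected-invalid\n", d, pass[d] + 0, inval[d] + 0
         }' sgl.log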
00:21:15.548 20:19:10 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
00:21:15.548 20:19:10 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:21:15.548 20:19:10 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable
00:21:15.548 20:19:10 nvme -- common/autotest_common.sh@10 -- # set +x
00:21:15.548 ************************************
00:21:15.548 START TEST nvme_e2edp
00:21:15.548 ************************************
00:21:15.548 20:19:10 nvme.nvme_e2edp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
00:21:15.548 NVMe Write/Read with End-to-End data protection test
00:21:15.548 Attached to 0000:00:10.0
00:21:15.548 Attached to 0000:00:11.0
00:21:15.548 Attached to 0000:00:13.0
00:21:15.548 Attached to 0000:00:12.0
00:21:15.548 Cleaning up...
00:21:15.548 
00:21:15.548 real 0m0.208s
00:21:15.548 user 0m0.069s
00:21:15.548 sys 0m0.097s
00:21:15.548 20:19:10 nvme.nvme_e2edp -- common/autotest_common.sh@1126 -- # xtrace_disable
00:21:15.549 20:19:10 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x
00:21:15.549 ************************************
00:21:15.549 END TEST nvme_e2edp
00:21:15.549 ************************************
00:21:15.806 20:19:10 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve
00:21:15.806 20:19:10 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:21:15.806 20:19:10 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable
00:21:15.806 20:19:10 nvme -- common/autotest_common.sh@10 -- # set +x
00:21:15.806 ************************************
00:21:15.806 START TEST nvme_reserve
00:21:15.806 ************************************
00:21:15.806 20:19:10 nvme.nvme_reserve -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve
00:21:15.806 =====================================================
00:21:15.806 NVMe Controller at PCI bus 0, device 16, function 0
00:21:15.806 =====================================================
00:21:15.806 Reservations: Not Supported
00:21:15.806 =====================================================
00:21:15.806 NVMe Controller at PCI bus 0, device 17, function 0
00:21:15.806 =====================================================
00:21:15.806 Reservations: Not Supported
00:21:15.806 =====================================================
00:21:15.806 NVMe Controller at PCI bus 0, device 19, function 0
00:21:15.806 =====================================================
00:21:15.806 Reservations: Not Supported
00:21:15.806 =====================================================
00:21:15.806 NVMe Controller at PCI bus 0, device 18, function 0
00:21:15.806 =====================================================
00:21:15.806 Reservations: Not Supported
00:21:15.806 Reservation test passed
00:21:15.806 
00:21:15.806 real 0m0.207s
00:21:15.806 user 0m0.051s
00:21:15.806 sys 0m0.105s
00:21:15.806 20:19:10 nvme.nvme_reserve -- common/autotest_common.sh@1126 -- # xtrace_disable
00:21:15.806 20:19:10 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x
00:21:15.806 ************************************
00:21:15.806 END TEST nvme_reserve
00:21:15.806 ************************************
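The "Reservations: Not Supported" result matches what the controllers advertise: bit 5 of the ONCS field in the Identify Controller data reports reservation support. An illustrative host-side check using nvme-cli against the kernel driver rather than SPDK (the device path is an example):

    oncs=$(nvme id-ctrl /dev/nvme0 -o json | jq -r '.oncs')
    if (( oncs & 0x20 )); then       # bit 5 of ONCS = reservations
        echo "reservations supported"
    else
        echo "Reservations: Not Supported"
    fi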
00:21:16.063 20:19:11 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection
00:21:16.063 20:19:11 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:21:16.063 20:19:11 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable
00:21:16.063 20:19:11 nvme -- common/autotest_common.sh@10 -- # set +x
00:21:16.063 ************************************
00:21:16.063 START TEST nvme_err_injection
00:21:16.063 ************************************
00:21:16.063 20:19:11 nvme.nvme_err_injection -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection
00:21:16.063 NVMe Error Injection test
00:21:16.063 Attached to 0000:00:10.0
00:21:16.063 Attached to 0000:00:11.0
00:21:16.063 Attached to 0000:00:13.0
00:21:16.063 Attached to 0000:00:12.0
00:21:16.063 0000:00:10.0: get features failed as expected
00:21:16.063 0000:00:11.0: get features failed as expected
00:21:16.063 0000:00:13.0: get features failed as expected
00:21:16.063 0000:00:12.0: get features failed as expected
00:21:16.063 0000:00:10.0: get features successfully as expected
00:21:16.063 0000:00:11.0: get features successfully as expected
00:21:16.063 0000:00:13.0: get features successfully as expected
00:21:16.063 0000:00:12.0: get features successfully as expected
00:21:16.063 0000:00:10.0: read failed as expected
00:21:16.063 0000:00:11.0: read failed as expected
00:21:16.063 0000:00:13.0: read failed as expected
00:21:16.063 0000:00:12.0: read failed as expected
00:21:16.063 0000:00:10.0: read successfully as expected
00:21:16.063 0000:00:11.0: read successfully as expected
00:21:16.063 0000:00:13.0: read successfully as expected
00:21:16.063 0000:00:12.0: read successfully as expected
00:21:16.063 Cleaning up...
00:21:16.063 ************************************
00:21:16.063 END TEST nvme_err_injection
00:21:16.063 ************************************
00:21:16.063 
00:21:16.063 real 0m0.236s
00:21:16.063 user 0m0.089s
00:21:16.063 sys 0m0.098s
00:21:16.063 20:19:11 nvme.nvme_err_injection -- common/autotest_common.sh@1126 -- # xtrace_disable
00:21:16.063 20:19:11 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x
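Each "failed as expected" line above should be paired with a later "successfully as expected" line once the injected error is removed. A quick symmetry check over a saved copy of the output (err_injection.log is a hypothetical file name):

    fails=$(grep -c 'failed as expected' err_injection.log)
    oks=$(grep -c 'successfully as expected' err_injection.log)
    [[ $fails -eq $oks ]] && echo "symmetric: $fails/$oks" || echo "mismatch: $fails vs $oks"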
00:21:16.321 20:19:11 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0
00:21:16.321 20:19:11 nvme -- common/autotest_common.sh@1101 -- # '[' 9 -le 1 ']'
00:21:16.321 20:19:11 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable
00:21:16.321 20:19:11 nvme -- common/autotest_common.sh@10 -- # set +x
00:21:16.321 ************************************
00:21:16.321 START TEST nvme_overhead
00:21:16.321 ************************************
00:21:16.321 20:19:11 nvme.nvme_overhead -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0
00:21:17.752 Initializing NVMe Controllers
00:21:17.752 Attached to 0000:00:10.0
00:21:17.752 Attached to 0000:00:11.0
00:21:17.752 Attached to 0000:00:13.0
00:21:17.752 Attached to 0000:00:12.0
00:21:17.752 Initialization complete. Launching workers.
00:21:17.752 submit (in ns)   avg, min, max = 12372.0, 9983.8, 349483.1
00:21:17.752 complete (in ns) avg, min, max = 8338.4, 7174.6, 40893.1
00:21:17.752 
00:21:17.752 Submit histogram
00:21:17.752 ================
00:21:17.752        Range in us     Cumulative     Count
00:21:17.753 [bucket detail condensed: first submissions in the 9.945 - 9.994 us bucket (0.0063%), ~50% by 11.225 us, ~99% by 22.055 us, 100.0000% at 349.735 us]
00:21:17.753 
00:21:17.753 Complete histogram
00:21:17.753 ==================
00:21:17.753        Range in us     Cumulative     Count
00:21:17.755 [bucket detail condensed: first completions in the 7.138 - 7.188 us bucket (0.0063%), ~50% by 7.532 us, ~99% by 14.671 us, 100.0000% at 40.960 us]
00:21:17.755 
00:21:17.755 real 0m1.207s
00:21:17.755 user 0m1.069s
00:21:17.755 sys 0m0.090s
00:21:17.755 20:19:12 nvme.nvme_overhead -- common/autotest_common.sh@1126 -- # xtrace_disable
00:21:17.755 20:19:12 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x
00:21:17.755 ************************************
00:21:17.755 END TEST nvme_overhead
00:21:17.755 ************************************
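As a sanity check on the numbers above, the average submission overhead alone bounds the single-threaded request rate: with avg submit = 12372.0 ns, at most about 1e9 / 12372 (roughly 80.8K) submissions per second fit on one core. The same arithmetic in shell form:

    awk 'BEGIN { printf "max single-core submit rate ~ %.1f Kops/s\n", 1e6 / 12372.0 }'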
00:21:21.032 Starting thread on core 1 with urgent priority queue 00:21:21.032 Starting thread on core 2 with urgent priority queue 00:21:21.032 Starting thread on core 3 with urgent priority queue 00:21:21.032 Starting thread on core 0 with urgent priority queue 00:21:21.032 QEMU NVMe Ctrl (12340 ) core 0: 981.33 IO/s 101.90 secs/100000 ios 00:21:21.032 QEMU NVMe Ctrl (12342 ) core 0: 981.33 IO/s 101.90 secs/100000 ios 00:21:21.032 QEMU NVMe Ctrl (12341 ) core 1: 917.33 IO/s 109.01 secs/100000 ios 00:21:21.032 QEMU NVMe Ctrl (12342 ) core 1: 917.33 IO/s 109.01 secs/100000 ios 00:21:21.032 QEMU NVMe Ctrl (12343 ) core 2: 960.00 IO/s 104.17 secs/100000 ios 00:21:21.032 QEMU NVMe Ctrl (12342 ) core 3: 981.33 IO/s 101.90 secs/100000 ios 00:21:21.032 ======================================================== 00:21:21.032 00:21:21.032 00:21:21.032 real 0m3.281s 00:21:21.032 user 0m9.172s 00:21:21.032 sys 0m0.118s 00:21:21.032 20:19:15 nvme.nvme_arbitration -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:21.032 20:19:15 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x 00:21:21.032 ************************************ 00:21:21.032 END TEST nvme_arbitration 00:21:21.032 ************************************ 00:21:21.032 20:19:15 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:21:21.032 20:19:15 nvme -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:21:21.032 20:19:15 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:21.032 20:19:15 nvme -- common/autotest_common.sh@10 -- # set +x 00:21:21.032 ************************************ 00:21:21.032 START TEST nvme_single_aen 00:21:21.032 ************************************ 00:21:21.032 20:19:15 nvme.nvme_single_aen -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:21:21.032 Asynchronous Event Request test 00:21:21.032 Attached to 0000:00:10.0 00:21:21.032 Attached to 0000:00:11.0 00:21:21.032 Attached to 0000:00:13.0 00:21:21.032 Attached to 0000:00:12.0 00:21:21.032 Reset controller to setup AER completions for this process 00:21:21.032 Registering asynchronous event callbacks... 
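For reference, the arbitration results a few lines up come from SPDK's arbitration example: four cores each drive an urgent-priority queue against the four QEMU controllers, and each per-core line reports throughput as IO/s plus the projected time to complete 100000 IOs. A minimal sketch of reproducing that run outside the harness, assuming a built SPDK tree at the logged path (the log resumes with the nvme_single_aen output below):

  # Sketch only: flags mirror the logged invocation (-t 3 = 3-second run,
  # -i 0 = shared-memory instance id); the harness wraps this in run_test.
  SPDK_DIR=/home/vagrant/spdk_repo/spdk
  sudo "$SPDK_DIR/build/examples/arbitration" -t 3 -i 0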
00:21:21.032 Getting orig temperature thresholds of all controllers 00:21:21.032 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:21:21.032 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:21:21.032 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:21:21.032 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:21:21.032 Setting all controllers temperature threshold low to trigger AER 00:21:21.032 Waiting for all controllers temperature threshold to be set lower 00:21:21.032 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:21:21.032 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:21:21.032 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:21:21.032 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:21:21.032 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:21:21.032 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:21:21.032 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:21:21.032 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:21:21.032 Waiting for all controllers to trigger AER and reset threshold 00:21:21.032 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:21:21.032 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:21:21.032 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:21:21.032 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:21:21.032 Cleaning up... 00:21:21.032 ************************************ 00:21:21.032 END TEST nvme_single_aen 00:21:21.032 ************************************ 00:21:21.032 00:21:21.032 real 0m0.220s 00:21:21.032 user 0m0.066s 00:21:21.032 sys 0m0.108s 00:21:21.032 20:19:16 nvme.nvme_single_aen -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:21.032 20:19:16 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:21:21.032 20:19:16 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:21:21.032 20:19:16 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:21:21.032 20:19:16 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:21.032 20:19:16 nvme -- common/autotest_common.sh@10 -- # set +x 00:21:21.032 ************************************ 00:21:21.032 START TEST nvme_doorbell_aers 00:21:21.032 ************************************ 00:21:21.032 20:19:16 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1125 -- # nvme_doorbell_aers 00:21:21.032 20:19:16 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:21:21.032 20:19:16 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:21:21.032 20:19:16 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:21:21.032 20:19:16 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:21:21.032 20:19:16 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1496 -- # bdfs=() 00:21:21.032 20:19:16 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1496 -- # local bdfs 00:21:21.032 20:19:16 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:21:21.032 20:19:16 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:21:21.032 20:19:16 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 
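As the trace above shows, nvme_doorbell_aers discovers its target devices by rendering a config with gen_nvme.sh and pulling each controller's PCI address out with jq. The pattern, isolated as a sketch (rootdir is the logged repo path; on this VM the output is the four 1b36:0010 devices):

  # Sketch of the bdf-discovery pipeline from common/autotest_common.sh
  rootdir=/home/vagrant/spdk_repo/spdk
  bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
  printf '%s\n' "${bdfs[@]}"   # 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0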
00:21:21.032 20:19:16 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:21:21.033 20:19:16 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:21:21.033 20:19:16 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:21:21.033 20:19:16 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:21:21.290 [2024-10-01 20:19:16.376684] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63896) is not found. Dropping the request. 00:21:31.252 Executing: test_write_invalid_db 00:21:31.252 Waiting for AER completion... 00:21:31.252 Failure: test_write_invalid_db 00:21:31.252 00:21:31.252 Executing: test_invalid_db_write_overflow_sq 00:21:31.252 Waiting for AER completion... 00:21:31.252 Failure: test_invalid_db_write_overflow_sq 00:21:31.252 00:21:31.252 Executing: test_invalid_db_write_overflow_cq 00:21:31.252 Waiting for AER completion... 00:21:31.252 Failure: test_invalid_db_write_overflow_cq 00:21:31.252 00:21:31.252 20:19:26 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:21:31.252 20:19:26 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:21:31.252 [2024-10-01 20:19:26.412655] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63896) is not found. Dropping the request. 00:21:41.215 Executing: test_write_invalid_db 00:21:41.215 Waiting for AER completion... 00:21:41.215 Failure: test_write_invalid_db 00:21:41.215 00:21:41.215 Executing: test_invalid_db_write_overflow_sq 00:21:41.215 Waiting for AER completion... 00:21:41.215 Failure: test_invalid_db_write_overflow_sq 00:21:41.215 00:21:41.215 Executing: test_invalid_db_write_overflow_cq 00:21:41.215 Waiting for AER completion... 00:21:41.215 Failure: test_invalid_db_write_overflow_cq 00:21:41.215 00:21:41.215 20:19:36 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:21:41.215 20:19:36 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:21:41.472 [2024-10-01 20:19:36.464522] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63896) is not found. Dropping the request. 00:21:51.507 Executing: test_write_invalid_db 00:21:51.507 Waiting for AER completion... 00:21:51.507 Failure: test_write_invalid_db 00:21:51.507 00:21:51.507 Executing: test_invalid_db_write_overflow_sq 00:21:51.507 Waiting for AER completion... 00:21:51.507 Failure: test_invalid_db_write_overflow_sq 00:21:51.507 00:21:51.507 Executing: test_invalid_db_write_overflow_cq 00:21:51.507 Waiting for AER completion... 
00:21:51.507 Failure: test_invalid_db_write_overflow_cq 00:21:51.507 00:21:51.507 20:19:46 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:21:51.507 20:19:46 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 00:21:51.508 [2024-10-01 20:19:46.495067] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63896) is not found. Dropping the request. 00:22:01.463 Executing: test_write_invalid_db 00:22:01.463 Waiting for AER completion... 00:22:01.463 Failure: test_write_invalid_db 00:22:01.463 00:22:01.463 Executing: test_invalid_db_write_overflow_sq 00:22:01.463 Waiting for AER completion... 00:22:01.463 Failure: test_invalid_db_write_overflow_sq 00:22:01.463 00:22:01.463 Executing: test_invalid_db_write_overflow_cq 00:22:01.463 Waiting for AER completion... 00:22:01.463 Failure: test_invalid_db_write_overflow_cq 00:22:01.463 00:22:01.463 00:22:01.463 real 0m40.187s 00:22:01.463 user 0m34.098s 00:22:01.463 sys 0m5.688s 00:22:01.463 20:19:56 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:01.463 20:19:56 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:22:01.463 ************************************ 00:22:01.463 END TEST nvme_doorbell_aers 00:22:01.463 ************************************ 00:22:01.463 20:19:56 nvme -- nvme/nvme.sh@97 -- # uname 00:22:01.463 20:19:56 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:22:01.463 20:19:56 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:22:01.463 20:19:56 nvme -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:22:01.463 20:19:56 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:01.463 20:19:56 nvme -- common/autotest_common.sh@10 -- # set +x 00:22:01.463 ************************************ 00:22:01.463 START TEST nvme_multi_aen 00:22:01.463 ************************************ 00:22:01.463 20:19:56 nvme.nvme_multi_aen -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:22:01.463 [2024-10-01 20:19:56.512969] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63896) is not found. Dropping the request. 00:22:01.463 [2024-10-01 20:19:56.513212] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63896) is not found. Dropping the request. 00:22:01.464 [2024-10-01 20:19:56.513277] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63896) is not found. Dropping the request. 00:22:01.464 [2024-10-01 20:19:56.514587] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63896) is not found. Dropping the request. 00:22:01.464 [2024-10-01 20:19:56.514704] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63896) is not found. Dropping the request. 00:22:01.464 [2024-10-01 20:19:56.514716] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63896) is not found. Dropping the request. 00:22:01.464 [2024-10-01 20:19:56.515718] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63896) is not found. 
Dropping the request. 00:22:01.464 [2024-10-01 20:19:56.515743] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63896) is not found. Dropping the request. 00:22:01.464 [2024-10-01 20:19:56.515750] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63896) is not found. Dropping the request. 00:22:01.464 [2024-10-01 20:19:56.516656] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63896) is not found. Dropping the request. 00:22:01.464 [2024-10-01 20:19:56.516676] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63896) is not found. Dropping the request. 00:22:01.464 [2024-10-01 20:19:56.516683] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63896) is not found. Dropping the request. 00:22:01.464 Child process pid: 64423 00:22:01.721 [Child] Asynchronous Event Request test 00:22:01.721 [Child] Attached to 0000:00:10.0 00:22:01.721 [Child] Attached to 0000:00:11.0 00:22:01.721 [Child] Attached to 0000:00:13.0 00:22:01.721 [Child] Attached to 0000:00:12.0 00:22:01.721 [Child] Registering asynchronous event callbacks... 00:22:01.721 [Child] Getting orig temperature thresholds of all controllers 00:22:01.721 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:22:01.721 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:22:01.721 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:22:01.721 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:22:01.721 [Child] Waiting for all controllers to trigger AER and reset threshold 00:22:01.721 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:22:01.721 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:22:01.721 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:22:01.721 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:22:01.721 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:22:01.721 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:22:01.721 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:22:01.721 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:22:01.721 [Child] Cleaning up... 00:22:01.721 Asynchronous Event Request test 00:22:01.721 Attached to 0000:00:10.0 00:22:01.721 Attached to 0000:00:11.0 00:22:01.721 Attached to 0000:00:13.0 00:22:01.721 Attached to 0000:00:12.0 00:22:01.721 Reset controller to setup AER completions for this process 00:22:01.721 Registering asynchronous event callbacks... 
00:22:01.721 Getting orig temperature thresholds of all controllers 00:22:01.721 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:22:01.721 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:22:01.721 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:22:01.721 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:22:01.721 Setting all controllers temperature threshold low to trigger AER 00:22:01.721 Waiting for all controllers temperature threshold to be set lower 00:22:01.721 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:22:01.721 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:22:01.721 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:22:01.721 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:22:01.721 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:22:01.721 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:22:01.721 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:22:01.721 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:22:01.721 Waiting for all controllers to trigger AER and reset threshold 00:22:01.721 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:22:01.721 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:22:01.721 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:22:01.721 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:22:01.721 Cleaning up... 00:22:01.721 00:22:01.721 real 0m0.417s 00:22:01.721 user 0m0.137s 00:22:01.721 sys 0m0.174s 00:22:01.721 20:19:56 nvme.nvme_multi_aen -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:01.721 20:19:56 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:22:01.721 ************************************ 00:22:01.721 END TEST nvme_multi_aen 00:22:01.721 ************************************ 00:22:01.721 20:19:56 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:22:01.721 20:19:56 nvme -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:22:01.721 20:19:56 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:01.721 20:19:56 nvme -- common/autotest_common.sh@10 -- # set +x 00:22:01.721 ************************************ 00:22:01.721 START TEST nvme_startup 00:22:01.721 ************************************ 00:22:01.721 20:19:56 nvme.nvme_startup -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:22:01.978 Initializing NVMe Controllers 00:22:01.978 Attached to 0000:00:10.0 00:22:01.978 Attached to 0000:00:11.0 00:22:01.978 Attached to 0000:00:13.0 00:22:01.978 Attached to 0000:00:12.0 00:22:01.978 Initialization complete. 00:22:01.978 Time used:138732.500 (us). 
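The nvme_startup run above attaches all four controllers and reports the wall time spent in initialization, roughly 138.7 ms here, comfortably inside the budget passed on the command line. A sketch of the direct invocation, with the path and -t value copied from the logged run_test line (reading -t as a microsecond budget is an assumption, inferred from the "Time used ... (us)" report):

  # Sketch only: fails if controller bring-up exceeds the -t budget.
  sudo /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000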
00:22:01.978 ************************************ 00:22:01.978 END TEST nvme_startup 00:22:01.978 ************************************ 00:22:01.978 00:22:01.978 real 0m0.201s 00:22:01.978 user 0m0.061s 00:22:01.978 sys 0m0.099s 00:22:01.978 20:19:57 nvme.nvme_startup -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:01.978 20:19:57 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:22:01.978 20:19:57 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:22:01.978 20:19:57 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:22:01.978 20:19:57 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:01.978 20:19:57 nvme -- common/autotest_common.sh@10 -- # set +x 00:22:01.978 ************************************ 00:22:01.978 START TEST nvme_multi_secondary 00:22:01.978 ************************************ 00:22:01.978 20:19:57 nvme.nvme_multi_secondary -- common/autotest_common.sh@1125 -- # nvme_multi_secondary 00:22:01.978 20:19:57 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=64468 00:22:01.978 20:19:57 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:22:01.978 20:19:57 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=64469 00:22:01.978 20:19:57 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:22:01.978 20:19:57 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:22:05.252 Initializing NVMe Controllers 00:22:05.252 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:22:05.252 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:22:05.252 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:22:05.252 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:22:05.252 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:22:05.252 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:22:05.252 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:22:05.252 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:22:05.252 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:22:05.252 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:22:05.252 Initialization complete. Launching workers. 
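nvme_multi_secondary launches one primary spdk_nvme_perf (pid0, core mask 0x1, 5 seconds) and two secondary processes (core masks 0x2 and 0x4, 3 seconds each) that attach to the same controllers through the shared "-i 0" instance, so the secondaries run and exit while the primary still owns the devices. A hedged sketch of the pattern, with flags copied from the trace; the plain backgrounding-and-wait is an assumption about how the harness overlaps them:

  PERF=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf
  sudo "$PERF" -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 &   # primary, core 0
  sudo "$PERF" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 &   # secondary, core 1
  sudo "$PERF" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 &   # secondary, core 2
  wait   # all three must exit cleanly for the test to pass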
00:22:05.252 ======================================================== 00:22:05.252 Latency(us) 00:22:05.252 Device Information : IOPS MiB/s Average min max 00:22:05.252 PCIE (0000:00:10.0) NSID 1 from core 2: 3389.17 13.24 4719.20 832.96 13885.24 00:22:05.252 PCIE (0000:00:11.0) NSID 1 from core 2: 3389.17 13.24 4715.46 843.80 14365.31 00:22:05.252 PCIE (0000:00:13.0) NSID 1 from core 2: 3389.17 13.24 4714.21 835.75 13947.73 00:22:05.252 PCIE (0000:00:12.0) NSID 1 from core 2: 3389.17 13.24 4714.17 825.73 12473.55 00:22:05.252 PCIE (0000:00:12.0) NSID 2 from core 2: 3389.17 13.24 4713.92 816.73 18277.99 00:22:05.252 PCIE (0000:00:12.0) NSID 3 from core 2: 3389.17 13.24 4714.24 831.66 13504.77 00:22:05.252 ======================================================== 00:22:05.252 Total : 20335.01 79.43 4715.20 816.73 18277.99 00:22:05.252 00:22:05.252 20:20:00 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 64468 00:22:05.509 Initializing NVMe Controllers 00:22:05.509 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:22:05.509 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:22:05.509 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:22:05.509 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:22:05.509 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:22:05.509 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:22:05.509 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:22:05.509 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:22:05.509 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:22:05.509 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:22:05.509 Initialization complete. Launching workers. 00:22:05.509 ======================================================== 00:22:05.509 Latency(us) 00:22:05.509 Device Information : IOPS MiB/s Average min max 00:22:05.509 PCIE (0000:00:10.0) NSID 1 from core 1: 7795.48 30.45 2050.91 936.08 5452.13 00:22:05.509 PCIE (0000:00:11.0) NSID 1 from core 1: 7795.48 30.45 2051.82 998.07 5369.59 00:22:05.509 PCIE (0000:00:13.0) NSID 1 from core 1: 7795.48 30.45 2051.67 991.89 5467.24 00:22:05.509 PCIE (0000:00:12.0) NSID 1 from core 1: 7795.48 30.45 2051.55 980.47 5543.40 00:22:05.509 PCIE (0000:00:12.0) NSID 2 from core 1: 7795.48 30.45 2051.50 996.57 5845.67 00:22:05.509 PCIE (0000:00:12.0) NSID 3 from core 1: 7795.48 30.45 2051.32 979.82 5265.98 00:22:05.509 ======================================================== 00:22:05.509 Total : 46772.90 182.71 2051.46 936.08 5845.67 00:22:05.509 00:22:07.403 Initializing NVMe Controllers 00:22:07.403 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:22:07.403 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:22:07.403 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:22:07.403 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:22:07.403 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:22:07.403 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:22:07.403 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:22:07.403 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:22:07.403 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:22:07.403 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:22:07.403 Initialization complete. Launching workers. 
00:22:07.403 ======================================================== 00:22:07.403 Latency(us) 00:22:07.403 Device Information : IOPS MiB/s Average min max 00:22:07.403 PCIE (0000:00:10.0) NSID 1 from core 0: 10396.19 40.61 1537.69 640.12 5663.52 00:22:07.403 PCIE (0000:00:11.0) NSID 1 from core 0: 10396.19 40.61 1538.60 656.43 5539.85 00:22:07.403 PCIE (0000:00:13.0) NSID 1 from core 0: 10396.19 40.61 1538.58 656.91 5477.36 00:22:07.403 PCIE (0000:00:12.0) NSID 1 from core 0: 10396.19 40.61 1538.55 646.81 5507.10 00:22:07.403 PCIE (0000:00:12.0) NSID 2 from core 0: 10396.19 40.61 1538.53 605.20 5452.74 00:22:07.403 PCIE (0000:00:12.0) NSID 3 from core 0: 10396.19 40.61 1538.50 557.80 5319.07 00:22:07.403 ======================================================== 00:22:07.403 Total : 62377.13 243.66 1538.41 557.80 5663.52 00:22:07.403 00:22:07.403 20:20:02 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 64469 00:22:07.403 20:20:02 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=64544 00:22:07.403 20:20:02 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:22:07.403 20:20:02 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=64545 00:22:07.403 20:20:02 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:22:07.403 20:20:02 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:22:10.683 Initializing NVMe Controllers 00:22:10.683 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:22:10.683 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:22:10.683 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:22:10.683 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:22:10.683 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:22:10.683 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:22:10.683 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:22:10.683 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:22:10.683 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:22:10.683 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:22:10.683 Initialization complete. Launching workers. 
00:22:10.683 ======================================================== 00:22:10.684 Latency(us) 00:22:10.684 Device Information : IOPS MiB/s Average min max 00:22:10.684 PCIE (0000:00:10.0) NSID 1 from core 0: 7888.62 30.81 2026.79 756.34 6206.85 00:22:10.684 PCIE (0000:00:11.0) NSID 1 from core 0: 7888.62 30.81 2027.78 777.54 6104.74 00:22:10.684 PCIE (0000:00:13.0) NSID 1 from core 0: 7888.62 30.81 2027.80 760.02 6085.99 00:22:10.684 PCIE (0000:00:12.0) NSID 1 from core 0: 7888.62 30.81 2027.83 763.02 6482.86 00:22:10.684 PCIE (0000:00:12.0) NSID 2 from core 0: 7888.62 30.81 2027.85 772.31 6107.59 00:22:10.684 PCIE (0000:00:12.0) NSID 3 from core 0: 7888.62 30.81 2027.88 771.08 6188.68 00:22:10.684 ======================================================== 00:22:10.684 Total : 47331.70 184.89 2027.66 756.34 6482.86 00:22:10.684 00:22:10.684 Initializing NVMe Controllers 00:22:10.684 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:22:10.684 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:22:10.684 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:22:10.684 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:22:10.684 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:22:10.684 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:22:10.684 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:22:10.684 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:22:10.684 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:22:10.684 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:22:10.684 Initialization complete. Launching workers. 00:22:10.684 ======================================================== 00:22:10.684 Latency(us) 00:22:10.684 Device Information : IOPS MiB/s Average min max 00:22:10.684 PCIE (0000:00:10.0) NSID 1 from core 1: 7673.14 29.97 2083.73 721.19 6393.38 00:22:10.684 PCIE (0000:00:11.0) NSID 1 from core 1: 7673.14 29.97 2084.66 737.59 5619.31 00:22:10.684 PCIE (0000:00:13.0) NSID 1 from core 1: 7673.14 29.97 2084.60 744.50 6035.99 00:22:10.684 PCIE (0000:00:12.0) NSID 1 from core 1: 7673.14 29.97 2084.55 714.24 5595.79 00:22:10.684 PCIE (0000:00:12.0) NSID 2 from core 1: 7673.14 29.97 2084.49 663.66 6018.55 00:22:10.684 PCIE (0000:00:12.0) NSID 3 from core 1: 7673.14 29.97 2084.43 646.78 5918.33 00:22:10.684 ======================================================== 00:22:10.684 Total : 46038.82 179.84 2084.41 646.78 6393.38 00:22:10.684 00:22:13.211 Initializing NVMe Controllers 00:22:13.211 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:22:13.211 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:22:13.211 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:22:13.211 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:22:13.211 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:22:13.211 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:22:13.211 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:22:13.211 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:22:13.211 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:22:13.211 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:22:13.211 Initialization complete. Launching workers. 
00:22:13.211 ======================================================== 00:22:13.211 Latency(us) 00:22:13.211 Device Information : IOPS MiB/s Average min max 00:22:13.211 PCIE (0000:00:10.0) NSID 1 from core 2: 4594.79 17.95 3480.44 733.37 15060.10 00:22:13.211 PCIE (0000:00:11.0) NSID 1 from core 2: 4594.79 17.95 3481.80 760.50 14450.92 00:22:13.211 PCIE (0000:00:13.0) NSID 1 from core 2: 4594.79 17.95 3481.75 763.84 12978.89 00:22:13.211 PCIE (0000:00:12.0) NSID 1 from core 2: 4594.79 17.95 3481.71 752.41 13090.66 00:22:13.211 PCIE (0000:00:12.0) NSID 2 from core 2: 4594.79 17.95 3481.49 762.29 12978.51 00:22:13.211 PCIE (0000:00:12.0) NSID 3 from core 2: 4594.79 17.95 3481.44 772.25 14552.03 00:22:13.211 ======================================================== 00:22:13.211 Total : 27568.72 107.69 3481.44 733.37 15060.10 00:22:13.211 00:22:13.211 20:20:07 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 64544 00:22:13.211 20:20:07 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 64545 00:22:13.211 00:22:13.211 real 0m10.812s 00:22:13.211 user 0m18.348s 00:22:13.211 sys 0m0.689s 00:22:13.211 20:20:07 nvme.nvme_multi_secondary -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:13.211 20:20:07 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:22:13.211 ************************************ 00:22:13.211 END TEST nvme_multi_secondary 00:22:13.211 ************************************ 00:22:13.211 20:20:07 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:22:13.211 20:20:07 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:22:13.211 20:20:07 nvme -- common/autotest_common.sh@1089 -- # [[ -e /proc/63493 ]] 00:22:13.211 20:20:07 nvme -- common/autotest_common.sh@1090 -- # kill 63493 00:22:13.211 20:20:07 nvme -- common/autotest_common.sh@1091 -- # wait 63493 00:22:13.211 [2024-10-01 20:20:07.896061] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64422) is not found. Dropping the request. 00:22:13.211 [2024-10-01 20:20:07.896122] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64422) is not found. Dropping the request. 00:22:13.211 [2024-10-01 20:20:07.896146] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64422) is not found. Dropping the request. 00:22:13.211 [2024-10-01 20:20:07.896160] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64422) is not found. Dropping the request. 00:22:13.212 [2024-10-01 20:20:07.898810] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64422) is not found. Dropping the request. 00:22:13.212 [2024-10-01 20:20:07.898922] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64422) is not found. Dropping the request. 00:22:13.212 [2024-10-01 20:20:07.898957] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64422) is not found. Dropping the request. 00:22:13.212 [2024-10-01 20:20:07.898986] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64422) is not found. Dropping the request. 00:22:13.212 [2024-10-01 20:20:07.902801] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64422) is not found. Dropping the request. 
00:22:13.212 [2024-10-01 20:20:07.902887] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64422) is not found. Dropping the request. 00:22:13.212 [2024-10-01 20:20:07.902915] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64422) is not found. Dropping the request. 00:22:13.212 [2024-10-01 20:20:07.902944] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64422) is not found. Dropping the request. 00:22:13.212 [2024-10-01 20:20:07.906433] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64422) is not found. Dropping the request. 00:22:13.212 [2024-10-01 20:20:07.906476] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64422) is not found. Dropping the request. 00:22:13.212 [2024-10-01 20:20:07.906486] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64422) is not found. Dropping the request. 00:22:13.212 [2024-10-01 20:20:07.906497] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64422) is not found. Dropping the request. 00:22:13.212 20:20:08 nvme -- common/autotest_common.sh@1093 -- # rm -f /var/run/spdk_stub0 00:22:13.212 20:20:08 nvme -- common/autotest_common.sh@1097 -- # echo 2 00:22:13.212 20:20:08 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:22:13.212 20:20:08 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:22:13.212 20:20:08 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:13.212 20:20:08 nvme -- common/autotest_common.sh@10 -- # set +x 00:22:13.212 ************************************ 00:22:13.212 START TEST bdev_nvme_reset_stuck_adm_cmd 00:22:13.212 ************************************ 00:22:13.212 20:20:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:22:13.212 * Looking for test storage... 
00:22:13.212 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:22:13.212 20:20:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:22:13.212 20:20:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1681 -- # lcov --version 00:22:13.212 20:20:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:22:13.212 20:20:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:22:13.212 20:20:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:13.212 20:20:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:13.212 20:20:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:13.212 20:20:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # IFS=.-: 00:22:13.212 20:20:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # read -ra ver1 00:22:13.212 20:20:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # IFS=.-: 00:22:13.212 20:20:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # read -ra ver2 00:22:13.212 20:20:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@338 -- # local 'op=<' 00:22:13.212 20:20:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@340 -- # ver1_l=2 00:22:13.212 20:20:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@341 -- # ver2_l=1 00:22:13.212 20:20:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:13.212 20:20:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@344 -- # case "$op" in 00:22:13.212 20:20:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@345 -- # : 1 00:22:13.212 20:20:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:13.212 20:20:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:13.212 20:20:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # decimal 1 00:22:13.212 20:20:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=1 00:22:13.212 20:20:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:13.212 20:20:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 1 00:22:13.212 20:20:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # ver1[v]=1 00:22:13.212 20:20:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # decimal 2 00:22:13.212 20:20:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=2 00:22:13.212 20:20:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:13.212 20:20:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 2 00:22:13.212 20:20:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # ver2[v]=2 00:22:13.212 20:20:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:13.212 20:20:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:13.212 20:20:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # return 0 00:22:13.212 20:20:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:13.212 20:20:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:22:13.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:13.212 --rc genhtml_branch_coverage=1 00:22:13.212 --rc genhtml_function_coverage=1 00:22:13.212 --rc genhtml_legend=1 00:22:13.212 --rc geninfo_all_blocks=1 00:22:13.212 --rc geninfo_unexecuted_blocks=1 00:22:13.212 00:22:13.212 ' 00:22:13.212 20:20:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:22:13.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:13.212 --rc genhtml_branch_coverage=1 00:22:13.212 --rc genhtml_function_coverage=1 00:22:13.212 --rc genhtml_legend=1 00:22:13.212 --rc geninfo_all_blocks=1 00:22:13.212 --rc geninfo_unexecuted_blocks=1 00:22:13.212 00:22:13.212 ' 00:22:13.212 20:20:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:22:13.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:13.212 --rc genhtml_branch_coverage=1 00:22:13.212 --rc genhtml_function_coverage=1 00:22:13.212 --rc genhtml_legend=1 00:22:13.212 --rc geninfo_all_blocks=1 00:22:13.212 --rc geninfo_unexecuted_blocks=1 00:22:13.212 00:22:13.212 ' 00:22:13.212 20:20:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:22:13.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:13.212 --rc genhtml_branch_coverage=1 00:22:13.212 --rc genhtml_function_coverage=1 00:22:13.212 --rc genhtml_legend=1 00:22:13.212 --rc geninfo_all_blocks=1 00:22:13.212 --rc geninfo_unexecuted_blocks=1 00:22:13.212 00:22:13.212 ' 00:22:13.212 20:20:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:22:13.212 20:20:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:22:13.212 20:20:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:22:13.212 
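The scripts/common.sh trace above is the coverage tooling check: cmp_versions splits "1.15" and "2" on dots and compares component by component to decide whether the installed lcov predates 2.x, which controls the --rc option spelling used later. Condensed as a sketch of the visible behavior, not the verbatim helper (this uses GNU sort -V instead of the componentwise loop):

  lt() {  # succeeds when version $1 sorts strictly before $2
    [ "$1" != "$2" ] && [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
  }
  lt 1.15 2 && echo "lcov < 2: keep the legacy --rc lcov_* option spelling"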
20:20:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:22:13.212 20:20:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:22:13.212 20:20:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:22:13.212 20:20:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1507 -- # bdfs=() 00:22:13.212 20:20:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1507 -- # local bdfs 00:22:13.212 20:20:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs)) 00:22:13.212 20:20:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1508 -- # get_nvme_bdfs 00:22:13.212 20:20:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1496 -- # bdfs=() 00:22:13.212 20:20:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1496 -- # local bdfs 00:22:13.212 20:20:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:22:13.212 20:20:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:22:13.212 20:20:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:22:13.212 20:20:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:22:13.212 20:20:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:22:13.212 20:20:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # echo 0000:00:10.0 00:22:13.212 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:13.212 20:20:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:22:13.212 20:20:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:22:13.212 20:20:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=64706 00:22:13.212 20:20:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:22:13.212 20:20:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 64706 00:22:13.212 20:20:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@831 -- # '[' -z 64706 ']' 00:22:13.212 20:20:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:13.212 20:20:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:13.212 20:20:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
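The test then starts a dedicated spdk_tgt on four cores (-m 0xF) and blocks in waitforlisten until the target's RPC socket answers, as the trace below shows. A plain-shell equivalent of that bring-up, sketched; the polling loop is an assumed stand-in for the waitforlisten helper in common/autotest_common.sh:

  sudo /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF &
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do
    sleep 0.1   # keep polling until /var/tmp/spdk.sock accepts RPCs
  done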
00:22:13.212 20:20:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:13.212 20:20:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:22:13.212 20:20:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:22:13.212 [2024-10-01 20:20:08.371403] Starting SPDK v25.01-pre git sha1 0c2005fb5 / DPDK 24.03.0 initialization... 00:22:13.212 [2024-10-01 20:20:08.371659] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64706 ] 00:22:13.470 [2024-10-01 20:20:08.524329] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:13.728 [2024-10-01 20:20:08.718357] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:22:13.728 [2024-10-01 20:20:08.718466] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:22:13.728 [2024-10-01 20:20:08.718558] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:22:13.728 [2024-10-01 20:20:08.718611] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:22:14.661 20:20:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:14.661 20:20:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@864 -- # return 0 00:22:14.661 20:20:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:22:14.661 20:20:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:14.661 20:20:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:22:14.661 nvme0n1 00:22:14.661 20:20:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:14.661 20:20:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:22:14.661 20:20:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_bAxFl.txt 00:22:14.661 20:20:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:22:14.661 20:20:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:14.661 20:20:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:22:14.661 true 00:22:14.661 20:20:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:14.661 20:20:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:22:14.661 20:20:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1727814009 00:22:14.661 20:20:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=64735 00:22:14.661 20:20:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:22:14.661 20:20:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:22:14.661 20:20:09 
nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:22:16.560 20:20:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:22:16.561 20:20:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:16.561 20:20:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:22:16.561 [2024-10-01 20:20:11.613308] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:22:16.561 [2024-10-01 20:20:11.613585] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:16.561 [2024-10-01 20:20:11.613609] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:22:16.561 [2024-10-01 20:20:11.613623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:16.561 [2024-10-01 20:20:11.615239] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:22:16.561 20:20:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:16.561 20:20:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 64735 00:22:16.561 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 64735 00:22:16.561 20:20:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 64735 00:22:16.561 20:20:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:22:16.561 20:20:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:22:16.561 20:20:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:16.561 20:20:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:16.561 20:20:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:22:16.561 20:20:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:16.561 20:20:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:22:16.561 20:20:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_bAxFl.txt 00:22:16.561 20:20:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:22:16.561 20:20:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:22:16.561 20:20:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:22:16.561 20:20:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:22:16.561 20:20:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:22:16.561 20:20:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- 
nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:22:16.561 20:20:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:22:16.561 20:20:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:22:16.561 20:20:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:22:16.561 20:20:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:22:16.561 20:20:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:22:16.561 20:20:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:22:16.561 20:20:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:22:16.561 20:20:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:22:16.561 20:20:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:22:16.561 20:20:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:22:16.561 20:20:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:22:16.561 20:20:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:22:16.561 20:20:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:22:16.561 20:20:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_bAxFl.txt 00:22:16.561 20:20:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 64706 00:22:16.561 20:20:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@950 -- # '[' -z 64706 ']' 00:22:16.561 20:20:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # kill -0 64706 00:22:16.561 20:20:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@955 -- # uname 00:22:16.561 20:20:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:16.561 20:20:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 64706 00:22:16.561 killing process with pid 64706 00:22:16.561 20:20:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:16.561 20:20:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:16.561 20:20:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 64706' 00:22:16.561 20:20:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@969 -- # kill 64706 00:22:16.561 20:20:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@974 -- # wait 64706 00:22:19.092 20:20:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:22:19.092 20:20:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:22:19.092 00:22:19.092 real 0m5.710s 00:22:19.092 user 0m20.031s 
00:22:19.092 sys 0m0.582s 00:22:19.092 20:20:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:19.092 ************************************ 00:22:19.092 END TEST bdev_nvme_reset_stuck_adm_cmd 00:22:19.092 ************************************ 00:22:19.092 20:20:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:22:19.092 20:20:13 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:22:19.092 20:20:13 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:22:19.092 20:20:13 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:22:19.092 20:20:13 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:19.092 20:20:13 nvme -- common/autotest_common.sh@10 -- # set +x 00:22:19.092 ************************************ 00:22:19.092 START TEST nvme_fio 00:22:19.092 ************************************ 00:22:19.092 20:20:13 nvme.nvme_fio -- common/autotest_common.sh@1125 -- # nvme_fio_test 00:22:19.092 20:20:13 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:22:19.092 20:20:13 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:22:19.092 20:20:13 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:22:19.092 20:20:13 nvme.nvme_fio -- common/autotest_common.sh@1496 -- # bdfs=() 00:22:19.092 20:20:13 nvme.nvme_fio -- common/autotest_common.sh@1496 -- # local bdfs 00:22:19.092 20:20:13 nvme.nvme_fio -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:22:19.092 20:20:13 nvme.nvme_fio -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:22:19.092 20:20:13 nvme.nvme_fio -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:22:19.092 20:20:13 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:22:19.093 20:20:13 nvme.nvme_fio -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:22:19.093 20:20:13 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:22:19.093 20:20:13 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:22:19.093 20:20:13 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:22:19.093 20:20:13 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:22:19.093 20:20:13 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:22:19.093 20:20:14 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:22:19.093 20:20:14 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:22:19.351 20:20:14 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:22:19.351 20:20:14 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:22:19.351 20:20:14 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:22:19.351 20:20:14 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:22:19.351 20:20:14 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:19.351 
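Stepping back, the bdev_nvme_reset_stuck_adm_cmd flow that just finished boils down to: attach a controller, arm a one-shot error injection that holds an admin command for up to 15 seconds without submitting it, issue a GET FEATURES admin command that gets stuck, then reset the controller and verify from the captured completion (decoded from the base64 cpl in /tmp/err_inj_*.txt) that the command came back with the injected SCT/SC. A condensed sketch with flags copied from the logged rpc_cmd calls; the command payload placeholder is left unfilled on purpose:

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sudo $RPC bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0
  sudo $RPC bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 \
      --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit
  sudo $RPC bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c <base64-cmd> &  # hangs on purpose
  sudo $RPC bdev_nvme_reset_controller nvme0   # reset completes the stuck command
  sudo $RPC bdev_nvme_detach_controller nvme0

The log resumes below with the nvme_fio setup for the first device.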
20:20:14 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:22:19.351 20:20:14 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:22:19.351 20:20:14 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:22:19.351 20:20:14 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:22:19.351 20:20:14 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:22:19.351 20:20:14 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:22:19.351 20:20:14 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:22:19.351 20:20:14 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:22:19.351 20:20:14 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:22:19.351 20:20:14 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:22:19.351 20:20:14 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:22:19.351 20:20:14 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:22:19.351 20:20:14 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:22:19.351 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:22:19.351 fio-3.35 00:22:19.351 Starting 1 thread 00:22:27.456 00:22:27.456 test: (groupid=0, jobs=1): err= 0: pid=64884: Tue Oct 1 20:20:22 2024 00:22:27.456 read: IOPS=23.2k, BW=90.7MiB/s (95.2MB/s)(182MiB/2001msec) 00:22:27.456 slat (nsec): min=3387, max=66232, avg=5066.31, stdev=2013.03 00:22:27.457 clat (usec): min=232, max=7725, avg=2749.22, stdev=731.23 00:22:27.457 lat (usec): min=236, max=7763, avg=2754.28, stdev=732.43 00:22:27.457 clat percentiles (usec): 00:22:27.457 | 1.00th=[ 1958], 5.00th=[ 2311], 10.00th=[ 2343], 20.00th=[ 2376], 00:22:27.457 | 30.00th=[ 2409], 40.00th=[ 2442], 50.00th=[ 2474], 60.00th=[ 2540], 00:22:27.457 | 70.00th=[ 2606], 80.00th=[ 2835], 90.00th=[ 3720], 95.00th=[ 4424], 00:22:27.457 | 99.00th=[ 5800], 99.50th=[ 6128], 99.90th=[ 6783], 99.95th=[ 6849], 00:22:27.457 | 99.99th=[ 7504] 00:22:27.457 bw ( KiB/s): min=89920, max=95024, per=98.99%, avg=91992.00, stdev=2684.01, samples=3 00:22:27.457 iops : min=22480, max=23756, avg=22998.00, stdev=671.00, samples=3 00:22:27.457 write: IOPS=23.1k, BW=90.2MiB/s (94.6MB/s)(180MiB/2001msec); 0 zone resets 00:22:27.457 slat (nsec): min=3502, max=80003, avg=5348.91, stdev=2178.76 00:22:27.457 clat (usec): min=204, max=7588, avg=2756.35, stdev=736.14 00:22:27.457 lat (usec): min=208, max=7598, avg=2761.70, stdev=737.35 00:22:27.457 clat percentiles (usec): 00:22:27.457 | 1.00th=[ 1958], 5.00th=[ 2311], 10.00th=[ 2343], 20.00th=[ 2376], 00:22:27.457 | 30.00th=[ 2409], 40.00th=[ 2442], 50.00th=[ 2474], 60.00th=[ 2540], 00:22:27.457 | 70.00th=[ 2638], 80.00th=[ 2835], 90.00th=[ 3752], 95.00th=[ 4490], 00:22:27.457 | 99.00th=[ 5800], 99.50th=[ 6063], 99.90th=[ 6718], 99.95th=[ 6849], 00:22:27.457 | 99.99th=[ 7308] 00:22:27.457 bw ( KiB/s): min=89480, max=96256, per=99.70%, avg=92077.33, stdev=3654.31, samples=3 00:22:27.457 iops : min=22370, max=24064, avg=23019.33, stdev=913.58, samples=3 00:22:27.457 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 
1000=0.02% 00:22:27.457 lat (msec) : 2=1.10%, 4=91.58%, 10=7.27% 00:22:27.457 cpu : usr=99.10%, sys=0.10%, ctx=5, majf=0, minf=607 00:22:27.457 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:22:27.457 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:27.457 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:27.457 issued rwts: total=46487,46198,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:27.457 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:27.457 00:22:27.457 Run status group 0 (all jobs): 00:22:27.457 READ: bw=90.7MiB/s (95.2MB/s), 90.7MiB/s-90.7MiB/s (95.2MB/s-95.2MB/s), io=182MiB (190MB), run=2001-2001msec 00:22:27.457 WRITE: bw=90.2MiB/s (94.6MB/s), 90.2MiB/s-90.2MiB/s (94.6MB/s-94.6MB/s), io=180MiB (189MB), run=2001-2001msec 00:22:27.715 ----------------------------------------------------- 00:22:27.715 Suppressions used: 00:22:27.715 count bytes template 00:22:27.715 1 32 /usr/src/fio/parse.c 00:22:27.715 1 8 libtcmalloc_minimal.so 00:22:27.715 ----------------------------------------------------- 00:22:27.715 00:22:27.715 20:20:22 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:22:27.715 20:20:22 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:22:27.715 20:20:22 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:22:27.715 20:20:22 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:22:27.974 20:20:23 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:22:27.974 20:20:23 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:22:28.231 20:20:23 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:22:28.231 20:20:23 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:22:28.231 20:20:23 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:22:28.231 20:20:23 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:22:28.232 20:20:23 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:28.232 20:20:23 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:22:28.232 20:20:23 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:22:28.232 20:20:23 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:22:28.232 20:20:23 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:22:28.232 20:20:23 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:22:28.232 20:20:23 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:22:28.232 20:20:23 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:22:28.232 20:20:23 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:22:28.232 20:20:23 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:22:28.232 20:20:23 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:22:28.232 
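Each per-controller fio run goes through fio_plugin, which ldd-inspects the SPDK ioengine and, when it links libasan (the sanitizers array also lists libclang_rt.asan), preloads that runtime ahead of the plugin so ASan initializes before any instrumented code runs. A condensed sketch of that wrapper, using the paths from this run:

    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
    asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
    # the sanitizer runtime must load before the plugin it instruments
    LD_PRELOAD="${asan_lib:+$asan_lib }$plugin" /usr/src/fio/fio \
        /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio \
        '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096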
20:20:23 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:22:28.232 20:20:23 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:22:28.232 20:20:23 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:22:28.232 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:22:28.232 fio-3.35 00:22:28.232 Starting 1 thread 00:22:38.284 00:22:38.284 test: (groupid=0, jobs=1): err= 0: pid=64939: Tue Oct 1 20:20:32 2024 00:22:38.284 read: IOPS=23.5k, BW=91.8MiB/s (96.3MB/s)(184MiB/2001msec) 00:22:38.284 slat (usec): min=3, max=433, avg= 5.06, stdev= 2.80 00:22:38.284 clat (usec): min=205, max=12330, avg=2717.12, stdev=734.64 00:22:38.284 lat (usec): min=209, max=12392, avg=2722.18, stdev=735.81 00:22:38.284 clat percentiles (usec): 00:22:38.284 | 1.00th=[ 1663], 5.00th=[ 2278], 10.00th=[ 2343], 20.00th=[ 2376], 00:22:38.284 | 30.00th=[ 2409], 40.00th=[ 2442], 50.00th=[ 2474], 60.00th=[ 2507], 00:22:38.284 | 70.00th=[ 2606], 80.00th=[ 2769], 90.00th=[ 3720], 95.00th=[ 4293], 00:22:38.284 | 99.00th=[ 5604], 99.50th=[ 6325], 99.90th=[ 7373], 99.95th=[ 9241], 00:22:38.284 | 99.99th=[11994] 00:22:38.284 bw ( KiB/s): min=79712, max=98816, per=97.49%, avg=91690.67, stdev=10435.84, samples=3 00:22:38.284 iops : min=19928, max=24704, avg=22922.67, stdev=2608.96, samples=3 00:22:38.284 write: IOPS=23.3k, BW=91.2MiB/s (95.6MB/s)(183MiB/2001msec); 0 zone resets 00:22:38.284 slat (usec): min=3, max=100, avg= 5.32, stdev= 1.94 00:22:38.284 clat (usec): min=286, max=12042, avg=2723.90, stdev=741.41 00:22:38.284 lat (usec): min=291, max=12062, avg=2729.22, stdev=742.55 00:22:38.284 clat percentiles (usec): 00:22:38.284 | 1.00th=[ 1663], 5.00th=[ 2278], 10.00th=[ 2343], 20.00th=[ 2376], 00:22:38.284 | 30.00th=[ 2409], 40.00th=[ 2442], 50.00th=[ 2474], 60.00th=[ 2507], 00:22:38.284 | 70.00th=[ 2606], 80.00th=[ 2769], 90.00th=[ 3720], 95.00th=[ 4293], 00:22:38.284 | 99.00th=[ 5735], 99.50th=[ 6325], 99.90th=[ 7504], 99.95th=[ 9634], 00:22:38.284 | 99.99th=[11731] 00:22:38.284 bw ( KiB/s): min=79616, max=99624, per=98.27%, avg=91789.33, stdev=10686.35, samples=3 00:22:38.284 iops : min=19904, max=24906, avg=22947.33, stdev=2671.59, samples=3 00:22:38.284 lat (usec) : 250=0.01%, 500=0.01%, 750=0.02%, 1000=0.07% 00:22:38.284 lat (msec) : 2=2.21%, 4=90.15%, 10=7.49%, 20=0.04% 00:22:38.284 cpu : usr=99.15%, sys=0.05%, ctx=4, majf=0, minf=608 00:22:38.284 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:22:38.284 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:38.284 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:38.284 issued rwts: total=47047,46724,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:38.284 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:38.284 00:22:38.284 Run status group 0 (all jobs): 00:22:38.284 READ: bw=91.8MiB/s (96.3MB/s), 91.8MiB/s-91.8MiB/s (96.3MB/s-96.3MB/s), io=184MiB (193MB), run=2001-2001msec 00:22:38.284 WRITE: bw=91.2MiB/s (95.6MB/s), 91.2MiB/s-91.2MiB/s (95.6MB/s-95.6MB/s), io=183MiB (191MB), run=2001-2001msec 00:22:38.284 ----------------------------------------------------- 00:22:38.284 Suppressions used: 00:22:38.284 count bytes template 00:22:38.284 1 32 /usr/src/fio/parse.c 00:22:38.284 1 8 
libtcmalloc_minimal.so 00:22:38.284 ----------------------------------------------------- 00:22:38.284 00:22:38.284 20:20:32 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:22:38.284 20:20:32 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:22:38.284 20:20:32 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:22:38.284 20:20:32 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:22:38.284 20:20:32 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:22:38.284 20:20:32 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:22:38.284 20:20:32 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:22:38.284 20:20:32 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:22:38.284 20:20:32 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:22:38.284 20:20:32 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:22:38.284 20:20:32 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:38.284 20:20:32 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:22:38.284 20:20:32 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:22:38.284 20:20:32 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:22:38.284 20:20:32 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:22:38.284 20:20:32 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:22:38.284 20:20:32 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:22:38.284 20:20:32 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:22:38.284 20:20:32 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:22:38.284 20:20:32 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:22:38.284 20:20:32 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:22:38.284 20:20:32 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:22:38.284 20:20:32 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:22:38.284 20:20:32 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:22:38.284 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:22:38.284 fio-3.35 00:22:38.284 Starting 1 thread 00:22:44.848 00:22:44.848 test: (groupid=0, jobs=1): err= 0: pid=65000: Tue Oct 1 20:20:39 2024 00:22:44.848 read: IOPS=23.4k, BW=91.5MiB/s (95.9MB/s)(183MiB/2001msec) 00:22:44.848 slat (nsec): min=3385, max=81240, avg=5036.56, stdev=2161.97 00:22:44.848 clat (usec): min=782, max=9257, avg=2725.38, stdev=777.20 00:22:44.848 lat (usec): min=791, max=9327, avg=2730.42, stdev=778.39 00:22:44.848 clat percentiles (usec): 
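Before each fio job the test probes the controller twice with spdk_nvme_identify: once to confirm it exposes at least one active namespace, and once to check for extended-data-LBA formats before fixing --bs=4096. Roughly, with 0000:00:12.0 as the example (the non-4096 branch is a hypothetical value; every controller in this run lands on 4096):

    out=$(/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
        -r 'trtype:PCIe traddr:0000:00:12.0')
    grep -qE '^Namespace ID:[0-9]+' <<< "$out" || exit 0   # no namespace, skip
    if grep -q 'Extended Data LBA' <<< "$out"; then
        bs=4224    # hypothetical: LBA plus interleaved metadata
    else
        bs=4096    # the branch every controller takes in this run
    fi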
00:22:44.848 | 1.00th=[ 1795], 5.00th=[ 2147], 10.00th=[ 2278], 20.00th=[ 2376], 00:22:44.848 | 30.00th=[ 2442], 40.00th=[ 2474], 50.00th=[ 2507], 60.00th=[ 2540], 00:22:44.848 | 70.00th=[ 2606], 80.00th=[ 2769], 90.00th=[ 3294], 95.00th=[ 4621], 00:22:44.848 | 99.00th=[ 6128], 99.50th=[ 6521], 99.90th=[ 7439], 99.95th=[ 8094], 00:22:44.848 | 99.99th=[ 9110] 00:22:44.848 bw ( KiB/s): min=91088, max=94059, per=99.15%, avg=92881.00, stdev=1578.09, samples=3 00:22:44.848 iops : min=22772, max=23514, avg=23220.00, stdev=394.24, samples=3 00:22:44.848 write: IOPS=23.3k, BW=90.9MiB/s (95.3MB/s)(182MiB/2001msec); 0 zone resets 00:22:44.848 slat (nsec): min=3489, max=77308, avg=5309.67, stdev=2177.94 00:22:44.848 clat (usec): min=452, max=9188, avg=2735.10, stdev=792.75 00:22:44.848 lat (usec): min=462, max=9202, avg=2740.41, stdev=793.95 00:22:44.848 clat percentiles (usec): 00:22:44.848 | 1.00th=[ 1778], 5.00th=[ 2147], 10.00th=[ 2278], 20.00th=[ 2376], 00:22:44.848 | 30.00th=[ 2442], 40.00th=[ 2474], 50.00th=[ 2507], 60.00th=[ 2540], 00:22:44.848 | 70.00th=[ 2606], 80.00th=[ 2769], 90.00th=[ 3326], 95.00th=[ 4686], 00:22:44.848 | 99.00th=[ 6194], 99.50th=[ 6521], 99.90th=[ 7504], 99.95th=[ 8225], 00:22:44.848 | 99.99th=[ 8979] 00:22:44.848 bw ( KiB/s): min=90000, max=95128, per=99.91%, avg=93009.33, stdev=2677.51, samples=3 00:22:44.848 iops : min=22500, max=23782, avg=23252.33, stdev=669.38, samples=3 00:22:44.848 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.04% 00:22:44.848 lat (msec) : 2=1.88%, 4=90.75%, 10=7.32% 00:22:44.848 cpu : usr=99.10%, sys=0.10%, ctx=6, majf=0, minf=607 00:22:44.848 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:22:44.848 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:44.848 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:44.848 issued rwts: total=46862,46570,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:44.848 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:44.848 00:22:44.848 Run status group 0 (all jobs): 00:22:44.848 READ: bw=91.5MiB/s (95.9MB/s), 91.5MiB/s-91.5MiB/s (95.9MB/s-95.9MB/s), io=183MiB (192MB), run=2001-2001msec 00:22:44.848 WRITE: bw=90.9MiB/s (95.3MB/s), 90.9MiB/s-90.9MiB/s (95.3MB/s-95.3MB/s), io=182MiB (191MB), run=2001-2001msec 00:22:45.108 ----------------------------------------------------- 00:22:45.108 Suppressions used: 00:22:45.108 count bytes template 00:22:45.108 1 32 /usr/src/fio/parse.c 00:22:45.108 1 8 libtcmalloc_minimal.so 00:22:45.108 ----------------------------------------------------- 00:22:45.108 00:22:45.108 20:20:40 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:22:45.108 20:20:40 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:22:45.108 20:20:40 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:22:45.108 20:20:40 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:22:45.369 20:20:40 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:22:45.369 20:20:40 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:22:45.630 20:20:40 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:22:45.630 20:20:40 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:22:45.630 20:20:40 nvme.nvme_fio -- 
common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:22:45.630 20:20:40 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:22:45.630 20:20:40 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:45.630 20:20:40 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:22:45.630 20:20:40 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:22:45.630 20:20:40 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:22:45.630 20:20:40 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:22:45.630 20:20:40 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:22:45.630 20:20:40 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:22:45.630 20:20:40 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:22:45.630 20:20:40 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:22:45.630 20:20:40 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:22:45.630 20:20:40 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:22:45.630 20:20:40 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:22:45.630 20:20:40 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:22:45.630 20:20:40 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:22:45.630 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:22:45.630 fio-3.35 00:22:45.630 Starting 1 thread 00:23:07.642 00:23:07.642 test: (groupid=0, jobs=1): err= 0: pid=65062: Tue Oct 1 20:21:01 2024 00:23:07.642 read: IOPS=19.5k, BW=76.1MiB/s (79.8MB/s)(152MiB/2001msec) 00:23:07.642 slat (nsec): min=3345, max=80349, avg=5273.10, stdev=2268.62 00:23:07.642 clat (usec): min=259, max=99675, avg=3125.37, stdev=3264.22 00:23:07.642 lat (usec): min=264, max=99679, avg=3130.64, stdev=3264.54 00:23:07.642 clat percentiles (usec): 00:23:07.642 | 1.00th=[ 1876], 5.00th=[ 2180], 10.00th=[ 2311], 20.00th=[ 2442], 00:23:07.642 | 30.00th=[ 2507], 40.00th=[ 2606], 50.00th=[ 2704], 60.00th=[ 2802], 00:23:07.642 | 70.00th=[ 3064], 80.00th=[ 3458], 90.00th=[ 4146], 95.00th=[ 4948], 00:23:07.642 | 99.00th=[ 6521], 99.50th=[ 7177], 99.90th=[63701], 99.95th=[93848], 00:23:07.642 | 99.99th=[95945] 00:23:07.642 bw ( KiB/s): min=55952, max=95632, per=98.12%, avg=76477.33, stdev=19875.48, samples=3 00:23:07.642 iops : min=13988, max=23908, avg=19119.33, stdev=4968.87, samples=3 00:23:07.642 write: IOPS=19.4k, BW=76.0MiB/s (79.7MB/s)(152MiB/2001msec); 0 zone resets 00:23:07.642 slat (nsec): min=3425, max=86531, avg=5598.85, stdev=2322.31 00:23:07.642 clat (usec): min=293, max=101970, avg=3428.77, stdev=5728.70 00:23:07.642 lat (usec): min=298, max=101974, avg=3434.37, stdev=5728.88 00:23:07.642 clat percentiles (usec): 00:23:07.642 | 1.00th=[ 1958], 5.00th=[ 2212], 10.00th=[ 2343], 20.00th=[ 2442], 00:23:07.642 | 30.00th=[ 2540], 40.00th=[ 2606], 50.00th=[ 2704], 
60.00th=[ 2835], 00:23:07.642 | 70.00th=[ 3097], 80.00th=[ 3523], 90.00th=[ 4228], 95.00th=[ 5080], 00:23:07.642 | 99.00th=[ 7046], 99.50th=[ 16909], 99.90th=[ 99091], 99.95th=[100140], 00:23:07.642 | 99.99th=[102237] 00:23:07.642 bw ( KiB/s): min=56095, max=95488, per=98.47%, avg=76610.33, stdev=19747.50, samples=3 00:23:07.642 iops : min=14023, max=23872, avg=19152.33, stdev=4937.26, samples=3 00:23:07.642 lat (usec) : 500=0.02%, 750=0.01%, 1000=0.01% 00:23:07.642 lat (msec) : 2=1.54%, 4=86.64%, 10=11.35%, 20=0.10%, 100=0.29% 00:23:07.642 lat (msec) : 250=0.04% 00:23:07.642 cpu : usr=99.10%, sys=0.00%, ctx=4, majf=0, minf=605 00:23:07.642 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:23:07.642 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:07.642 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:07.642 issued rwts: total=38992,38919,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:07.642 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:07.642 00:23:07.642 Run status group 0 (all jobs): 00:23:07.642 READ: bw=76.1MiB/s (79.8MB/s), 76.1MiB/s-76.1MiB/s (79.8MB/s-79.8MB/s), io=152MiB (160MB), run=2001-2001msec 00:23:07.642 WRITE: bw=76.0MiB/s (79.7MB/s), 76.0MiB/s-76.0MiB/s (79.7MB/s-79.7MB/s), io=152MiB (159MB), run=2001-2001msec 00:23:07.642 ----------------------------------------------------- 00:23:07.642 Suppressions used: 00:23:07.642 count bytes template 00:23:07.642 1 32 /usr/src/fio/parse.c 00:23:07.642 1 8 libtcmalloc_minimal.so 00:23:07.642 ----------------------------------------------------- 00:23:07.642 00:23:07.642 ************************************ 00:23:07.642 END TEST nvme_fio 00:23:07.642 ************************************ 00:23:07.642 20:21:01 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:23:07.642 20:21:01 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:23:07.642 00:23:07.642 real 0m47.543s 00:23:07.642 user 0m40.339s 00:23:07.642 sys 0m9.427s 00:23:07.642 20:21:01 nvme.nvme_fio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:07.642 20:21:01 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:23:07.642 ************************************ 00:23:07.642 END TEST nvme 00:23:07.642 ************************************ 00:23:07.642 00:23:07.642 real 1m58.570s 00:23:07.642 user 4m5.589s 00:23:07.642 sys 0m20.068s 00:23:07.642 20:21:01 nvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:07.642 20:21:01 nvme -- common/autotest_common.sh@10 -- # set +x 00:23:07.642 20:21:01 -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]] 00:23:07.642 20:21:01 -- spdk/autotest.sh@217 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:23:07.642 20:21:01 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:23:07.642 20:21:01 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:07.642 20:21:01 -- common/autotest_common.sh@10 -- # set +x 00:23:07.642 ************************************ 00:23:07.642 START TEST nvme_scc 00:23:07.642 ************************************ 00:23:07.642 20:21:01 nvme_scc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:23:07.642 * Looking for test storage... 
00:23:07.642 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:23:07.642 20:21:01 nvme_scc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:23:07.642 20:21:01 nvme_scc -- common/autotest_common.sh@1681 -- # lcov --version 00:23:07.642 20:21:01 nvme_scc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:23:07.642 20:21:01 nvme_scc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:23:07.642 20:21:01 nvme_scc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:07.642 20:21:01 nvme_scc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:07.642 20:21:01 nvme_scc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:07.642 20:21:01 nvme_scc -- scripts/common.sh@336 -- # IFS=.-: 00:23:07.642 20:21:01 nvme_scc -- scripts/common.sh@336 -- # read -ra ver1 00:23:07.642 20:21:01 nvme_scc -- scripts/common.sh@337 -- # IFS=.-: 00:23:07.642 20:21:01 nvme_scc -- scripts/common.sh@337 -- # read -ra ver2 00:23:07.642 20:21:01 nvme_scc -- scripts/common.sh@338 -- # local 'op=<' 00:23:07.642 20:21:01 nvme_scc -- scripts/common.sh@340 -- # ver1_l=2 00:23:07.642 20:21:01 nvme_scc -- scripts/common.sh@341 -- # ver2_l=1 00:23:07.642 20:21:01 nvme_scc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:07.642 20:21:01 nvme_scc -- scripts/common.sh@344 -- # case "$op" in 00:23:07.642 20:21:01 nvme_scc -- scripts/common.sh@345 -- # : 1 00:23:07.642 20:21:01 nvme_scc -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:07.642 20:21:01 nvme_scc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:07.642 20:21:01 nvme_scc -- scripts/common.sh@365 -- # decimal 1 00:23:07.642 20:21:01 nvme_scc -- scripts/common.sh@353 -- # local d=1 00:23:07.642 20:21:01 nvme_scc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:07.642 20:21:01 nvme_scc -- scripts/common.sh@355 -- # echo 1 00:23:07.642 20:21:01 nvme_scc -- scripts/common.sh@365 -- # ver1[v]=1 00:23:07.642 20:21:01 nvme_scc -- scripts/common.sh@366 -- # decimal 2 00:23:07.642 20:21:01 nvme_scc -- scripts/common.sh@353 -- # local d=2 00:23:07.642 20:21:01 nvme_scc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:07.642 20:21:01 nvme_scc -- scripts/common.sh@355 -- # echo 2 00:23:07.642 20:21:01 nvme_scc -- scripts/common.sh@366 -- # ver2[v]=2 00:23:07.642 20:21:01 nvme_scc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:07.642 20:21:01 nvme_scc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:07.642 20:21:01 nvme_scc -- scripts/common.sh@368 -- # return 0 00:23:07.642 20:21:01 nvme_scc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:07.642 20:21:01 nvme_scc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:23:07.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:07.642 --rc genhtml_branch_coverage=1 00:23:07.642 --rc genhtml_function_coverage=1 00:23:07.642 --rc genhtml_legend=1 00:23:07.642 --rc geninfo_all_blocks=1 00:23:07.642 --rc geninfo_unexecuted_blocks=1 00:23:07.642 00:23:07.642 ' 00:23:07.642 20:21:01 nvme_scc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:23:07.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:07.642 --rc genhtml_branch_coverage=1 00:23:07.642 --rc genhtml_function_coverage=1 00:23:07.642 --rc genhtml_legend=1 00:23:07.642 --rc geninfo_all_blocks=1 00:23:07.642 --rc geninfo_unexecuted_blocks=1 00:23:07.642 00:23:07.642 ' 00:23:07.642 20:21:01 nvme_scc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 
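The preamble above is scripts/common.sh deciding whether the installed lcov (1.15) predates version 2: both strings are split on ., -, and :, then compared component by component as decimals. A compact equivalent of that comparison, assuming purely numeric components:

    version_lt() {   # returns 0 when $1 < $2
        local -a v1 v2
        IFS='.-:' read -ra v1 <<< "$1"
        IFS='.-:' read -ra v2 <<< "$2"
        local i max=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for ((i = 0; i < max; i++)); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1   # equal versions are not "less than"
    }
    version_lt 1.15 2 && echo "lcov 1.15 is older than 2"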
00:23:07.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:07.642 --rc genhtml_branch_coverage=1 00:23:07.642 --rc genhtml_function_coverage=1 00:23:07.642 --rc genhtml_legend=1 00:23:07.642 --rc geninfo_all_blocks=1 00:23:07.642 --rc geninfo_unexecuted_blocks=1 00:23:07.642 00:23:07.642 ' 00:23:07.642 20:21:01 nvme_scc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:23:07.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:07.642 --rc genhtml_branch_coverage=1 00:23:07.642 --rc genhtml_function_coverage=1 00:23:07.642 --rc genhtml_legend=1 00:23:07.642 --rc geninfo_all_blocks=1 00:23:07.642 --rc geninfo_unexecuted_blocks=1 00:23:07.642 00:23:07.642 ' 00:23:07.642 20:21:01 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:23:07.642 20:21:01 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:23:07.642 20:21:01 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:23:07.642 20:21:01 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:23:07.642 20:21:01 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:07.642 20:21:01 nvme_scc -- scripts/common.sh@15 -- # shopt -s extglob 00:23:07.642 20:21:01 nvme_scc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:07.642 20:21:01 nvme_scc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:07.642 20:21:01 nvme_scc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:07.642 20:21:01 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:07.642 20:21:01 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:07.642 20:21:01 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:07.642 20:21:01 nvme_scc -- paths/export.sh@5 -- # export PATH 00:23:07.642 20:21:01 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:23:07.642 20:21:01 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:23:07.642 20:21:01 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:23:07.642 20:21:01 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:23:07.642 20:21:01 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:23:07.642 20:21:01 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:23:07.642 20:21:01 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:23:07.642 20:21:01 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:23:07.642 20:21:01 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:23:07.642 20:21:01 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:23:07.642 20:21:01 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:07.642 20:21:01 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:23:07.642 20:21:01 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:23:07.642 20:21:01 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:23:07.642 20:21:01 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:23:07.642 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:07.642 Waiting for block devices as requested 00:23:07.642 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:23:07.642 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:23:07.642 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:23:07.642 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:23:12.912 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:23:12.912 20:21:07 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:23:12.912 20:21:07 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:23:12.912 20:21:07 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:23:12.912 20:21:07 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:23:12.912 20:21:07 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:23:12.912 20:21:07 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:23:12.912 20:21:07 nvme_scc -- scripts/common.sh@18 -- # local i 00:23:12.912 20:21:07 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:23:12.912 20:21:07 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:23:12.912 20:21:07 nvme_scc -- scripts/common.sh@27 -- # return 0 00:23:12.912 20:21:07 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:23:12.912 20:21:07 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:23:12.912 20:21:07 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:23:12.912 20:21:07 nvme_scc -- nvme/functions.sh@18 -- # shift 00:23:12.912 20:21:07 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:23:12.912 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.912 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.912 20:21:07 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:23:12.912 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:23:12.912 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.912 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.912 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:23:12.912 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:23:12.912 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 
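From here the log is nvme_get expanding `/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0` into the nvme0 associative array, one register per eval block. Stripped of the eval indirection (which the real function needs so the caller can choose the array name), the loop reduces to a sketch like this:

    declare -A nvme0
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}    # id-ctrl pads the key with spaces
        [[ -n $reg && -n $val ]] || continue
        nvme0[$reg]=${val# }        # drop the space after the colon
    done < <(/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0)

    printf 'vid=%s mdts=%s\n' "${nvme0[vid]}" "${nvme0[mdts]}"   # vid=0x1b36 mdts=7

On this QEMU controller that yields the values visible in the dump that follows: vid 0x1b36, serial 12341, mdts 7, and so on.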
00:23:12.912 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.912 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.912 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:23:12.912 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:23:12.912 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:23:12.912 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.912 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.912 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:23:12.912 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:23:12.912 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:23:12.912 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.912 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.912 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:23:12.912 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:23:12.912 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:23:12.912 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.912 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.912 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:23:12.912 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:23:12.912 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:23:12.912 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.912 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.912 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:23:12.912 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:23:12.912 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:23:12.912 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.912 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.912 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:23:12.912 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:23:12.912 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:23:12.912 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.912 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.912 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.912 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:23:12.912 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:23:12.912 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.912 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.912 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:23:12.912 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:23:12.912 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:23:12.912 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.912 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.912 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.912 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:23:12.912 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:23:12.912 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.912 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:23:12.912 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:23:12.912 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:23:12.912 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:23:12.912 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.912 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.912 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.912 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:23:12.912 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:23:12.912 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.912 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.912 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.912 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:23:12.912 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:23:12.912 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.912 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.912 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:23:12.912 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:23:12.912 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:23:12.912 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.912 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.912 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:23:12.912 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:23:12.912 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:23:12.912 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.912 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.912 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.912 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:23:12.913 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:23:12.913 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.913 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.913 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:23:12.913 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:23:12.913 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:23:12.913 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.913 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.913 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:23:12.913 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:23:12.913 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:23:12.913 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.913 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.913 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.913 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:23:12.913 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:23:12.913 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.913 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.913 20:21:07 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.913 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:23:12.913 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:23:12.913 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.913 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.913 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.913 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:23:12.913 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:23:12.913 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.913 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.913 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.913 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:23:12.913 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:23:12.913 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.913 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.913 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.913 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:23:12.913 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:23:12.913 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.913 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.913 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.913 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:23:12.913 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:23:12.913 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.913 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.913 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:23:12.913 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:23:12.913 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:23:12.913 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.913 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.913 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:23:12.913 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:23:12.913 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:23:12.913 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.913 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.913 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:23:12.913 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:23:12.913 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:23:12.913 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.913 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.913 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:23:12.913 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:23:12.913 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:23:12.913 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.913 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.913 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:23:12.913 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:23:12.913 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 
00:23:12.913 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.913 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.913 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.913 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:23:12.913 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:23:12.913 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.913 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.913 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.913 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:23:12.913 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:23:12.913 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.913 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.913 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.913 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:23:12.913 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:23:12.913 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.913 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.913 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.913 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:23:12.913 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:23:12.913 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.913 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.913 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:23:12.913 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:23:12.913 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:23:12.913 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.913 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.913 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:23:12.913 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:23:12.913 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:23:12.913 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.913 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.913 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.913 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:23:12.913 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:23:12.913 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.913 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.913 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.913 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:23:12.913 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:23:12.913 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.913 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.913 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.913 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:23:12.913 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:23:12.913 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.913 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.913 20:21:07 nvme_scc -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:23:12.913 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:23:12.913 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:23:12.913 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.913 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.913 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.913 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:23:12.913 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:23:12.913 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.913 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.913 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.913 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:23:12.913 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:23:12.913 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.913 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.913 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.913 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:23:12.913 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:23:12.913 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.913 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.913 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.913 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:23:12.913 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:23:12.913 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.913 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.913 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.913 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:23:12.913 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:23:12.913 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.913 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.913 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.913 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:23:12.913 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:23:12.913 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.913 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.913 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.913 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:23:12.913 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:23:12.913 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.913 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.913 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.913 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:23:12.913 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:23:12.914 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.914 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.914 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.914 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:23:12.914 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:23:12.914 20:21:07 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:23:12.914 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.914 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.914 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:23:12.914 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:23:12.914 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.914 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.914 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.914 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:23:12.914 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:23:12.914 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.914 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.914 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.914 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:23:12.914 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:23:12.914 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.914 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.914 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.914 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:23:12.914 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:23:12.914 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.914 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.914 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.914 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:23:12.914 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:23:12.914 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.914 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.914 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.914 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:23:12.914 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:23:12.914 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.914 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.914 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.914 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:23:12.914 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:23:12.914 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.914 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.914 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.914 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:23:12.914 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:23:12.914 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.914 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.914 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.914 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:23:12.914 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:23:12.914 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.914 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.914 20:21:07 nvme_scc -- 
00:23:12.914 20:21:07 nvme_scc -- nvme/functions.sh -- nvme_get nvme0 id-ctrl /dev/nvme0 (remaining fields):
00:23:12.914     pels=0 domainid=0 megcap=0 sqes=0x66 cqes=0x44 maxcmd=0 nn=256 oncs=0x15d fuses=0 fna=0
00:23:12.914     vwc=0x7 awun=0 awupf=0 icsvscc=0 nwpc=0 acwu=0 ocfs=0x3 sgls=0x1 mnan=0 maxdna=0 maxcna=0
00:23:12.915     subnqn=nqn.2019-08.org.qemu:12341 ioccsz=0 iorcsz=0 icdoff=0 fcatt=0 msdbd=0 ofcs=0
00:23:12.915     ps0='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0'
00:23:12.915     rwt='0 rwl:0 idle_power:- active_power:-' active_power_workload=-
00:23:12.915 20:21:07 nvme_scc -- nvme/functions.sh -- namespace scan: /sys/class/nvme/nvme0/nvme0n1 found; nvme_get nvme0n1 id-ns /dev/nvme0n1:
00:23:12.915     nsze=0x140000 ncap=0x140000 nuse=0x140000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0
00:23:12.915     nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0
00:23:12.916     nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0
00:23:12.916     nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
00:23:12.916     lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0'
00:23:12.916     lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0'
00:23:12.916 20:21:07 nvme_scc -- nvme/functions.sh -- registered: _ctrl_ns[1]=nvme0n1 ctrls[nvme0]=nvme0 nvmes[nvme0]=nvme0_ns bdfs[nvme0]=0000:00:11.0 ordered_ctrls[0]=nvme0
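For readability: the xtrace above is the per-field expansion of the nvme_get helper in test/nvme/functions.sh, which shells out to nvme-cli and caches every "reg : val" line of id-ctrl/id-ns output in a global associative array named after the device. A minimal sketch of that loop, reconstructed from the traced statements (functions.sh@16-23) rather than copied verbatim from the real script:

    nvme_get() {
        local ref=$1 reg val
        shift
        local -gA "$ref=()"                     # e.g. declare a global assoc array nvme0=()

        while IFS=: read -r reg val; do
            reg=${reg// /} val=${val# }         # trim the column padding around "reg : val"
            [[ -n $val ]] || continue           # skip banner/blank lines
            eval "${ref}[\$reg]=\"\$val\""      # nvme0[vid]="0x1b36", nvme0[sn]="12340 ", ...
        done < <(/usr/local/src/nvme-cli/nvme "$@")
    }

Usage mirrors the trace: nvme_get nvme1 id-ctrl /dev/nvme1 populates the global nvme1 array, so later test code can read e.g. ${nvme1[oncs]} without re-invoking nvme-cli.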
00:23:12.916 20:21:07 nvme_scc -- nvme/functions.sh -- next controller: /sys/class/nvme/nvme1 (pci 0000:00:10.0, pci_can_use ok); nvme_get nvme1 id-ctrl /dev/nvme1:
00:23:12.917     vid=0x1b36 ssvid=0x1af4 sn='12340 ' mn='QEMU NVMe Ctrl ' fr='8.0.0 ' rab=6 ieee=525400
00:23:12.917     cmic=0 mdts=7 cntlid=0 ver=0x10400 rtd3r=0 rtd3e=0 oaes=0x100 ctratt=0x8000 rrls=0 cntrltype=1
00:23:12.917     fguid=00000000-0000-0000-0000-000000000000 crdt1=0 crdt2=0 crdt3=0 nvmsr=0 vwci=0 mec=0
00:23:12.918     oacs=0x12a acl=3 aerl=3 frmw=0x3 lpa=0x7 elpe=0 npss=0 avscc=0 apsta=0 wctemp=343 cctemp=373
00:23:12.918     mtfa=0 hmpre=0 hmmin=0 tnvmcap=0 unvmcap=0 rpmbs=0 edstt=0 dsto=0 fwug=0 kas=0 hctma=0
00:23:12.918     mntmt=0 mxtmt=0 sanicap=0 hmminds=0 hmmaxd=0 nsetidmax=0 endgidmax=0 anatt=0 anacap=0
00:23:12.919     anagrpmax=0 nanagrpid=0 pels=0 domainid=0 megcap=0 sqes=0x66 cqes=0x44 maxcmd=0 nn=256
00:23:12.919     oncs=0x15d fuses=0 fna=0 vwc=0x7 awun=0 awupf=0 icsvscc=0 nwpc=0 acwu=0 ocfs=0x3 sgls=0x1
00:23:12.919     mnan=0 maxdna=0 maxcna=0 subnqn=nqn.2019-08.org.qemu:12340 ioccsz=0 iorcsz=0 icdoff=0
00:23:12.919     fcatt=0 msdbd=0 ofcs=0 ps0='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0'
00:23:12.920     rwt='0 rwl:0 idle_power:- active_power:-' active_power_workload=-
00:23:12.920 20:21:07 nvme_scc -- nvme/functions.sh -- namespace scan: /sys/class/nvme/nvme1/nvme1n1 found; nvme_get nvme1n1 id-ns /dev/nvme1n1:
00:23:12.920     nsze=0x17a17a ncap=0x17a17a nuse=0x17a17a nsfeat=0x14 nlbaf=7 flbas=0x7 mc=0x3 dpc=0x1f
00:23:12.920     dps=0 nmic=0 rescap=0 ...
nvme/functions.sh@21 -- # read -r reg val 00:23:12.920 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.920 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:23:12.920 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:23:12.920 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.920 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.920 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:23:12.920 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:23:12.920 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:23:12.920 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.920 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.920 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.920 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:23:12.920 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:23:12.920 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.920 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.920 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.920 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:23:12.920 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:23:12.920 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.920 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.920 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.920 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:23:12.920 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:23:12.920 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.920 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.920 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.920 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:23:12.920 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:23:12.920 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.920 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.920 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.920 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:23:12.920 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:23:12.920 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.920 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.920 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.920 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:23:12.920 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:23:12.920 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.920 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.920 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.920 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:23:12.920 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:23:12.920 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.920 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.920 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.920 20:21:07 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'nvme1n1[nvmcap]="0"' 00:23:12.920 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:23:12.920 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.920 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.920 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.920 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:23:12.920 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:23:12.920 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.920 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.920 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.920 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:23:12.920 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:23:12.920 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.920 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.920 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.920 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:23:12.920 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:23:12.920 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.920 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.920 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.920 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:23:12.920 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:23:12.920 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.920 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.920 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.920 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:23:12.920 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:23:12.920 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.920 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.920 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:23:12.920 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:23:12.920 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:23:12.920 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.920 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.920 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:23:12.920 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:23:12.920 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:23:12.920 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.920 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.921 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:23:12.921 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:23:12.921 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:23:12.921 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.921 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.921 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.921 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:23:12.921 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:23:12.921 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:23:12.921 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.921 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.921 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:23:12.921 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:23:12.921 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.921 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.921 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.921 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:23:12.921 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:23:12.921 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.921 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.921 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.921 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:23:12.921 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:23:12.921 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.921 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.921 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.921 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:23:12.921 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:23:12.921 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.921 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.921 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:23:12.921 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:23:12.921 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:23:12.921 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.921 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.921 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:23:12.921 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:23:12.921 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:23:12.921 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.921 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.921 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:23:12.921 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:23:12.921 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:23:12.921 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.921 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.921 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:23:12.921 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:23:12.921 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:23:12.921 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.921 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.921 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:23:12.921 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:23:12.921 
20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:23:12.921 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.921 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.921 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:23:12.921 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:23:12.921 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:23:12.921 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.921 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.921 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:23:12.921 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:23:12.921 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:23:12.921 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.921 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.921 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:23:12.921 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:23:12.921 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:23:12.921 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.921 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.921 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:23:12.921 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:23:12.921 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:23:12.921 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.921 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.921 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:23:12.921 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:23:12.921 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:23:12.921 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.921 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.921 20:21:07 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:23:12.921 20:21:07 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:23:12.921 20:21:07 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:23:12.921 20:21:07 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:23:12.921 20:21:07 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:23:12.921 20:21:07 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:23:12.921 20:21:07 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:23:12.921 20:21:07 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:23:12.921 20:21:07 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:23:12.921 20:21:07 nvme_scc -- scripts/common.sh@18 -- # local i 00:23:12.921 20:21:07 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:23:12.921 20:21:07 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:23:12.921 20:21:07 nvme_scc -- scripts/common.sh@27 -- # return 0 00:23:12.921 20:21:07 nvme_scc -- 
nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:23:12.921 20:21:07 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:23:12.921 20:21:07 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:23:12.921 20:21:07 nvme_scc -- nvme/functions.sh@18 -- # shift 00:23:12.921 20:21:07 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:23:12.921 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.921 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.921 20:21:07 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:23:12.921 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:23:12.921 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.921 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.921 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:23:12.921 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:23:12.921 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:23:12.921 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.921 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.921 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:23:12.921 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:23:12.921 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:23:12.921 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.921 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.921 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:23:12.921 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:23:12.921 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:23:12.921 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.921 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.921 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:23:12.921 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:23:12.921 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:23:12.921 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.921 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.921 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:23:12.922 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:23:12.922 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:23:12.922 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.922 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.922 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:23:12.922 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:23:12.922 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:23:12.922 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.922 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.922 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:23:12.922 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:23:12.922 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:23:12.922 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.922 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.922 20:21:07 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.922 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:23:12.922 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:23:12.922 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.922 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.922 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:23:12.922 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:23:12.922 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:23:12.922 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.922 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.922 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.922 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:23:12.922 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:23:12.922 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.922 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.922 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:23:12.922 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:23:12.922 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:23:12.922 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.922 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.922 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.922 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:23:12.922 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:23:12.922 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.922 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.922 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.922 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:23:12.922 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:23:12.922 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.922 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.922 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:23:12.922 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:23:12.922 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:23:12.922 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.922 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.922 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:23:12.922 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:23:12.922 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:23:12.922 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.922 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.922 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.922 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:23:12.922 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:23:12.922 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.922 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.922 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:23:12.922 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:23:12.922 20:21:07 nvme_scc 
-- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:23:12.922 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.922 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.922 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:23:12.922 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:23:12.922 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:23:12.922 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.922 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.922 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.922 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:23:12.922 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:23:12.922 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.922 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.922 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.922 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:23:12.922 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:23:12.922 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.922 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.922 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.922 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:23:12.922 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:23:12.922 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.922 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.922 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.922 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:23:12.922 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:23:12.922 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.922 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.922 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.922 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:23:12.922 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:23:12.922 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.922 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.922 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.922 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:23:12.922 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:23:12.922 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.922 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.922 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:23:12.922 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:23:12.922 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:23:12.922 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.922 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.922 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:23:12.922 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:23:12.922 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:23:12.922 20:21:07 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:23:12.922 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.922 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:23:12.922 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:23:12.922 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:23:12.922 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.922 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.922 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:23:12.922 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:23:12.922 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:23:12.922 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.922 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.922 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:23:12.922 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:23:12.922 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:23:12.922 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.922 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.922 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.922 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:23:12.922 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:23:12.922 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.922 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.922 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.922 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:23:12.922 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:23:12.922 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.922 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.922 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.922 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:23:12.922 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:23:12.922 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.922 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.922 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.922 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:23:12.922 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:23:12.922 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.922 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.922 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:23:12.922 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:23:12.922 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:23:12.922 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.922 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.922 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:23:12.922 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:23:12.922 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:23:12.922 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.922 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.922 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
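Stepping back: the enumeration that produced the nvme1 blocks above and the nvme2 parse now in progress is the for-loop at functions.sh@47-63, with pci_can_use from scripts/common.sh filtering controllers by PCI address (0000:00:10.0 for nvme1, 0000:00:12.0 for nvme2). A condensed sketch under stated assumptions - the array names and the namespace glob match the trace, but the readlink-based PCI resolution and the pci_can_use stub are stand-ins:

    declare -A ctrls nvmes bdfs
    declare -a ordered_ctrls
    pci_can_use() { return 0; }          # stub; the real check honors PCI allow/block lists

    for ctrl in /sys/class/nvme/nvme*; do
        [[ -e $ctrl ]] || continue
        pci=$(basename "$(readlink -f "$ctrl/device")")   # assumption: e.g. 0000:00:12.0
        pci_can_use "$pci" || continue
        ctrl_dev=${ctrl##*/}                              # nvme1, nvme2, ...
        # nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev" runs here (the parse traced above)
        for ns in "$ctrl/${ctrl##*/}n"*; do               # nvme1n1, nvme2n1, ...
            [[ -e $ns ]] || continue
            # nvme_get "${ns##*/}" id-ns "/dev/${ns##*/}" runs here;
            # a nameref (_ctrl_ns, functions.sh@53/@58) records the namespace
        done
        ctrls["$ctrl_dev"]=$ctrl_dev
        nvmes["$ctrl_dev"]=${ctrl_dev}_ns
        bdfs["$ctrl_dev"]=$pci
        ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev        # indexed by controller number
    done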
00:23:12.922 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:23:12.922 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:23:12.922 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.923 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.923 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.923 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:23:12.923 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:23:12.923 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.923 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.923 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.923 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:23:12.923 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:23:12.923 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.923 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.923 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.923 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:23:12.923 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:23:12.923 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.923 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.923 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.923 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:23:12.923 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:23:12.923 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.923 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.923 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.923 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:23:12.923 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:23:12.923 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.923 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.923 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.923 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:23:12.923 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:23:12.923 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.923 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.923 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.923 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:23:12.923 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:23:12.923 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.923 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.923 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.923 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:23:12.923 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:23:12.923 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.923 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.923 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.923 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:23:12.923 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:23:12.923 20:21:07 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:23:12.923 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.923 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.923 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:23:12.923 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:23:12.923 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.923 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.923 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.923 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:23:12.923 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:23:12.923 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.923 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.923 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.923 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:23:12.923 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:23:12.923 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.923 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.923 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.923 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:23:12.923 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:23:12.923 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.923 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.923 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.923 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:23:12.923 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:23:12.923 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.923 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.923 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.923 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:23:12.923 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:23:12.923 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.923 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.923 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.923 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:23:12.923 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:23:12.923 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.923 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.923 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.923 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:23:12.923 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:23:12.923 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.923 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.923 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.923 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:23:12.923 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:23:12.923 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.923 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.923 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
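One easy-to-misread pair in the nvme2 dump above: wctemp=343 and cctemp=373 are Kelvin, as the NVMe spec defines the composite temperature thresholds, so QEMU's defaults decode to the usual 70C warning / 100C critical. A throwaway check, assuming the nvme2 array built by the trace:

    k_to_c() { echo $(($1 - 273)); }               # integer Kelvin per the spec
    echo "WCTEMP: $(k_to_c "${nvme2[wctemp]}")C"   # 343 -> 70
    echo "CCTEMP: $(k_to_c "${nvme2[cctemp]}")C"   # 373 -> 100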
00:23:12.923 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:23:12.923 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:23:12.923 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.923 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.923 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.923 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:23:12.923 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:23:12.923 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.923 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.923 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.923 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:23:12.923 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:23:12.923 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.923 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.923 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.923 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:23:12.923 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:23:12.923 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.923 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.923 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.923 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:23:12.923 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:23:12.923 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.923 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.923 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.923 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:23:12.923 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:23:12.923 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.923 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.923 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:23:12.923 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:23:12.923 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:23:12.923 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.923 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.923 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:23:12.923 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:23:12.923 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:23:12.923 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.923 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.923 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.923 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:23:12.923 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:23:12.923 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.923 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.923 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:23:12.923 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:23:12.923 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nn]=256 
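The sqes=0x66 and cqes=0x44 entries just above pack two power-of-two entry sizes into one byte each (low nibble = required size, high nibble = maximum), i.e. the standard 64-byte submission and 16-byte completion queue entries. Decoding them from the array the trace built:

    sqes=$((nvme2[sqes]))   # 0x66
    cqes=$((nvme2[cqes]))   # 0x44
    echo "SQE: min $((1 << (sqes & 0xf)))B max $((1 << (sqes >> 4)))B"   # 64B / 64B
    echo "CQE: min $((1 << (cqes & 0xf)))B max $((1 << (cqes >> 4)))B"   # 16B / 16B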
00:23:12.923 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.923 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.923 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:23:12.923 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:23:12.923 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:23:12.923 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.923 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.923 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.923 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:23:12.923 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:23:12.923 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.923 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.923 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.923 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:23:12.923 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:23:12.923 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.923 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.923 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:23:12.923 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:23:12.923 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:23:12.924 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.924 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.924 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.924 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:23:12.924 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:23:12.924 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.924 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.924 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.924 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:23:12.924 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:23:12.924 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.924 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.924 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.924 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:23:12.924 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:23:12.924 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.924 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.924 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.924 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:23:12.924 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:23:12.924 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.924 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.924 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.924 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:23:12.924 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:23:12.924 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.924 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.924 20:21:07 nvme_scc -- nvme/functions.sh@22 -- 
# [[ -n 0x3 ]] 00:23:12.924 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:23:12.924 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:23:12.924 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.924 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.924 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:23:12.924 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:23:12.924 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:23:12.924 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.924 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.924 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.924 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:23:12.924 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:23:12.924 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.924 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.924 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.924 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:23:12.924 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:23:12.924 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.924 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.924 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.924 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:23:12.924 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:23:12.924 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.924 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.924 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:23:12.924 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:23:12.924 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:23:12.924 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.924 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.924 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.924 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:23:12.924 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:23:12.924 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.924 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.924 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.924 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:23:12.924 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:23:12.924 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.924 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.924 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.924 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:23:12.924 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:23:12.924 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.924 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.924 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.924 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:23:12.924 
20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:23:12.924 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.924 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.924 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.924 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:23:12.924 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:23:12.924 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.924 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.924 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.924 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:23:12.924 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:23:12.924 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.924 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.924 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:23:12.924 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:23:12.924 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:23:12.924 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.924 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.924 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:23:12.924 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:23:12.924 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:23:12.924 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.924 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.924 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:23:12.924 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:23:12.924 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:23:12.924 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.924 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.924 20:21:07 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:23:12.924 20:21:07 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:23:12.924 20:21:07 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:23:12.924 20:21:07 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:23:12.924 20:21:07 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:23:12.924 20:21:07 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:23:12.924 20:21:07 nvme_scc -- nvme/functions.sh@18 -- # shift 00:23:12.924 20:21:07 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:23:12.924 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.924 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.924 20:21:07 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:23:12.924 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:23:12.924 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.924 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.924 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 
0x100000 ]] 00:23:12.924 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:23:12.924 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:23:12.924 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.924 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.924 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:23:12.924 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:23:12.924 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:23:12.924 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.924 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.924 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:23:12.924 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:23:12.924 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:23:12.924 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.924 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.924 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:23:12.924 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:23:12.924 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:23:12.924 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.924 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.924 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:23:12.924 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:23:12.924 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:23:12.924 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.924 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.924 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:23:12.924 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:23:12.924 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:23:12.924 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.924 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.924 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:23:12.924 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:23:12.924 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:23:12.924 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.924 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.924 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:23:12.924 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:23:12.924 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:23:12.925 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.925 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.925 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.925 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:23:12.925 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:23:12.925 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.925 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.925 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.925 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 
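Worth noting for this particular run: the suite is nvme_scc, i.e. the Simple Copy tests, and the capability bits it cares about were parsed a few entries up - oncs=0x15d has bit 8 set (Copy command supported, NVMe 2.0) and ocfs=0x3 advertises the supported copy formats. A sketch of the gate such a test would presumably apply, using the arrays built above:

    # oncs bit 8 = Copy supported (NVMe 2.0); 0x15d = 0b101011101 -> bit 8 set
    if (( (nvme2[oncs] >> 8) & 1 )); then
        echo "nvme2 supports Simple Copy (copy formats: ${nvme2[ocfs]})"
    else
        echo "nvme2 lacks Simple Copy; the test would skip this controller"
    fi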
00:23:12.925 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:23:12.925 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.925 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.925 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.925 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:23:12.925 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:23:12.925 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.925 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.925 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.925 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:23:12.925 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:23:12.925 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.925 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.925 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:23:12.925 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:23:12.925 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:23:12.925 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.925 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.925 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.925 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:23:12.925 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:23:12.925 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.925 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.925 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.925 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:23:12.925 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:23:12.925 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.925 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.925 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.925 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:23:12.925 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:23:12.925 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.925 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.925 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.925 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:23:12.925 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:23:12.925 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.925 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.925 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.925 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:23:12.925 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:23:12.925 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.925 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.925 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.925 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:23:12.925 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:23:12.925 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.925 20:21:07 
nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.925 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.925 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:23:12.925 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:23:12.925 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.925 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.925 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.925 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:23:12.925 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:23:12.925 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.925 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.925 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.925 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:23:12.925 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:23:12.925 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.925 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.925 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.925 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:23:12.925 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:23:12.925 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.925 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.925 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.925 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:23:12.925 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:23:12.925 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.925 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.925 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.925 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:23:12.925 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:23:12.925 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.925 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.925 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.925 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:23:12.925 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:23:12.925 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.925 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.925 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:23:12.925 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:23:12.925 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:23:12.925 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.925 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.925 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:23:12.925 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:23:12.925 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:23:12.925 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.925 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.925 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:23:12.925 20:21:07 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:23:12.925 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:23:12.925 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.925 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.925 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.925 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:23:12.925 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:23:12.925 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.925 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.925 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.925 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:23:12.925 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:23:12.925 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.925 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.925 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.925 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:23:12.925 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:23:12.925 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.925 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.925 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.925 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:23:12.925 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:23:12.925 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.925 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.925 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.925 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:23:12.925 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:23:12.925 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.925 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.925 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:23:12.925 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:23:12.925 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:23:12.925 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.925 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.925 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:23:12.925 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:23:12.925 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:23:12.925 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.925 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.925 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:23:12.925 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:23:12.925 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:23:12.925 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.925 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.925 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 
ms:8 lbads:9 rp:0 ]] 00:23:12.925 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:23:12.925 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:23:12.925 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.926 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.926 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:23:12.926 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:23:12.926 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:23:12.926 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.926 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.926 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:23:12.926 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:23:12.926 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:23:12.926 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.926 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.926 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:23:12.926 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:23:12.926 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:23:12.926 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.926 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.926 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:23:12.926 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:23:12.926 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:23:12.926 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.926 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.926 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:23:12.926 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:23:12.926 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:23:12.926 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.926 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.926 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:23:12.926 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:23:12.926 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:23:12.926 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.926 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.926 20:21:07 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:23:12.926 20:21:07 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:23:12.926 20:21:07 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:23:12.926 20:21:07 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:23:12.926 20:21:07 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:23:12.926 20:21:07 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:23:12.926 20:21:07 nvme_scc -- 
nvme/functions.sh@18 -- # shift 00:23:12.926 20:21:07 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:23:12.926 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.926 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.926 20:21:07 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:23:12.926 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:23:12.926 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.926 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.926 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:23:12.926 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:23:12.926 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:23:12.926 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.926 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.926 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:23:12.926 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:23:12.926 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:23:12.926 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.926 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.926 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:23:12.926 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:23:12.926 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:23:12.926 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.926 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.926 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:23:12.926 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:23:12.926 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:23:12.926 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.926 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.926 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:23:12.926 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:23:12.926 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:23:12.926 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.926 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.926 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:23:12.926 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:23:12.926 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:23:12.926 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.926 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.926 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:23:12.926 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:23:12.926 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:23:12.926 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.926 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.926 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:23:12.926 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:23:12.926 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:23:12.926 20:21:07 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:23:12.926 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.926 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.926 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:23:12.926 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:23:12.926 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.926 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.926 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.926 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:23:12.926 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:23:12.926 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.926 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.926 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.926 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:23:12.926 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:23:12.926 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.926 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.926 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.926 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:23:12.926 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:23:12.926 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.926 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.926 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:23:12.926 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:23:12.926 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:23:12.926 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.926 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.926 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.926 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:23:12.926 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:23:12.926 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.926 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.926 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.926 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:23:12.926 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:23:12.926 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.926 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.926 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.926 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:23:12.926 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:23:12.926 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.926 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.926 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.926 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:23:12.926 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:23:12.926 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.926 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.926 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:23:12.927 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:23:12.927 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:23:12.927 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.927 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.927 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.927 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:23:12.927 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:23:12.927 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.927 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.927 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.927 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:23:12.927 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:23:12.927 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.927 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.927 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.927 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:23:12.927 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:23:12.927 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.927 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.927 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.927 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:23:12.927 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:23:12.927 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.927 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.927 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.927 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:23:12.927 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:23:12.927 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.927 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.927 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.927 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:23:12.927 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:23:12.927 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.927 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.927 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.927 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:23:12.927 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:23:12.927 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.927 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.927 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.927 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:23:12.927 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:23:12.927 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.927 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.927 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:23:12.927 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:23:12.927 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 
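These per-namespace dumps come from the loop tagged functions.sh@53-58 in the trace: for every /sys/class/nvme/nvme2/nvme2n* node it calls nvme_get with id-ns and records the resulting array name in the controller's _ctrl_ns map, keyed by namespace number. A sketch assembled from those trace lines (the glob and assignments are taken as shown; the `|| continue` guard is an assumption):

    local -n _ctrl_ns=${ctrl_dev}_ns            # functions.sh@53, e.g. nvme2_ns
    for ns in "$ctrl/${ctrl##*/}n"*; do         # functions.sh@54: nvme2n1, nvme2n2, nvme2n3
        [[ -e $ns ]] || continue                # functions.sh@55
        ns_dev=${ns##*/}                        # functions.sh@56
        nvme_get "$ns_dev" id-ns "/dev/$ns_dev" # functions.sh@57
        _ctrl_ns[${ns##*n}]=$ns_dev             # functions.sh@58: index 1/2/3 -> array name
    done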
00:23:12.927 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.927 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.927 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:23:12.927 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:23:12.927 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:23:12.927 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.927 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.927 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:23:12.927 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:23:12.927 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:23:12.927 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.927 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.927 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.927 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:23:12.927 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:23:12.927 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.927 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.927 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.927 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:23:12.927 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:23:12.927 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.927 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.927 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.927 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:23:12.927 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:23:12.927 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.927 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.927 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.927 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:23:12.927 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:23:12.927 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.927 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.927 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.927 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:23:12.927 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:23:12.927 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.927 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.927 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:23:12.927 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:23:12.927 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:23:12.927 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.927 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.927 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:23:12.927 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:23:12.927 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 
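The nguid/eui64 values just captured are all-zero (typical for QEMU's emulated namespaces), and the lbaf0-lbaf7 entries that follow describe the eight supported LBA formats. flbas=0x4 points at lbaf4, 'ms:0 lbads:12 rp:0 (in use)', so the live block size is 2^12 = 4096 bytes and nsze=0x100000 blocks works out to 4 GiB. A hedged bash fragment deriving this from the arrays built above (the string layout of the lbaf field is an assumption about nvme-cli's output):

    fmt=$((nvme2n2[flbas] & 0xf))               # low nibble selects the format -> 4
    lbaf=${nvme2n2[lbaf$fmt]}                   # 'ms:0 lbads:12 rp:0 (in use)'
    lbads=${lbaf#*lbads:}; lbads=${lbads%% *}   # -> 12
    echo "block size: $((1 << lbads)) B"        # 4096
    echo "capacity:   $((nvme2n2[nsze] * (1 << lbads))) B"   # 0x100000 * 4096 = 4 GiB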
00:23:12.927 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.927 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.927 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:23:12.927 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:23:12.927 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:23:12.927 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.927 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.927 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:23:12.927 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:23:12.927 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:23:12.927 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.927 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.927 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:23:12.927 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:23:12.927 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:23:12.927 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.927 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.927 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:23:12.927 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:23:12.927 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:23:12.927 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.927 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.927 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:23:12.927 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:23:12.927 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:23:12.927 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.927 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.927 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:23:12.927 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:23:12.927 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:23:12.927 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.927 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.927 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:23:12.927 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:23:12.927 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:23:12.927 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.927 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.927 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:23:12.927 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:23:12.927 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:23:12.927 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.927 
20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.927 20:21:07 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:23:12.927 20:21:07 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:23:12.927 20:21:07 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:23:12.927 20:21:07 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:23:12.927 20:21:07 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:23:12.927 20:21:07 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:23:12.927 20:21:07 nvme_scc -- nvme/functions.sh@18 -- # shift 00:23:12.927 20:21:07 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:23:12.927 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.927 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.927 20:21:07 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:23:12.927 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:23:12.927 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.927 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.927 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:23:12.927 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:23:12.927 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:23:12.927 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.928 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.928 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:23:12.928 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:23:12.928 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:23:12.928 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.928 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.928 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:23:12.928 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:23:12.928 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:23:12.928 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.928 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.928 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:23:12.928 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:23:12.928 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:23:12.928 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.928 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.928 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:23:12.928 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:23:12.928 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:23:12.928 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.928 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.928 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:23:12.928 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:23:12.928 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:23:12.928 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.928 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.928 
20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:23:12.928 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:23:12.928 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:23:12.928 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.928 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.928 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:23:12.928 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:23:12.928 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:23:12.928 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.928 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.928 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.928 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:23:12.928 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:23:12.928 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.928 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.928 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.928 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:23:12.928 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:23:12.928 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.928 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.928 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.928 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:23:12.928 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:23:12.928 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.928 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.928 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.928 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:23:12.928 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:23:12.928 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.928 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.928 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:23:12.928 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:23:12.928 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:23:12.928 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.928 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.928 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.928 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:23:12.928 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:23:12.928 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.928 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.928 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.928 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:23:12.928 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:23:12.928 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.928 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.928 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.928 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:23:12.928 20:21:07 
nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:23:12.928 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.928 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.928 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.928 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:23:12.928 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:23:12.928 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.928 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.928 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.928 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:23:12.928 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:23:12.928 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.928 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.928 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.928 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:23:12.928 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:23:12.928 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.928 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.928 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.928 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:23:12.928 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:23:12.928 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.928 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.928 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.928 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:23:12.928 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:23:12.928 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.928 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.928 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.928 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:23:12.928 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:23:12.928 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.928 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.928 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.928 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:23:12.928 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:23:12.928 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.928 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.928 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.928 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:23:12.928 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:23:12.928 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.928 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.928 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.928 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:23:12.928 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:23:12.928 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.928 20:21:07 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:23:12.928 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.928 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:23:12.928 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:23:12.928 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.928 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.928 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:23:12.928 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:23:12.928 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:23:12.928 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.928 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.928 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:23:12.928 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:23:12.928 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:23:12.928 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.928 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.928 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:23:12.928 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:23:12.928 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:23:12.928 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.928 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.928 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.928 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:23:12.928 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:23:12.928 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.928 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.928 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.928 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:23:12.928 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:23:12.928 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.928 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.928 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.929 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:23:12.929 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:23:12.929 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.929 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.929 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.929 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:23:12.929 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:23:12.929 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.929 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.929 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.929 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:23:12.929 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:23:12.929 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.929 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.929 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:23:12.929 
20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:23:12.929 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:23:12.929 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.929 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.929 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:23:12.929 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:23:12.929 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:23:12.929 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.929 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.929 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:23:12.929 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:23:12.929 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:23:12.929 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.929 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.929 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:23:12.929 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:23:12.929 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:23:12.929 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.929 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.929 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:23:12.929 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:23:12.929 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:23:12.929 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.929 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.929 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:23:12.929 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:23:12.929 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:23:12.929 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.929 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.929 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:23:12.929 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:23:12.929 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:23:12.929 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.929 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.929 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:23:12.929 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:23:12.929 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:23:12.929 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.929 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.929 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:23:12.929 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:23:12.929 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:23:12.929 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.929 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.929 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:23:12.929 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:23:12.929 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:23:12.929 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.929 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.929 20:21:07 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:23:12.929 20:21:07 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:23:12.929 20:21:07 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:23:12.929 20:21:07 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:23:12.929 20:21:07 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:23:12.929 20:21:07 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:23:12.929 20:21:07 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:23:12.929 20:21:07 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:23:12.929 20:21:07 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:23:12.929 20:21:07 nvme_scc -- scripts/common.sh@18 -- # local i 00:23:12.929 20:21:07 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:23:12.929 20:21:07 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:23:12.929 20:21:07 nvme_scc -- scripts/common.sh@27 -- # return 0 00:23:12.929 20:21:07 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:23:12.929 20:21:07 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:23:12.929 20:21:07 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:23:12.929 20:21:07 nvme_scc -- nvme/functions.sh@18 -- # shift 00:23:12.929 20:21:07 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:23:12.929 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.929 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.929 20:21:07 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:23:12.929 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:23:12.929 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.929 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.929 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:23:12.929 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:23:12.929 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:23:12.929 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.929 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.929 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:23:12.929 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:23:12.929 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:23:12.929 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.929 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.929 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:23:12.929 20:21:07 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:23:12.929 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:23:12.929 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.929 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.929 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:23:12.929 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:23:12.929 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:23:12.929 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.929 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.929 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:23:12.929 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:23:12.929 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:23:12.929 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.929 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.929 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:23:12.929 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:23:12.929 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:23:12.929 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.929 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.929 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:23:12.929 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:23:12.929 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:23:12.929 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.929 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.929 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:23:12.929 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:23:12.929 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:23:12.929 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.929 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.929 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:23:12.929 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:23:12.929 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:23:12.929 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.929 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.929 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.929 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:23:12.929 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:23:12.929 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.929 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.929 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:23:12.929 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:23:12.929 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:23:12.929 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.929 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.929 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.929 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:23:12.929 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 
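nvme3 identifies as another QEMU controller (sn 12343, fr 8.0.0) with cmic=0x2, mdts=7 and ver=0x10400. The version word follows the NVMe VS register layout (major in bits 31:16, minor in bits 15:8), so 0x10400 is NVMe 1.4.0, and mdts caps a single transfer at 2^7 minimum-size memory pages. A small decode of those two fields against the array built above (the 4 KiB page size stands in for CAP.MPSMIN, which this trace does not show):

    ver=$((nvme3[ver]))                         # 0x10400
    printf 'NVMe %d.%d.%d\n' "$((ver >> 16))" "$(((ver >> 8) & 0xff))" "$((ver & 0xff))"
    mdts=$((nvme3[mdts]))                       # 7
    echo "max transfer: $(( (1 << mdts) * 4096 )) B"   # 512 KiB, assuming 4 KiB MPSMIN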
00:23:12.929 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.930 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.930 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.930 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:23:12.930 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:23:12.930 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.930 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.930 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:23:12.930 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:23:12.930 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:23:12.930 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.930 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.930 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:23:12.930 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:23:12.930 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:23:12.930 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.930 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.930 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.930 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:23:12.930 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:23:12.930 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.930 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.930 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:23:12.930 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:23:12.930 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:23:12.930 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.930 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.930 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:23:12.930 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:23:12.930 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:23:12.930 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.930 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.930 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.930 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:23:12.930 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:23:12.930 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.930 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.930 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.930 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:23:12.930 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:23:12.930 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.930 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.930 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.930 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:23:12.930 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:23:12.930 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
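Among the values just stored, oaes=0x100 sets only bit 8, which the NVMe spec defines as Namespace Attribute Notices, and cntrltype=1 marks an I/O controller. A hedged one-liner against the nvme3 array populated above:

    # bit 8 of OAES = Namespace Attribute Notices (spec reading, not shown in the trace)
    (( (nvme3[oaes] >> 8) & 1 )) && echo "nvme3: namespace-attribute async events supported"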
00:23:12.930 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.930 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.930 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:23:12.930 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:23:12.930 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.930 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.930 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.930 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:23:12.930 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:23:12.930 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.930 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.930 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.930 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:23:12.930 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:23:12.930 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.930 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.930 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:23:12.930 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:23:12.930 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:23:12.930 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.930 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.930 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:23:12.930 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:23:12.930 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:23:12.930 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.930 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.930 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:23:12.930 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:23:12.930 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:23:12.930 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.930 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.930 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:23:12.930 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:23:12.930 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:23:12.930 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.930 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.930 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:23:12.930 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:23:12.930 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:23:12.930 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.930 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.930 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.930 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:23:12.930 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:23:12.930 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.930 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.930 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.930 20:21:07 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'nvme3[npss]="0"' 00:23:12.930 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:23:12.930 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.930 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.930 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.930 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:23:12.930 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:23:12.930 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.930 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.930 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.930 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:23:12.930 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:23:12.930 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.930 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.930 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:23:12.930 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:23:12.930 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:23:12.930 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.930 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.930 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:23:12.930 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:23:12.930 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:23:12.930 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.930 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.930 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.930 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:23:12.930 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:23:12.930 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.930 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.930 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.930 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:23:12.930 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:23:12.930 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.930 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.930 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.930 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:23:12.930 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:23:12.930 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.930 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.930 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.930 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:23:12.930 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:23:12.930 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.930 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.930 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.930 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:23:12.930 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:23:12.930 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.930 
20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.930 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.930 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:23:12.930 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:23:12.930 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.930 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.930 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.930 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:23:12.930 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:23:12.930 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.930 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.930 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.930 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:23:12.930 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:23:12.930 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.930 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.930 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.931 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:23:12.931 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:23:12.931 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.931 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.931 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.931 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:23:12.931 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:23:12.931 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.931 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.931 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.931 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:23:12.931 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:23:12.931 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.931 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.931 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.931 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:23:12.931 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:23:12.931 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.931 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.931 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.931 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:23:12.931 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:23:12.931 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.931 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.931 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.931 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:23:12.931 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:23:12.931 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.931 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.931 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.931 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme3[hmminds]="0"' 00:23:12.931 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:23:12.931 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.931 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.931 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.931 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:23:12.931 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:23:12.931 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.931 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.931 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.931 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:23:12.931 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:23:12.931 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.931 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.931 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:23:12.931 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:23:12.931 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:23:12.931 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.931 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.931 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.931 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:23:12.931 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:23:12.931 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.931 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.931 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.931 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:23:12.931 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:23:12.931 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.931 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.931 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.931 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:23:12.931 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:23:12.931 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.931 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.931 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.931 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:23:12.931 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:23:12.931 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.931 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.931 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.931 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:23:12.931 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:23:12.931 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.931 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.931 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.931 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:23:12.931 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:23:12.931 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
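A reading aid for two fields captured a few entries back: wctemp and cctemp are reported in kelvins, so the 343 and 373 recorded for nvme3 are the usual warning and critical composite temperatures. A quick integer conversion, purely illustrative:

    # WCTEMP/CCTEMP are kelvins; integer approximation for the values above.
    k_to_c() { echo $(( $1 - 273 )); }
    k_to_c 343   # -> 70  (warning composite temperature, degrees C)
    k_to_c 373   # -> 100 (critical composite temperature, degrees C)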
00:23:12.931 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.931 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.931 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:23:12.931 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:23:12.931 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.931 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.931 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:23:12.931 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:23:12.931 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:23:12.931 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.931 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.931 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:23:12.931 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:23:12.931 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:23:12.931 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.931 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.931 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.931 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:23:12.931 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:23:12.931 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.931 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.931 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:23:12.931 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:23:12.931 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:23:12.931 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.931 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.931 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:23:12.931 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:23:12.931 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:23:12.931 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.931 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.931 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.931 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:23:12.931 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:23:12.931 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.931 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.931 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.931 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:23:12.931 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:23:12.931 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.931 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.931 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:23:12.931 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:23:12.931 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:23:12.931 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.931 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.931 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.931 20:21:07 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:23:12.931 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:23:12.931 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.931 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.931 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.931 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:23:12.931 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:23:12.931 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.931 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.931 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.931 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:23:12.931 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:23:12.931 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.932 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.932 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.932 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:23:12.932 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:23:12.932 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.932 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.932 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.932 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:23:12.932 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:23:12.932 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.932 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.932 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:23:12.932 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:23:12.932 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:23:12.932 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.932 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.932 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:23:12.932 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:23:12.932 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:23:12.932 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.932 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.932 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.932 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:23:12.932 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:23:12.932 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.932 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.932 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.932 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:23:12.932 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:23:12.932 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.932 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.932 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.932 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:23:12.932 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:23:12.932 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
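The oncs value recorded just above (0x15d) is what the suite's ctrl_has_scc check consumes in the selection loop further down: ONCS bit 8 advertises the Simple Copy command. A condensed sketch of that test, using the same bit arithmetic the trace shows but a simplified signature:

    # ONCS bit 8 = Simple Copy command support; 0x15d has it set.
    ctrl_has_scc() { local oncs=$1; (( oncs & 1 << 8 )); }
    ctrl_has_scc 0x15d && echo "controller supports Simple Copy"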
00:23:12.932 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.932 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:23:12.932 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:23:12.932 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:23:12.932 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.932 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.932 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.932 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:23:12.932 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:23:12.932 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.932 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.932 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.932 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:23:12.932 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:23:12.932 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.932 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.932 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.932 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:23:12.932 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:23:12.932 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.932 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.932 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.932 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:23:12.932 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:23:12.932 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.932 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.932 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.932 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:23:12.932 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:23:12.932 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.932 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.932 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:12.932 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:23:12.932 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:23:12.932 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.932 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.932 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:23:12.932 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:23:12.932 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:23:12.932 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.932 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.932 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:23:12.932 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:23:12.932 20:21:07 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:23:12.932 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.932 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.932 20:21:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:23:12.932 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:23:12.932 20:21:07 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:23:12.932 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:12.932 20:21:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:12.932 20:21:07 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:23:12.932 20:21:07 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:23:12.932 20:21:07 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:23:12.932 20:21:07 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:23:12.932 20:21:07 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:23:12.932 20:21:07 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:23:12.932 20:21:07 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:23:12.932 20:21:07 nvme_scc -- nvme/functions.sh@204 -- # local _ctrls feature=scc 00:23:12.932 20:21:07 nvme_scc -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:23:12.932 20:21:07 nvme_scc -- nvme/functions.sh@206 -- # get_ctrls_with_feature scc 00:23:12.932 20:21:07 nvme_scc -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:23:12.932 20:21:07 nvme_scc -- nvme/functions.sh@194 -- # local ctrl feature=scc 00:23:12.932 20:21:07 nvme_scc -- nvme/functions.sh@196 -- # type -t ctrl_has_scc 00:23:12.932 20:21:07 nvme_scc -- nvme/functions.sh@196 -- # [[ function == function ]] 00:23:12.932 20:21:07 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:23:12.932 20:21:07 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme1 00:23:12.932 20:21:07 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme1 oncs 00:23:12.932 20:21:07 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme1 00:23:12.932 20:21:07 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme1 00:23:12.932 20:21:07 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme1 oncs 00:23:12.932 20:21:07 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:23:12.932 20:21:07 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:23:12.932 20:21:07 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:23:12.932 20:21:07 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:23:12.932 20:21:07 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:23:12.932 20:21:07 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:23:12.932 20:21:07 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:23:12.932 20:21:07 nvme_scc -- nvme/functions.sh@199 -- # echo nvme1 00:23:12.932 20:21:07 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:23:12.932 20:21:07 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme0 00:23:12.932 20:21:07 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme0 oncs 00:23:12.932 20:21:07 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme0 00:23:12.932 20:21:07 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme0 00:23:12.932 20:21:07 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme0 oncs 00:23:12.932 20:21:07 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:23:12.932 
20:21:07 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:23:12.932 20:21:07 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:23:12.932 20:21:07 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:23:12.932 20:21:07 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:23:12.932 20:21:07 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:23:12.932 20:21:07 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:23:12.932 20:21:07 nvme_scc -- nvme/functions.sh@199 -- # echo nvme0 00:23:12.932 20:21:07 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:23:12.932 20:21:07 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme3 00:23:12.932 20:21:07 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme3 oncs 00:23:12.932 20:21:07 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme3 00:23:12.932 20:21:07 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme3 00:23:12.932 20:21:07 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme3 oncs 00:23:12.932 20:21:07 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs 00:23:12.932 20:21:07 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:23:12.932 20:21:07 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:23:12.932 20:21:07 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:23:12.932 20:21:07 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:23:12.932 20:21:07 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:23:12.932 20:21:07 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:23:12.932 20:21:07 nvme_scc -- nvme/functions.sh@199 -- # echo nvme3 00:23:12.932 20:21:07 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:23:12.932 20:21:07 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme2 00:23:12.932 20:21:07 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme2 oncs 00:23:12.932 20:21:07 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme2 00:23:12.932 20:21:07 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme2 00:23:12.932 20:21:07 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme2 oncs 00:23:12.932 20:21:07 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs 00:23:12.932 20:21:07 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:23:12.933 20:21:07 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:23:12.933 20:21:07 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:23:12.933 20:21:07 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:23:12.933 20:21:07 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:23:12.933 20:21:07 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:23:12.933 20:21:07 nvme_scc -- nvme/functions.sh@199 -- # echo nvme2 00:23:12.933 20:21:07 nvme_scc -- nvme/functions.sh@207 -- # (( 4 > 0 )) 00:23:12.933 20:21:07 nvme_scc -- nvme/functions.sh@208 -- # echo nvme1 00:23:12.933 20:21:07 nvme_scc -- nvme/functions.sh@209 -- # return 0 00:23:12.933 20:21:07 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1 00:23:12.933 20:21:07 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0 00:23:12.933 20:21:07 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:23:12.933 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:13.499 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:23:13.499 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:23:13.499 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:23:13.499 0000:00:13.0 (1b36 
0010): nvme -> uio_pci_generic
00:23:13.499 20:21:08 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0'
00:23:13.499 20:21:08 nvme_scc -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']'
00:23:13.499 20:21:08 nvme_scc -- common/autotest_common.sh@1107 -- # xtrace_disable
00:23:13.499 20:21:08 nvme_scc -- common/autotest_common.sh@10 -- # set +x
00:23:13.499 ************************************
00:23:13.499 START TEST nvme_simple_copy
00:23:13.499 ************************************
00:23:13.499 20:21:08 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0'
00:23:13.757 Initializing NVMe Controllers
00:23:13.757 Attaching to 0000:00:10.0
00:23:13.757 Controller supports SCC. Attached to 0000:00:10.0
00:23:13.757 Namespace ID: 1 size: 6GB
00:23:13.757 Initialization complete.
00:23:13.757
00:23:13.757 Controller QEMU NVMe Ctrl (12340 )
00:23:13.757 Controller PCI vendor:6966 PCI subsystem vendor:6900
00:23:13.757 Namespace Block Size:4096
00:23:13.757 Writing LBAs 0 to 63 with Random Data
00:23:13.757 Copied LBAs from 0 - 63 to the Destination LBA 256
00:23:13.757 LBAs matching Written Data: 64
00:23:13.757
00:23:13.757 real 0m0.242s
00:23:13.757 user 0m0.085s
00:23:13.757 sys 0m0.055s
00:23:13.757 20:21:08 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1126 -- # xtrace_disable
00:23:13.757 20:21:08 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x
00:23:13.757 ************************************
00:23:13.757 END TEST nvme_simple_copy
00:23:13.757 ************************************
00:23:13.757 ************************************
00:23:13.757 END TEST nvme_scc
00:23:13.757 ************************************
00:23:13.758
00:23:13.758 real 0m7.401s
00:23:13.758 user 0m0.969s
00:23:13.758 sys 0m1.298s
00:23:13.758 20:21:08 nvme_scc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:23:13.758 20:21:08 nvme_scc -- common/autotest_common.sh@10 -- # set +x
00:23:13.758 20:21:08 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]]
00:23:13.758 20:21:08 -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]]
00:23:13.758 20:21:08 -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]]
00:23:13.758 20:21:08 -- spdk/autotest.sh@228 -- # [[ 1 -eq 1 ]]
00:23:13.758 20:21:08 -- spdk/autotest.sh@229 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh
00:23:13.758 20:21:08 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:23:13.758 20:21:08 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:23:13.758 20:21:08 -- common/autotest_common.sh@10 -- # set +x
00:23:13.758 ************************************
00:23:13.758 START TEST nvme_fdp
00:23:13.758 ************************************
00:23:13.758 20:21:08 nvme_fdp -- common/autotest_common.sh@1125 -- # test/nvme/nvme_fdp.sh
00:23:13.758 * Looking for test storage...
00:23:13.758 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:23:13.758 20:21:08 nvme_fdp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:23:13.758 20:21:08 nvme_fdp -- common/autotest_common.sh@1681 -- # lcov --version 00:23:13.758 20:21:08 nvme_fdp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:23:14.017 20:21:09 nvme_fdp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:23:14.017 20:21:09 nvme_fdp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:14.017 20:21:09 nvme_fdp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:14.017 20:21:09 nvme_fdp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:14.017 20:21:09 nvme_fdp -- scripts/common.sh@336 -- # IFS=.-: 00:23:14.017 20:21:09 nvme_fdp -- scripts/common.sh@336 -- # read -ra ver1 00:23:14.017 20:21:09 nvme_fdp -- scripts/common.sh@337 -- # IFS=.-: 00:23:14.017 20:21:09 nvme_fdp -- scripts/common.sh@337 -- # read -ra ver2 00:23:14.017 20:21:09 nvme_fdp -- scripts/common.sh@338 -- # local 'op=<' 00:23:14.017 20:21:09 nvme_fdp -- scripts/common.sh@340 -- # ver1_l=2 00:23:14.017 20:21:09 nvme_fdp -- scripts/common.sh@341 -- # ver2_l=1 00:23:14.017 20:21:09 nvme_fdp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:14.017 20:21:09 nvme_fdp -- scripts/common.sh@344 -- # case "$op" in 00:23:14.017 20:21:09 nvme_fdp -- scripts/common.sh@345 -- # : 1 00:23:14.017 20:21:09 nvme_fdp -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:14.017 20:21:09 nvme_fdp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:14.017 20:21:09 nvme_fdp -- scripts/common.sh@365 -- # decimal 1 00:23:14.017 20:21:09 nvme_fdp -- scripts/common.sh@353 -- # local d=1 00:23:14.017 20:21:09 nvme_fdp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:14.017 20:21:09 nvme_fdp -- scripts/common.sh@355 -- # echo 1 00:23:14.017 20:21:09 nvme_fdp -- scripts/common.sh@365 -- # ver1[v]=1 00:23:14.017 20:21:09 nvme_fdp -- scripts/common.sh@366 -- # decimal 2 00:23:14.017 20:21:09 nvme_fdp -- scripts/common.sh@353 -- # local d=2 00:23:14.017 20:21:09 nvme_fdp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:14.017 20:21:09 nvme_fdp -- scripts/common.sh@355 -- # echo 2 00:23:14.017 20:21:09 nvme_fdp -- scripts/common.sh@366 -- # ver2[v]=2 00:23:14.017 20:21:09 nvme_fdp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:14.017 20:21:09 nvme_fdp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:14.017 20:21:09 nvme_fdp -- scripts/common.sh@368 -- # return 0 00:23:14.017 20:21:09 nvme_fdp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:14.017 20:21:09 nvme_fdp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:23:14.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:14.017 --rc genhtml_branch_coverage=1 00:23:14.017 --rc genhtml_function_coverage=1 00:23:14.017 --rc genhtml_legend=1 00:23:14.017 --rc geninfo_all_blocks=1 00:23:14.017 --rc geninfo_unexecuted_blocks=1 00:23:14.017 00:23:14.017 ' 00:23:14.017 20:21:09 nvme_fdp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:23:14.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:14.017 --rc genhtml_branch_coverage=1 00:23:14.017 --rc genhtml_function_coverage=1 00:23:14.017 --rc genhtml_legend=1 00:23:14.017 --rc geninfo_all_blocks=1 00:23:14.017 --rc geninfo_unexecuted_blocks=1 00:23:14.017 00:23:14.017 ' 00:23:14.017 20:21:09 nvme_fdp -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 
00:23:14.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:14.017 --rc genhtml_branch_coverage=1 00:23:14.017 --rc genhtml_function_coverage=1 00:23:14.017 --rc genhtml_legend=1 00:23:14.017 --rc geninfo_all_blocks=1 00:23:14.017 --rc geninfo_unexecuted_blocks=1 00:23:14.017 00:23:14.017 ' 00:23:14.017 20:21:09 nvme_fdp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:23:14.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:14.017 --rc genhtml_branch_coverage=1 00:23:14.017 --rc genhtml_function_coverage=1 00:23:14.017 --rc genhtml_legend=1 00:23:14.017 --rc geninfo_all_blocks=1 00:23:14.017 --rc geninfo_unexecuted_blocks=1 00:23:14.017 00:23:14.017 ' 00:23:14.017 20:21:09 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:23:14.017 20:21:09 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:23:14.017 20:21:09 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:23:14.017 20:21:09 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:23:14.017 20:21:09 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:14.017 20:21:09 nvme_fdp -- scripts/common.sh@15 -- # shopt -s extglob 00:23:14.017 20:21:09 nvme_fdp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:14.017 20:21:09 nvme_fdp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:14.017 20:21:09 nvme_fdp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:14.017 20:21:09 nvme_fdp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:14.018 20:21:09 nvme_fdp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:14.018 20:21:09 nvme_fdp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:14.018 20:21:09 nvme_fdp -- paths/export.sh@5 -- # export PATH 00:23:14.018 20:21:09 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
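One thing worth noticing in the PATH echoed above: each nested `source paths/export.sh` prepends the go, golangci, and protoc directories again, so they appear four times by the time export.sh@6 prints the result. The scripts tolerate that, but an idempotent prepend avoids the growth; the helper below is a hypothetical illustration, not part of the repo.

    # Prepend a directory to PATH only if it is not already present.
    path_prepend() {
        case ":$PATH:" in
            *":$1:"*) ;;                 # already there, keep PATH as-is
            *) PATH="$1:$PATH" ;;
        esac
    }
    path_prepend /opt/protoc/21.7/bin
    path_prepend /opt/protoc/21.7/bin    # second call is a no-op
    export PATH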
00:23:14.018 20:21:09 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=() 00:23:14.018 20:21:09 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls 00:23:14.018 20:21:09 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=() 00:23:14.018 20:21:09 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes 00:23:14.018 20:21:09 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=() 00:23:14.018 20:21:09 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs 00:23:14.018 20:21:09 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:23:14.018 20:21:09 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:23:14.018 20:21:09 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name= 00:23:14.018 20:21:09 nvme_fdp -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:14.018 20:21:09 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:23:14.277 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:14.277 Waiting for block devices as requested 00:23:14.277 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:23:14.277 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:23:14.535 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:23:14.535 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:23:19.809 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:23:19.809 20:21:14 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:23:19.809 20:21:14 nvme_fdp -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:23:19.809 20:21:14 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:23:19.809 20:21:14 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:23:19.809 20:21:14 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:23:19.809 20:21:14 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:23:19.809 20:21:14 nvme_fdp -- scripts/common.sh@18 -- # local i 00:23:19.809 20:21:14 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:23:19.809 20:21:14 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:23:19.809 20:21:14 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:23:19.809 20:21:14 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:23:19.809 20:21:14 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:23:19.809 20:21:14 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:23:19.809 20:21:14 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:23:19.809 20:21:14 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:23:19.809 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.809 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.809 20:21:14 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:23:19.809 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:23:19.809 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.809 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.809 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:23:19.809 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:23:19.809 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:23:19.809 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.809 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.809 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 
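scan_nvme_ctrls, entered above, walks /sys/class/nvme and resolves each controller to its PCI address before parsing id-ctrl. A self-contained sketch of that walk under the same sysfs layout; the real function additionally filters each address through pci_can_use, which is omitted here.

    # Map each NVMe controller to its PCI bus/device/function.
    for ctrl in /sys/class/nvme/nvme*; do
        [[ -e $ctrl ]] || continue
        bdf=$(basename "$(readlink -f "$ctrl/device")")
        echo "${ctrl##*/} -> $bdf"   # e.g. nvme0 -> 0000:00:11.0
    done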
00:23:19.809 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:23:19.809 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:23:19.809 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.809 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.809 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:23:19.809 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:23:19.809 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:23:19.809 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.809 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.809 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:23:19.809 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:23:19.809 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:23:19.809 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.809 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.809 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:23:19.809 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:23:19.809 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:23:19.809 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.809 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.809 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:23:19.809 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:23:19.809 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:23:19.809 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.809 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.809 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:23:19.809 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:23:19.809 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:23:19.809 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.809 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.809 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.809 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:23:19.809 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:23:19.809 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.809 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.809 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:23:19.809 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:23:19.809 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:23:19.809 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.809 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.809 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.809 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:23:19.809 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:23:19.809 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.809 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.809 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:23:19.809 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:23:19.809 20:21:14 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:23:19.809 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.809 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.809 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.809 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:23:19.809 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:23:19.809 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.809 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.809 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.809 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:23:19.809 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:23:19.809 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.809 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.809 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:23:19.809 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:23:19.809 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:23:19.809 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.809 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.809 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:23:19.809 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:23:19.810 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:23:19.810 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.810 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.810 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.810 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:23:19.810 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:23:19.810 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.810 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.810 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:23:19.810 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:23:19.810 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:23:19.810 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.810 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.810 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:23:19.810 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:23:19.810 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:23:19.810 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.810 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.810 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.810 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:23:19.810 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:23:19.810 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.810 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.810 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.810 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:23:19.810 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:23:19.810 20:21:14 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.810 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.810 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.810 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:23:19.810 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:23:19.810 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.810 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.810 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.810 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:23:19.810 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:23:19.810 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.810 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.810 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.810 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:23:19.810 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:23:19.810 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.810 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.810 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.810 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:23:19.810 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:23:19.810 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.810 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.810 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:23:19.810 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:23:19.810 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:23:19.810 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.810 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.810 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:23:19.810 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:23:19.810 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:23:19.810 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.810 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.810 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:23:19.810 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:23:19.810 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:23:19.810 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.810 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.810 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:23:19.810 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:23:19.810 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:23:19.810 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.810 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.810 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:23:19.810 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:23:19.810 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:23:19.810 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.810 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.810 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
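The ctratt captured for nvme0 a few entries back is 0x8000, while the earlier nvme_scc scan recorded 0x88010 for nvme3: the difference is CTRATT bit 19, which per the NVMe base spec advertises Flexible Data Placement, the capability this nvme_fdp run is after. A minimal bit test under that reading; the helper name is ours, not functions.sh's.

    # CTRATT bit 19 = Flexible Data Placement (FDP) supported.
    ctrl_has_fdp() { (( $1 & 1 << 19 )); }
    ctrl_has_fdp 0x8000  || echo "nvme0: no FDP"
    ctrl_has_fdp 0x88010 && echo "nvme3: FDP capable"   # the fdp-subsys3 controller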
00:23:19.810 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:23:19.810 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:23:19.810 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.810 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.810 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.810 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:23:19.810 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:23:19.810 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.810 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.810 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.810 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:23:19.810 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:23:19.810 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.810 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.810 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.810 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:23:19.810 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:23:19.810 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.810 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.810 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:23:19.810 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:23:19.810 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:23:19.810 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.810 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.810 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:23:19.810 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:23:19.810 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:23:19.810 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.810 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.810 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.810 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:23:19.810 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:23:19.810 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.810 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.810 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.810 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:23:19.810 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:23:19.810 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.810 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.810 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.810 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:23:19.810 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:23:19.810 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.810 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.810 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.810 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:23:19.810 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:23:19.810 20:21:14 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:23:19.810 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.810 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.810 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:23:19.810 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:23:19.810 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.810 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.810 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.810 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:23:19.810 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:23:19.810 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.810 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.810 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.810 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:23:19.810 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:23:19.810 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.810 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.810 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.810 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:23:19.810 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:23:19.810 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.810 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.810 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.810 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:23:19.810 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:23:19.810 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.810 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.810 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.810 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:23:19.810 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:23:19.810 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.810 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.810 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.810 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:23:19.810 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:23:19.810 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.810 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.811 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.811 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:23:19.811 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:23:19.811 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.811 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.811 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.811 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:23:19.811 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:23:19.811 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.811 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.811 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.811 20:21:14 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:23:19.811 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:23:19.811 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.811 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.811 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.811 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:23:19.811 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:23:19.811 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.811 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.811 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.811 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:23:19.811 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:23:19.811 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.811 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.811 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.811 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:23:19.811 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:23:19.811 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.811 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.811 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.811 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:23:19.811 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:23:19.811 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.811 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.811 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.811 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:23:19.811 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:23:19.811 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.811 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.811 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.811 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:23:19.811 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:23:19.811 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.811 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.811 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.811 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:23:19.811 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:23:19.811 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.811 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.811 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.811 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:23:19.811 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:23:19.811 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.811 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.811 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.811 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:23:19.811 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:23:19.811 20:21:14 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:23:19.811 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.811 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.811 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:23:19.811 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:23:19.811 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.811 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.811 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.811 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:23:19.811 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:23:19.811 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.811 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.811 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:23:19.811 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:23:19.811 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:23:19.811 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.811 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.811 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:23:19.811 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:23:19.811 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:23:19.811 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.811 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.811 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.811 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:23:19.811 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:23:19.811 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.811 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.811 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:23:19.811 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:23:19.811 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:23:19.811 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.811 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.811 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:23:19.811 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:23:19.811 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:23:19.811 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.811 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.811 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.811 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:23:19.811 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:23:19.811 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.811 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.811 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.811 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:23:19.811 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:23:19.811 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.811 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.811 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 
0x7 ]] 00:23:19.811 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:23:19.811 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:23:19.811 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.811 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.811 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.811 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:23:19.811 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:23:19.811 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.811 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.811 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.811 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:23:19.811 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:23:19.811 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.811 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.811 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.811 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:23:19.811 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:23:19.811 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.811 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.811 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.811 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:23:19.811 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:23:19.811 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.811 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.811 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.811 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:23:19.811 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:23:19.811 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.811 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.811 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:23:19.811 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:23:19.811 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:23:19.811 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.811 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.811 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:23:19.811 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:23:19.811 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:23:19.811 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.811 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.811 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.811 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:23:19.811 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:23:19.811 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.811 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.811 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.811 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:23:19.811 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:23:19.811 20:21:14 nvme_fdp -- 
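The sqes=0x66 and cqes=0x44 bytes recorded just above pack two nibbles each: the low nibble is the required (minimum) entry size and the high nibble the maximum, both as powers of two. Decoding them confirms the standard 64-byte submission and 16-byte completion queue entries; a sketch, not part of nvme/functions.sh:

  # Decode an entry-size byte: low nibble = min, high nibble = max (2^n bytes).
  decode_es() { printf 'min=%d max=%d bytes\n' $((2 ** ($1 & 0xf))) $((2 ** ($1 >> 4))); }
  decode_es 0x66   # SQ entry: min=64 max=64 bytes
  decode_es 0x44   # CQ entry: min=16 max=16 bytes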
nvme/functions.sh@21 -- # IFS=: 00:23:19.811 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.811 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.811 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:23:19.812 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:23:19.812 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.812 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.812 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:23:19.812 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:23:19.812 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:23:19.812 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.812 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.812 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.812 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:23:19.812 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:23:19.812 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.812 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.812 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.812 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:23:19.812 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:23:19.812 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.812 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.812 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.812 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:23:19.812 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:23:19.812 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.812 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.812 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.812 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:23:19.812 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:23:19.812 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.812 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.812 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.812 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:23:19.812 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:23:19.812 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.812 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.812 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.812 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:23:19.812 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:23:19.812 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.812 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.812 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:23:19.812 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:23:19.812 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:23:19.812 
20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.812 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.812 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:23:19.812 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:23:19.812 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:23:19.812 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.812 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.812 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:23:19.812 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:23:19.812 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:23:19.812 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.812 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.812 20:21:14 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:23:19.812 20:21:14 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:23:19.812 20:21:14 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:23:19.812 20:21:14 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:23:19.812 20:21:14 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:23:19.812 20:21:14 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:23:19.812 20:21:14 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:23:19.812 20:21:14 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:23:19.812 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.812 20:21:14 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:23:19.812 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.812 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:23:19.812 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.812 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.812 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:23:19.812 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:23:19.812 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:23:19.812 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.812 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.812 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:23:19.812 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:23:19.812 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:23:19.812 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.812 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.812 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:23:19.812 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:23:19.812 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:23:19.812 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.812 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.812 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:23:19.812 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:23:19.812 20:21:14 nvme_fdp -- 
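Every IFS=: / read -r reg val / eval triple in this trace comes from one small loop in nvme/functions.sh's nvme_get: it runs `nvme id-ctrl` (or, as just above, `id-ns`) on the device, splits each output line on ":", skips lines with no value, and evals the pair into a global associative array (nvme0, nvme0n1, ...). A minimal sketch of that pattern, assuming the same `reg : val` output format and not quoting the SPDK source verbatim:

  # Parse `nvme id-ctrl` output into an associative array, one field per line.
  declare -A ctrl=()
  while IFS=: read -r reg val; do
    [[ -n $val ]] || continue          # skip lines with no value
    reg=${reg//[[:space:]]/}           # trim padding around the key
    ctrl[$reg]=${val# }                # keep the value text as-is
  done < <(/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0)
  echo "vid=${ctrl[vid]} mdts=${ctrl[mdts]}"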
nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:23:19.812 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.812 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.812 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:23:19.812 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:23:19.812 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:23:19.812 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.812 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.812 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:23:19.812 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:23:19.812 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:23:19.812 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.812 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.812 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:23:19.812 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:23:19.812 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:23:19.812 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.812 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.812 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:23:19.812 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:23:19.812 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:23:19.812 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.812 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.812 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.812 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:23:19.812 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:23:19.812 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.812 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.812 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.812 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:23:19.812 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:23:19.812 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.812 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.812 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.812 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:23:19.812 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:23:19.812 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.812 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.812 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.812 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:23:19.812 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:23:19.812 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.812 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.812 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:23:19.812 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:23:19.812 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:23:19.812 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.812 20:21:14 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:23:19.812 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.812 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:23:19.812 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:23:19.812 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.812 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.812 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.812 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:23:19.812 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:23:19.812 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.812 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.812 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.812 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:23:19.812 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:23:19.812 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.812 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.812 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.812 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:23:19.812 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:23:19.812 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.812 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.812 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.812 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:23:19.813 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:23:19.813 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.813 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.813 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.813 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:23:19.813 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:23:19.813 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.813 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.813 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.813 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:23:19.813 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:23:19.813 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.813 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.813 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.813 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:23:19.813 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:23:19.813 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.813 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.813 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.813 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:23:19.813 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:23:19.813 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.813 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.813 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.813 20:21:14 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme0n1[npwa]="0"' 00:23:19.813 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:23:19.813 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.813 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.813 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.813 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:23:19.813 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:23:19.813 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.813 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.813 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.813 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:23:19.813 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:23:19.813 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.813 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.813 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.813 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:23:19.813 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:23:19.813 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.813 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.813 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:23:19.813 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:23:19.813 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:23:19.813 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.813 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.813 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:23:19.813 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:23:19.813 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:23:19.813 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.813 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.813 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:23:19.813 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:23:19.813 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:23:19.813 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.813 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.813 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.813 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:23:19.813 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:23:19.813 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.813 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.813 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.813 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:23:19.813 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:23:19.813 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.813 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.813 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.813 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:23:19.813 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:23:19.813 20:21:14 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:23:19.813 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.813 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.813 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:23:19.813 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:23:19.813 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.813 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.813 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.813 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:23:19.813 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:23:19.813 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.813 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.813 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:23:19.813 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:23:19.813 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:23:19.813 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.813 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.813 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:23:19.813 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:23:19.813 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:23:19.813 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.813 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.813 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:23:19.813 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:23:19.813 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:23:19.813 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.813 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.813 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:23:19.813 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:23:19.813 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:23:19.813 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.813 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.813 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:23:19.813 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:23:19.813 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:23:19.813 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.813 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.813 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:23:19.813 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:23:19.813 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:23:19.813 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.813 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.813 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 
lbads:12 rp:0 (in use) ]] 00:23:19.813 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:23:19.813 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:23:19.813 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.813 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.813 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:23:19.813 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:23:19.813 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:23:19.813 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.813 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.813 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:23:19.813 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:23:19.813 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:23:19.813 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.813 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.813 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:23:19.813 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:23:19.813 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:23:19.813 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.813 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.813 20:21:14 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:23:19.813 20:21:14 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:23:19.813 20:21:14 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:23:19.813 20:21:14 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:23:19.813 20:21:14 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:23:19.813 20:21:14 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:23:19.813 20:21:14 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:23:19.813 20:21:14 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:23:19.813 20:21:14 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:23:19.813 20:21:14 nvme_fdp -- scripts/common.sh@18 -- # local i 00:23:19.813 20:21:14 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:23:19.814 20:21:14 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:23:19.814 20:21:14 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:23:19.814 20:21:14 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:23:19.814 20:21:14 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:23:19.814 20:21:14 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:23:19.814 20:21:14 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:23:19.814 20:21:14 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:23:19.814 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.814 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.814 20:21:14 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:23:19.814 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:23:19.814 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # 
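The nvme0n1 id-ns block that just closed pins down the namespace geometry: flbas=0x4 selects lbaf4 (ms:0 lbads:12, marked in use), i.e. 4096-byte blocks with no metadata, and nsze=0x140000 blocks therefore works out to exactly 5 GiB. The arithmetic, as a standalone check:

  # Geometry from the nvme0n1 fields above: flbas=0x4 -> lbaf4, lbads:12.
  nsze=0x140000; lbads=12
  echo $(( nsze * (1 << lbads) ))                    # 5368709120 bytes
  echo "$(( nsze * (1 << lbads) / 1024**3 )) GiB"    # 5 GiB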
IFS=: 00:23:19.814 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.814 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:23:19.814 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:23:19.814 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:23:19.814 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.814 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.814 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:23:19.814 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:23:19.814 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:23:19.814 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.814 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.814 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:23:19.814 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:23:19.814 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:23:19.814 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.814 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.814 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:23:19.814 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:23:19.814 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:23:19.814 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.814 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.814 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:23:19.814 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:23:19.814 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:23:19.814 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.814 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.814 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:23:19.814 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:23:19.814 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:23:19.814 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.814 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.814 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:23:19.814 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:23:19.814 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:23:19.814 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.814 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.814 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.814 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:23:19.814 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:23:19.814 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.814 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.814 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:23:19.814 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:23:19.814 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:23:19.814 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.814 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.814 20:21:14 nvme_fdp -- 
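The tail of the nvme0 block above also shows the bookkeeping that ends each loop iteration: the controller is recorded in the ctrls/nvmes/bdfs/ordered_ctrls arrays (nvme0 sits at PCI 0000:00:11.0), and the walk then moves on to /sys/class/nvme/nvme1, where pci_can_use gates the address against allow/block lists (both empty in this run, hence the bare [[ =~ ]] test) before id-ctrl parsing restarts. A sketch of that outer walk, assuming the usual sysfs layout rather than quoting the script:

  # Enumerate controllers the way the trace does, keyed by device name.
  declare -A ctrls=() nvmes=() bdfs=()
  for ctrl in /sys/class/nvme/nvme*; do
    [[ -e $ctrl ]] || continue
    ctrl_dev=${ctrl##*/}                               # nvme0, nvme1, ...
    pci=$(basename "$(readlink -f "$ctrl/device")")    # e.g. 0000:00:11.0
    ctrls[$ctrl_dev]=$ctrl_dev
    nvmes[$ctrl_dev]=${ctrl_dev}_ns                    # name of its ns array
    bdfs[$ctrl_dev]=$pci
  done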
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.814 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:23:19.814 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:23:19.814 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.814 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.814 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:23:19.814 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:23:19.814 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:23:19.814 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.814 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.814 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.814 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:23:19.814 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:23:19.814 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.814 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.814 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.814 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:23:19.814 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:23:19.814 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.814 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.814 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:23:19.814 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:23:19.814 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:23:19.814 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.814 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.814 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:23:19.814 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:23:19.814 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:23:19.814 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.814 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.814 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.814 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:23:19.814 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:23:19.814 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.814 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.814 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:23:19.814 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:23:19.814 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:23:19.814 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.814 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.814 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:23:19.814 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:23:19.814 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:23:19.814 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.814 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.814 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.814 
20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:23:19.814 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:23:19.814 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.814 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.814 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.814 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:23:19.814 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:23:19.814 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.814 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.814 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.814 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:23:19.814 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:23:19.814 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.814 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.814 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.814 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:23:19.814 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:23:19.814 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.814 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.814 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.814 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:23:19.814 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:23:19.814 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.814 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.814 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.814 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:23:19.814 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:23:19.814 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.814 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.814 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:23:19.814 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:23:19.814 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:23:19.814 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.814 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.814 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:23:19.815 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:23:19.815 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:23:19.815 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.815 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.815 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:23:19.815 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:23:19.815 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:23:19.815 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.815 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.815 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:23:19.815 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:23:19.815 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:23:19.815 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- 
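nvme1 advertises oacs=0x12a; with the NVMe base spec's bit assignments that is Format NVM (bit 1), Namespace Management (bit 3), Directives (bit 5), and Doorbell Buffer Config (bit 8), the last being typical of a QEMU emulated controller. A small decode, with the bit names filled in from the spec rather than from the script:

  # Decode oacs=0x12a; bit names per the NVMe base spec's OACS table.
  oacs=0x12a
  names=([0]="Security Send/Receive" [1]="Format NVM" [2]="Firmware Download"
         [3]="Namespace Management" [4]="Device Self-test" [5]="Directives"
         [6]="NVMe-MI" [7]="Virtualization Management"
         [8]="Doorbell Buffer Config" [9]="Get LBA Status")
  for b in "${!names[@]}"; do
    (( oacs & (1 << b) )) && echo "bit $b: ${names[$b]}"
  done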
# IFS=: 00:23:19.815 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.815 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:23:19.815 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:23:19.815 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:23:19.815 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.815 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.815 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.815 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:23:19.815 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:23:19.815 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.815 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.815 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.815 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:23:19.815 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:23:19.815 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.815 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.815 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.815 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:23:19.815 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:23:19.815 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.815 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.815 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.815 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:23:19.815 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:23:19.815 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.815 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.815 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:23:19.815 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:23:19.815 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:23:19.815 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.815 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.815 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:23:19.815 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:23:19.815 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:23:19.815 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.815 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.815 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.815 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:23:19.815 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:23:19.815 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.815 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.815 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.815 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:23:19.815 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:23:19.815 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.815 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.815 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.815 20:21:14 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:23:19.815 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:23:19.815 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.815 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.815 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.815 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:23:19.815 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:23:19.815 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.815 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.815 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.815 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:23:19.815 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:23:19.815 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.815 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.815 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.815 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:23:19.815 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:23:19.815 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.815 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.815 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.815 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:23:19.815 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:23:19.815 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.815 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.815 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.815 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:23:19.815 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:23:19.815 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.815 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.815 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.815 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:23:19.815 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:23:19.815 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.815 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.815 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.815 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:23:19.815 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:23:19.815 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.815 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.815 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.815 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:23:19.815 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:23:19.815 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.815 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.815 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.815 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:23:19.815 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:23:19.815 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.815 
20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.815 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.815 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:23:19.815 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:23:19.815 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.815 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.815 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.815 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:23:19.815 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:23:19.815 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.815 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.815 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.815 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:23:19.815 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:23:19.815 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.815 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.815 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.815 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:23:19.815 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:23:19.815 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.815 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.815 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.815 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:23:19.815 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:23:19.815 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.815 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.815 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.815 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:23:19.815 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:23:19.815 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.815 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.815 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.815 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:23:19.815 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:23:19.815 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.815 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.815 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.815 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:23:19.815 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:23:19.815 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.815 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.815 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.815 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:23:19.815 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:23:19.815 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.815 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.815 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.815 20:21:14 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:23:19.815 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:23:19.815 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.815 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.815 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.815 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:23:19.816 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:23:19.816 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.816 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.816 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.816 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:23:19.816 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:23:19.816 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.816 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.816 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.816 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:23:19.816 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:23:19.816 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.816 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.816 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:23:19.816 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:23:19.816 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:23:19.816 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.816 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.816 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:23:19.816 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:23:19.816 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:23:19.816 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.816 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.816 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.816 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:23:19.816 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:23:19.816 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.816 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.816 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:23:19.816 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:23:19.816 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:23:19.816 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.816 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.816 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:23:19.816 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:23:19.816 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:23:19.816 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.816 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.816 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.816 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:23:19.816 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:23:19.816 20:21:14 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:23:19.816 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.816 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.816 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:23:19.816 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:23:19.816 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.816 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.816 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:23:19.816 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:23:19.816 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:23:19.816 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.816 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.816 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.816 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:23:19.816 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:23:19.816 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.816 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.816 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.816 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:23:19.816 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:23:19.816 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.816 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.816 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.816 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:23:19.816 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:23:19.816 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.816 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.816 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.816 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:23:19.816 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:23:19.816 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.816 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.816 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.816 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:23:19.816 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:23:19.816 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.816 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.816 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:23:19.816 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:23:19.816 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:23:19.816 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.816 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.816 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:23:19.816 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:23:19.816 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:23:19.816 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.816 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.816 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.816 20:21:14 
nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:23:19.816 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:23:19.816 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.816 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.816 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.816 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:23:19.816 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:23:19.816 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.816 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.816 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.816 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:23:19.816 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:23:19.816 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.816 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.816 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:23:19.816 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:23:19.816 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:23:19.816 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.816 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.816 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.816 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:23:19.816 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:23:19.816 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.816 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.816 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.816 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:23:19.816 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:23:19.816 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.816 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.816 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.816 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:23:19.816 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:23:19.816 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.816 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.816 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.816 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:23:19.816 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:23:19.816 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.816 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.816 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.816 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:23:19.816 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:23:19.816 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.816 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.816 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.816 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:23:19.816 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme1[ofcs]=0 00:23:19.816 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.816 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.816 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:23:19.816 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:23:19.816 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:23:19.816 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.816 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.816 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:23:19.816 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:23:19.816 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:23:19.816 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.816 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.816 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:23:19.816 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:23:19.816 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:23:19.816 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.816 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.816 20:21:14 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:23:19.816 20:21:14 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:23:19.816 20:21:14 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:23:19.816 20:21:14 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:23:19.817 20:21:14 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:23:19.817 20:21:14 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:23:19.817 20:21:14 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:23:19.817 20:21:14 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:23:19.817 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.817 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.817 20:21:14 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:23:19.817 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:23:19.817 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.817 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.817 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:23:19.817 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:23:19.817 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:23:19.817 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.817 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.817 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:23:19.817 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:23:19.817 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:23:19.817 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.817 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.817 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 
0x17a17a ]] 00:23:19.817 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:23:19.817 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:23:19.817 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.817 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.817 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:23:19.817 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:23:19.817 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:23:19.817 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.817 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.817 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:23:19.817 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:23:19.817 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:23:19.817 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.817 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.817 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:23:19.817 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:23:19.817 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:23:19.817 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.817 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.817 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:23:19.817 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:23:19.817 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:23:19.817 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.817 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.817 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:23:19.817 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:23:19.817 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:23:19.817 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.817 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.817 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.817 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:23:19.817 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:23:19.817 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.817 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.817 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.817 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:23:19.817 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:23:19.817 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.817 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.817 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.817 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:23:19.817 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:23:19.817 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.817 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.817 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.817 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:23:19.817 20:21:14 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:23:19.817 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.817 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.817 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:23:19.817 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:23:19.817 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:23:19.817 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.817 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.817 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.817 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:23:19.817 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:23:19.817 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.817 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.817 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.817 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:23:19.817 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:23:19.817 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.817 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.817 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.817 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:23:19.817 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:23:19.817 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.817 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.817 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.817 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:23:19.817 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:23:19.817 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.817 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.817 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.817 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:23:19.817 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:23:19.817 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.817 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.817 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.817 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:23:19.817 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:23:19.817 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.817 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.817 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.817 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:23:19.817 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:23:19.817 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.817 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.817 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.817 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:23:19.817 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:23:19.817 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.817 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- 
# read -r reg val 00:23:19.817 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.817 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:23:19.817 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:23:19.817 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.817 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.817 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.817 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:23:19.817 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:23:19.817 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.817 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.817 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.817 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:23:19.817 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:23:19.817 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.817 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.817 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.817 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:23:19.817 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:23:19.817 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.817 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.817 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.817 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:23:19.817 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:23:19.817 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.817 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.817 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:23:19.817 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:23:19.817 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:23:19.817 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.817 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.817 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:23:19.817 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:23:19.817 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:23:19.817 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.817 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.817 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:23:19.817 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:23:19.817 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:23:19.817 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.817 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.817 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.817 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:23:19.817 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:23:19.817 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.818 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.818 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.818 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme1n1[anagrpid]="0"' 00:23:19.818 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:23:19.818 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.818 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.818 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.818 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:23:19.818 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:23:19.818 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.818 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.818 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.818 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:23:19.818 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:23:19.818 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.818 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.818 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.818 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:23:19.818 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:23:19.818 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.818 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.818 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:23:19.818 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:23:19.818 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:23:19.818 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.818 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.818 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:23:19.818 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:23:19.818 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:23:19.818 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.818 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.818 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:23:19.818 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:23:19.818 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:23:19.818 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.818 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.818 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:23:19.818 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:23:19.818 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:23:19.818 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.818 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.818 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:23:19.818 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:23:19.818 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:23:19.818 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.818 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r 
reg val 00:23:19.818 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:23:19.818 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:23:19.818 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:23:19.818 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.818 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.818 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:23:19.818 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:23:19.818 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:23:19.818 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.818 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.818 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:23:19.818 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:23:19.818 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:23:19.818 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.818 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.818 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:23:19.818 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:23:19.818 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:23:19.818 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.818 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.818 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:23:19.818 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:23:19.818 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:23:19.818 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.818 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.818 20:21:14 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:23:19.818 20:21:14 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:23:19.818 20:21:14 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:23:19.818 20:21:14 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:23:19.818 20:21:14 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:23:19.818 20:21:14 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:23:19.818 20:21:14 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:23:19.818 20:21:14 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:23:19.818 20:21:14 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:23:19.818 20:21:14 nvme_fdp -- scripts/common.sh@18 -- # local i 00:23:19.818 20:21:14 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:23:19.818 20:21:14 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:23:19.818 20:21:14 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:23:19.818 20:21:14 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:23:19.818 20:21:14 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:23:19.818 20:21:14 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:23:19.818 
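
Up to this point the trace shows nvme/functions.sh completing its sweep of the first controller: every id-ctrl register of nvme1 and every id-ns register of its namespace nvme1n1 has been captured, the namespace linked into nvme1_ns, and the controller recorded in the ctrls, nvmes, bdfs and ordered_ctrls maps, before the loop advances to nvme2 at PCI address 0000:00:12.0. Below is a minimal sketch of that enumeration loop, reconstructed only from the functions.sh@47-63 references visible in this trace; the function name is hypothetical, the PCI-address derivation is an assumption, pci_can_use comes from scripts/common.sh as the trace shows, and the real script is more thorough:

    # Hypothetical reconstruction of the controller sweep traced above
    # (nvme/functions.sh@47-63); names follow the trace, details are elided.
    declare -gA ctrls nvmes bdfs      # global maps filled per controller
    declare -ga ordered_ctrls
    enumerate_nvme_ctrls() {
        local ctrl ns pci ctrl_dev ns_dev
        for ctrl in /sys/class/nvme/nvme*; do                # @47
            [[ -e $ctrl ]] || continue                       # @48
            pci=$(basename "$(readlink -f "$ctrl/device")")  # @49: BDF; derivation assumed
            pci_can_use "$pci" || continue                   # @50: scripts/common.sh filter
            ctrl_dev=${ctrl##*/}                             # @51: e.g. nvme2
            nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"    # @52: fills nvme2[...] (sketched later)
            local -n _ctrl_ns=${ctrl_dev}_ns                 # @53: per-controller namespace map
            for ns in "$ctrl/${ctrl##*/}n"*; do              # @54
                [[ -e $ns ]] || continue                     # @55
                ns_dev=${ns##*/}                             # @56: e.g. nvme2n1
                nvme_get "$ns_dev" id-ns "/dev/$ns_dev"      # @57: fills nvme2n1[...]
                _ctrl_ns[${ns##*n}]=$ns_dev                  # @58
            done
            ctrls["$ctrl_dev"]=$ctrl_dev                     # @60
            nvmes["$ctrl_dev"]=${ctrl_dev}_ns                # @61
            bdfs["$ctrl_dev"]=$pci                           # @62
            ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev       # @63
        done
    }

The nameref _ctrl_ns is what lets the same loop body fill a differently named per-controller array (nvme1_ns, nvme2_ns, ...) on each iteration, which is why the trace repeats the whole register dump verbatim per device.
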
20:21:14 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:23:19.818 20:21:14 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:23:19.818 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.818 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.818 20:21:14 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:23:19.818 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:23:19.818 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.818 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.818 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:23:19.818 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:23:19.818 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:23:19.818 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.818 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.818 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:23:19.818 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:23:19.818 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:23:19.818 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.818 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.818 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:23:19.818 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:23:19.818 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:23:19.818 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.818 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.818 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:23:19.818 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:23:19.818 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:23:19.818 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.818 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.818 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:23:19.818 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:23:19.818 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:23:19.818 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.818 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.818 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:23:19.818 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:23:19.818 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:23:19.818 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.818 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.818 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:23:19.819 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:23:19.819 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:23:19.819 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.819 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.819 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.819 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:23:19.819 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:23:19.819 20:21:14 nvme_fdp 
-- nvme/functions.sh@21 -- # IFS=: 00:23:19.819 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.819 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:23:19.819 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:23:19.819 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:23:19.819 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.819 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.819 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.819 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:23:19.819 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:23:19.819 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.819 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.819 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:23:19.819 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:23:19.819 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:23:19.819 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.819 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.819 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.819 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:23:19.819 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:23:19.819 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.819 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.819 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.819 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:23:19.819 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:23:19.819 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.819 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.819 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:23:19.819 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:23:19.819 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:23:19.819 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.819 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.819 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:23:19.819 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:23:19.819 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:23:19.819 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.819 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.819 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.819 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:23:19.819 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:23:19.819 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.819 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.819 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:23:19.819 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:23:19.819 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:23:19.819 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.819 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.819 20:21:14 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:23:19.819 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:23:19.819 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:23:19.819 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.819 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.819 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.819 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:23:19.819 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:23:19.819 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.819 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.819 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.819 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:23:19.819 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:23:19.819 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.819 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.819 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.819 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:23:19.819 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:23:19.819 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.819 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.819 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.819 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:23:19.819 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:23:19.819 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.819 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.819 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.819 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:23:19.819 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:23:19.819 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.819 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.819 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.819 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:23:19.819 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:23:19.819 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.819 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.819 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:23:19.819 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:23:19.819 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:23:19.819 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.819 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.819 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:23:19.819 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:23:19.819 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:23:19.819 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.819 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.819 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:23:19.819 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2[aerl]="3"' 00:23:19.819 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:23:19.819 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.819 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.819 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:23:19.819 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:23:19.819 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:23:19.819 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.819 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.819 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:23:19.819 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:23:19.819 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:23:19.819 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.819 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.819 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.819 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:23:19.819 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:23:19.819 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.819 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.819 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.819 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:23:19.819 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:23:19.819 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.819 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.819 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.819 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:23:19.819 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:23:19.819 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.819 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.819 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.819 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:23:19.819 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:23:19.819 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.819 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.819 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:23:19.819 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:23:19.819 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:23:19.819 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.819 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.819 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:23:19.819 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:23:19.819 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:23:19.819 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.819 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.819 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.819 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:23:19.819 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:23:19.819 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.819 20:21:14 nvme_fdp 
-- nvme/functions.sh@21 -- # read -r reg val 00:23:19.819 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.819 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:23:19.819 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:23:19.819 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.819 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.819 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.819 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:23:19.819 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:23:19.819 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.820 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.820 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.820 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:23:19.820 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:23:19.820 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.820 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.820 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.820 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:23:19.820 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:23:19.820 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.820 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.820 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.820 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:23:19.820 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:23:19.820 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.820 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.820 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.820 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:23:19.820 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:23:19.820 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.820 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.820 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.820 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:23:19.820 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:23:19.820 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.820 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.820 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.820 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:23:19.820 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:23:19.820 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.820 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.820 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.820 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:23:19.820 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:23:19.820 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.820 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.820 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.820 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 
00:23:19.820 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:23:19.820 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.820 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.820 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.820 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:23:19.820 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:23:19.820 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.820 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.820 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.820 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:23:19.820 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:23:19.820 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.820 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.820 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.820 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:23:19.820 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:23:19.820 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.820 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.820 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.820 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:23:19.820 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:23:19.820 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.820 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.820 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.820 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:23:19.820 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:23:19.820 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.820 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.820 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.820 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:23:19.820 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:23:19.820 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.820 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.820 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.820 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:23:19.820 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:23:19.820 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.820 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.820 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.820 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:23:19.820 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:23:19.820 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.820 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.820 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.820 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:23:19.820 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:23:19.820 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.820 20:21:14 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:23:19.820 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.820 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:23:19.820 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:23:19.820 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.820 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.820 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.820 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:23:19.820 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:23:19.820 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.820 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.820 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.820 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:23:19.820 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:23:19.820 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.820 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.820 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.820 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:23:19.820 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:23:19.820 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.820 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.820 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.820 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:23:19.820 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:23:19.820 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.820 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.820 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:23:19.820 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:23:19.820 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:23:19.820 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.820 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.820 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:23:19.820 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:23:19.820 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:23:19.820 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.820 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.820 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.820 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:23:19.820 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:23:19.820 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.820 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.820 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:23:19.820 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:23:19.820 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:23:19.820 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.820 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.820 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:23:19.820 20:21:14 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:23:19.820 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:23:19.820 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.820 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.820 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.820 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:23:19.820 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:23:19.820 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.820 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.820 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.820 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:23:19.820 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:23:19.820 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.820 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.820 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:23:19.820 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:23:19.820 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:23:19.820 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.820 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.820 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.820 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:23:19.820 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:23:19.820 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.820 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.820 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.820 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:23:19.820 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:23:19.820 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.820 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.821 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.821 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:23:19.821 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:23:19.821 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.821 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.821 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.821 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:23:19.821 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:23:19.821 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.821 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.821 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.821 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:23:19.821 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:23:19.821 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.821 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.821 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:23:19.821 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:23:19.821 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:23:19.821 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 
00:23:19.821 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.821 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:23:19.821 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:23:19.821 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:23:19.821 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.821 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.821 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.821 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:23:19.821 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:23:19.821 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.821 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.821 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.821 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:23:19.821 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:23:19.821 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.821 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.821 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.821 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:23:19.821 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:23:19.821 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.821 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.821 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:23:19.821 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:23:19.821 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:23:19.821 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.821 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.821 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.821 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:23:19.821 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:23:19.821 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.821 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.821 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.821 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:23:19.821 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:23:19.821 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.821 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.821 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.821 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:23:19.821 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:23:19.821 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.821 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.821 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.821 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:23:19.821 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:23:19.821 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.821 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.821 20:21:14 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.821 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:23:19.821 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:23:19.821 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.821 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.821 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.821 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:23:19.821 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:23:19.821 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.821 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.821 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:23:19.821 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:23:19.821 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:23:19.821 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.821 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.821 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:23:19.821 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:23:19.821 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:23:19.821 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.821 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.821 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:23:19.821 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:23:19.821 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:23:19.821 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.821 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.821 20:21:14 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:23:19.821 20:21:14 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:23:19.821 20:21:14 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:23:19.821 20:21:14 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:23:19.821 20:21:14 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:23:19.821 20:21:14 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:23:19.821 20:21:14 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:23:19.821 20:21:14 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:23:19.821 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.821 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.821 20:21:14 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:23:19.821 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:23:19.821 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.821 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.821 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:23:19.821 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:23:19.821 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:23:19.821 20:21:14 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:23:19.821 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.821 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:23:19.821 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:23:19.821 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:23:19.821 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.821 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.821 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:23:19.821 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:23:19.821 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:23:19.821 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.821 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.821 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:23:19.821 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:23:19.821 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:23:19.821 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.821 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.821 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:23:19.821 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:23:19.821 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:23:19.821 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.821 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.821 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:23:19.821 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:23:19.821 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:23:19.821 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.821 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.821 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:23:19.821 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:23:19.821 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:23:19.821 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.821 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.821 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:23:19.821 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:23:19.821 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:23:19.821 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.821 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.821 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.821 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:23:19.821 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:23:19.821 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.821 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.821 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.821 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:23:19.821 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:23:19.821 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.821 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
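These id-ns values pin down the namespace geometry: flbas=0x4 selects LBA format 4, and the lbaf4 entry captured a little further on reports lbads:12, i.e. 2^12 = 4096-byte blocks. A quick illustrative check against the traced numbers (the array literal is hand-copied from the trace, not produced by the script):

# Illustrative geometry check from the captured nvme2n1 fields.
declare -A nvme2n1=( [nsze]=0x100000 [flbas]=0x4 [lbaf4]='ms:0 lbads:12 rp:0 (in use)' )
fmt=$(( nvme2n1[flbas] & 0xf ))            # low nibble of flbas selects the format: 4
lbaf=${nvme2n1[lbaf$fmt]}                  # "ms:0 lbads:12 rp:0 (in use)"
lbads=${lbaf#*lbads:}; lbads=${lbads%% *}  # -> 12
echo "block size: $(( 1 << lbads )) bytes"                        # 4096
echo "capacity:   $(( nvme2n1[nsze] * (1 << lbads) >> 30 )) GiB"  # 0x100000 blocks -> 4 GiB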
00:23:19.821 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.822 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:23:19.822 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:23:19.822 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.822 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.822 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.822 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:23:19.822 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:23:19.822 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.822 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.822 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:23:19.822 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:23:19.822 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:23:19.822 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.822 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.822 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.822 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:23:19.822 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:23:19.822 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.822 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.822 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.822 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:23:19.822 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:23:19.822 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.822 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.822 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.822 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:23:19.822 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:23:19.822 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.822 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.822 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.822 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:23:19.822 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:23:19.822 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.822 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.822 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.822 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:23:19.822 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:23:19.822 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.822 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.822 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.822 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:23:19.822 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:23:19.822 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.822 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.822 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.822 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 
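This per-field capture runs once per namespace; the walk that drives it (functions.sh@53-58 in the trace, repeated below for nvme2n2 and nvme2n3) condenses to roughly the following — the glob and name handling are paraphrased from the trace, not quoted from the script:

# Condensed namespace walk (nvme/functions.sh@53-58 as traced).
declare -A nvme2_ns=()
declare -n _ctrl_ns=nvme2_ns                 # functions.sh@53: nameref to the table
for ns in /sys/class/nvme/nvme2/nvme2n*; do  # functions.sh@54
    [[ -e $ns ]] || continue                 # functions.sh@55
    ns_dev=${ns##*/}                         # functions.sh@56: nvme2n1, nvme2n2, ...
    nvme_get "$ns_dev" id-ns "/dev/$ns_dev"  # functions.sh@57 (the script's own helper)
    _ctrl_ns[${ns_dev##*n}]=$ns_dev          # functions.sh@58: _ctrl_ns[1]=nvme2n1
done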
00:23:19.822 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:23:19.822 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.822 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.822 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.822 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:23:19.822 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:23:19.822 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.822 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.822 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.822 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:23:19.822 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:23:19.822 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.822 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.822 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.822 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:23:19.822 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:23:19.822 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.822 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.822 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.822 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:23:19.822 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:23:19.822 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.822 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.822 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.822 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:23:19.822 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:23:19.822 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.822 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.822 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.822 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:23:19.822 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:23:19.822 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.822 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.822 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:23:19.822 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:23:19.822 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:23:19.822 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.822 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.822 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:23:19.822 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:23:19.822 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:23:19.822 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.822 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.822 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:23:19.822 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:23:19.822 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:23:19.822 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.822 20:21:14 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.822 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.822 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:23:19.822 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:23:19.822 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.822 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.822 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.822 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:23:19.822 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:23:19.822 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.822 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.822 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.822 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:23:19.822 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:23:19.822 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.822 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.822 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.822 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:23:19.822 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:23:19.822 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.822 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.822 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.822 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:23:19.822 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:23:19.822 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.822 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.822 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:23:19.822 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:23:19.822 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:23:19.822 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.822 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.822 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:23:19.822 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:23:19.822 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:23:19.822 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.822 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.822 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:23:19.822 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:23:19.822 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:23:19.822 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.822 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.822 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:23:19.822 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:23:19.822 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 
' 00:23:19.822 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.822 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.822 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:23:19.822 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:23:19.822 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:23:19.822 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.822 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.822 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:23:19.822 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:23:19.822 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:23:19.822 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.822 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.822 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:23:19.822 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:23:19.822 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:23:19.822 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.822 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.822 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:23:19.823 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:23:19.823 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:23:19.823 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.823 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.823 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:23:19.823 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:23:19.823 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:23:19.823 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.823 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.823 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:23:19.823 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:23:19.823 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:23:19.823 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.823 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.823 20:21:14 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:23:19.823 20:21:14 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:23:19.823 20:21:14 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:23:19.823 20:21:14 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:23:19.823 20:21:14 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:23:19.823 20:21:14 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:23:19.823 20:21:14 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:23:19.823 20:21:14 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:23:19.823 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.823 20:21:14 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:23:19.823 20:21:14 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:23:19.823 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:23:19.823 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.823 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.823 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:23:19.823 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:23:19.823 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:23:19.823 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.823 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.823 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:23:19.823 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:23:19.823 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:23:19.823 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.823 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.823 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:23:19.823 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:23:19.823 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:23:19.823 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.823 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.823 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:23:19.823 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:23:19.823 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:23:19.823 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.823 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.823 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:23:19.823 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:23:19.823 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:23:19.823 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.823 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.823 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:23:19.823 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:23:19.823 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:23:19.823 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.823 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.823 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:23:19.823 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:23:19.823 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:23:19.823 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.823 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.823 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:23:19.823 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:23:19.823 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:23:19.823 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.823 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.823 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.823 20:21:14 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:23:19.823 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:23:19.823 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.823 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.823 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.823 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:23:19.823 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:23:19.823 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.823 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.823 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.823 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:23:19.823 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:23:19.823 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.823 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.823 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.823 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:23:19.823 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:23:19.823 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.823 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.823 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:23:19.823 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:23:19.823 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:23:19.823 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.823 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.823 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.823 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:23:19.823 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:23:19.823 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.823 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.823 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.823 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:23:19.823 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:23:19.823 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.823 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.823 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.823 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:23:19.823 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:23:19.823 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.823 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.823 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.823 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:23:19.823 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:23:19.823 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.823 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.823 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.823 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:23:19.823 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:23:19.823 20:21:14 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:23:19.823 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.823 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.823 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:23:19.823 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:23:19.823 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.823 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.823 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.823 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:23:19.823 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:23:19.823 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.823 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.823 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.823 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:23:19.823 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:23:19.823 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.823 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.823 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.823 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:23:19.823 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:23:19.824 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.824 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.824 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.824 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:23:19.824 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:23:19.824 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.824 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.824 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.824 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:23:19.824 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:23:19.824 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.824 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.824 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.824 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:23:19.824 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:23:19.824 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.824 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.824 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.824 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:23:19.824 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:23:19.824 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.824 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.824 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:23:19.824 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:23:19.824 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:23:19.824 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.824 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.824 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ 
-n 128 ]] 00:23:19.824 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:23:19.824 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:23:19.824 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.824 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.824 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:23:19.824 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:23:19.824 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:23:19.824 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.824 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.824 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.824 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:23:19.824 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:23:19.824 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.824 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.824 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.824 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:23:19.824 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:23:19.824 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.824 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.824 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.824 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:23:19.824 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:23:19.824 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.824 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.824 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.824 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:23:19.824 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:23:19.824 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.824 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.824 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.824 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:23:19.824 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:23:19.824 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.824 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.824 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:23:19.824 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:23:19.824 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:23:19.824 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.824 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.824 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:23:19.824 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:23:19.824 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:23:19.824 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.824 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.824 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 
lbads:9 rp:0 ]] 00:23:19.824 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:23:19.824 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:23:19.824 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.824 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.824 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:23:19.824 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:23:19.824 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:23:19.824 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.824 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.824 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:23:19.824 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:23:19.824 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:23:19.824 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.824 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.824 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:23:19.824 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:23:19.824 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:23:19.824 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.824 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.824 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:23:19.824 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:23:19.824 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:23:19.824 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.824 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.824 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:23:19.824 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:23:19.824 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:23:19.824 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.824 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.824 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:23:19.824 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:23:19.824 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:23:19.824 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.824 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.824 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:23:19.824 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:23:19.824 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:23:19.824 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.824 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.824 20:21:14 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:23:19.824 20:21:14 nvme_fdp -- nvme/functions.sh@54 -- # for 
ns in "$ctrl/${ctrl##*/}n"* 00:23:19.824 20:21:14 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:23:19.824 20:21:14 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:23:19.824 20:21:14 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:23:19.824 20:21:14 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:23:19.824 20:21:14 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:23:19.824 20:21:14 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:23:19.824 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.824 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.824 20:21:14 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:23:19.824 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:23:19.824 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.824 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.824 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:23:19.824 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:23:19.824 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:23:19.824 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.824 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.824 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:23:19.824 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:23:19.824 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:23:19.824 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.824 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.824 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:23:19.824 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:23:19.824 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:23:19.824 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.824 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.824 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:23:19.824 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:23:19.824 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:23:19.824 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.824 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.824 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:23:19.824 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:23:19.824 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:23:19.824 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.824 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.825 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:23:19.825 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:23:19.825 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:23:19.825 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.825 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.825 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:23:19.825 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:23:19.825 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2n3[mc]=0x3 00:23:19.825 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.825 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.825 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:23:19.825 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:23:19.825 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:23:19.825 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.825 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.825 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.825 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:23:19.825 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:23:19.825 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.825 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.825 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.825 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:23:19.825 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:23:19.825 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.825 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.825 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.825 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:23:19.825 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:23:19.825 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.825 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.825 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.825 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:23:19.825 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:23:19.825 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.825 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.825 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:23:19.825 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:23:19.825 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:23:19.825 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.825 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.825 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.825 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:23:19.825 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:23:19.825 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.825 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.825 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.825 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:23:19.825 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:23:19.825 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.825 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.825 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.825 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:23:19.825 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:23:19.825 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.825 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
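Once filled, these tables are read back like a register file elsewhere in the suite; a few illustrative lookups against the nvme2n3 values above (the dpc/nmic bit meanings follow the NVMe spec, not anything in the script):

# Illustrative reads of the captured nvme2n3 table.
echo "LBA formats: $(( nvme2n3[nlbaf] + 1 ))"   # nlbaf=7 is zero-based -> lbaf0..lbaf7
echo "dpc: ${nvme2n3[dpc]}"                     # 0x1f: PI types 1-3, first+last PI bytes
if (( nvme2n3[nmic] & 0x1 )); then              # nmic bit 0: multi-controller sharing
    echo "namespace may attach to multiple controllers"
else
    echo "namespace is private"                 # nmic=0 in this run
fi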
00:23:19.825 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.825 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:23:19.825 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:23:19.825 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.825 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.825 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.825 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:23:19.825 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:23:19.825 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.825 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.825 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.825 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:23:19.825 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:23:19.825 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.825 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.825 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.825 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:23:19.825 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:23:19.825 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.825 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.825 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.825 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:23:19.825 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:23:19.825 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.825 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.825 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.825 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:23:19.825 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:23:19.825 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.825 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.825 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.825 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:23:19.825 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:23:19.825 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.825 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.825 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.825 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:23:19.825 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:23:19.825 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.825 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.825 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.825 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:23:19.825 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:23:19.825 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.825 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.825 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.825 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:23:19.825 
20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:23:19.825 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.825 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.825 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:23:19.825 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:23:19.825 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:23:19.825 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.825 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.825 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:23:19.825 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:23:19.825 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:23:19.825 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.825 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.825 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:23:19.825 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:23:19.825 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:23:19.825 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.825 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.825 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.825 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:23:19.825 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:23:19.825 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.825 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.825 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.825 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:23:19.825 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:23:19.825 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.825 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.825 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.825 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:23:19.825 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:23:19.825 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.825 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.825 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.825 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:23:19.825 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:23:19.825 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.825 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.825 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.825 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:23:19.825 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:23:19.825 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.825 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.825 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:23:19.825 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:23:19.825 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2n3[nguid]=00000000000000000000000000000000 00:23:19.825 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.825 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.825 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:23:19.825 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:23:19.825 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:23:19.825 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.825 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.825 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:23:19.825 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:23:19.826 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:23:19.826 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.826 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.826 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:23:19.826 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:23:19.826 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:23:19.826 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.826 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.826 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:23:19.826 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:23:19.826 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:23:19.826 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.826 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.826 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:23:19.826 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:23:19.826 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:23:19.826 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.826 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.826 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:23:19.826 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:23:19.826 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:23:19.826 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.826 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.826 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:23:19.826 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:23:19.826 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:23:19.826 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.826 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.826 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:23:19.826 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:23:19.826 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:23:19.826 20:21:14 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:23:19.826 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.826 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:23:19.826 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:23:19.826 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:23:19.826 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.826 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.826 20:21:14 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:23:19.826 20:21:14 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:23:19.826 20:21:14 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:23:19.826 20:21:14 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:23:19.826 20:21:14 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:23:19.826 20:21:14 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:23:19.826 20:21:14 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:23:19.826 20:21:14 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:23:19.826 20:21:14 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:23:19.826 20:21:14 nvme_fdp -- scripts/common.sh@18 -- # local i 00:23:19.826 20:21:14 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:23:19.826 20:21:14 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:23:19.826 20:21:14 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:23:19.826 20:21:14 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:23:19.826 20:21:14 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:23:19.826 20:21:14 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:23:19.826 20:21:14 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:23:19.826 20:21:14 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:23:19.826 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.826 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.826 20:21:14 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:23:19.826 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:23:19.826 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.826 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.826 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:23:19.826 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:23:19.826 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:23:19.826 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.826 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.826 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:23:19.826 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:23:19.826 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:23:19.826 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.826 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.826 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:23:19.826 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:23:19.826 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:23:19.826 20:21:14 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:23:19.826 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.826 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:23:19.826 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:23:19.826 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:23:19.826 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.826 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.826 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:23:19.826 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:23:19.826 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:23:19.826 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.826 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.826 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:23:19.826 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:23:19.826 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:23:19.826 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.826 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.826 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:23:19.826 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:23:19.826 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:23:19.826 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.826 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.826 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:23:19.826 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:23:19.826 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:23:19.826 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.826 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.826 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:23:19.826 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:23:19.826 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:23:19.826 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.826 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.826 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.826 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:23:19.826 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:23:19.826 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.826 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.826 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:23:19.826 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:23:19.826 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:23:19.826 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.826 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.826 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.826 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:23:19.826 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:23:19.826 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.826 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.826 20:21:14 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.826 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:23:19.826 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:23:19.826 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.826 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.826 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:23:19.826 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:23:19.826 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:23:19.826 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.826 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.826 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:23:19.826 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:23:19.826 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:23:19.826 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.826 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.826 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.826 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:23:19.826 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:23:19.826 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.826 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.826 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:23:19.826 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:23:19.826 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:23:19.826 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.826 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.826 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:23:19.826 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:23:19.826 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:23:19.826 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.827 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.827 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.827 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:23:19.827 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:23:19.827 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.827 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.827 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.827 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:23:19.827 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:23:19.827 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.827 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.827 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.827 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:23:19.827 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:23:19.827 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.827 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.827 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.827 
20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:23:19.827 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:23:19.827 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.827 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.827 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.827 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:23:19.827 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:23:19.827 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.827 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.827 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.827 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:23:19.827 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:23:19.827 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.827 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.827 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:23:19.827 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:23:19.827 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:23:19.827 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.827 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.827 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:23:19.827 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:23:19.827 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:23:19.827 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.827 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.827 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:23:19.827 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:23:19.827 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:23:19.827 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.827 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.827 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:23:19.827 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:23:19.827 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:23:19.827 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.827 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.827 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:23:19.827 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:23:19.827 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:23:19.827 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.827 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.827 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.827 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:23:19.827 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:23:19.827 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.827 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.827 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.827 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:23:19.827 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:23:19.827 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:23:19.827 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.827 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.827 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:23:19.827 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:23:19.827 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.827 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.827 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.827 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:23:19.827 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:23:19.827 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.827 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.827 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:23:19.827 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:23:19.827 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:23:19.827 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.827 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.827 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:23:19.827 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:23:19.827 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:23:19.827 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.827 20:21:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.827 20:21:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.827 20:21:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:23:19.827 20:21:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:23:19.827 20:21:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.827 20:21:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.827 20:21:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.827 20:21:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:23:19.827 20:21:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:23:19.827 20:21:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.827 20:21:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.827 20:21:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.827 20:21:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:23:19.827 20:21:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:23:19.827 20:21:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.827 20:21:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.827 20:21:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.827 20:21:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:23:19.827 20:21:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:23:19.827 20:21:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.827 20:21:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.827 20:21:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.827 20:21:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:23:19.827 20:21:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:23:19.827 20:21:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.827 20:21:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.827 20:21:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.827 20:21:15 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:23:19.827 20:21:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:23:19.827 20:21:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.827 20:21:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.827 20:21:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.827 20:21:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:23:19.827 20:21:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:23:19.827 20:21:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.827 20:21:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.827 20:21:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.827 20:21:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:23:19.827 20:21:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:23:19.827 20:21:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.827 20:21:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.827 20:21:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.827 20:21:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:23:19.827 20:21:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:23:19.827 20:21:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.827 20:21:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.827 20:21:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.827 20:21:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:23:19.827 20:21:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:23:19.827 20:21:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:19.827 20:21:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:19.827 20:21:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:19.827 20:21:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:23:19.827 20:21:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:23:19.827 20:21:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:20.087 20:21:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:20.087 20:21:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:20.087 20:21:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:23:20.087 20:21:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:23:20.087 20:21:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:20.087 20:21:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:20.087 20:21:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:20.087 20:21:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:23:20.087 20:21:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:23:20.087 20:21:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:20.087 20:21:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:20.087 20:21:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:20.087 20:21:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:23:20.087 20:21:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:23:20.087 20:21:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:20.087 20:21:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:20.087 20:21:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:20.087 20:21:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:23:20.087 20:21:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:23:20.087 20:21:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:20.087 
20:21:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:20.087 20:21:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:20.087 20:21:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:23:20.087 20:21:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:23:20.087 20:21:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:20.087 20:21:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:20.087 20:21:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:20.087 20:21:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:23:20.087 20:21:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:23:20.087 20:21:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:20.087 20:21:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:20.087 20:21:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:23:20.087 20:21:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:23:20.087 20:21:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:23:20.087 20:21:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:20.087 20:21:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:20.087 20:21:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:20.087 20:21:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:23:20.087 20:21:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:23:20.087 20:21:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:20.087 20:21:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:20.087 20:21:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:20.087 20:21:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:23:20.087 20:21:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:23:20.087 20:21:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:20.087 20:21:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:20.087 20:21:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:20.087 20:21:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:23:20.087 20:21:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:23:20.087 20:21:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:20.087 20:21:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:20.087 20:21:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:20.087 20:21:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:23:20.087 20:21:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:23:20.087 20:21:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:20.087 20:21:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:20.087 20:21:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:20.087 20:21:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:23:20.087 20:21:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:23:20.087 20:21:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:20.087 20:21:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:20.087 20:21:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:20.087 20:21:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:23:20.087 20:21:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:23:20.087 20:21:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:20.087 20:21:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:20.087 20:21:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:20.087 20:21:15 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:23:20.087 20:21:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:23:20.087 20:21:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:20.087 20:21:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:20.087 20:21:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:23:20.087 20:21:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:23:20.087 20:21:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:23:20.087 20:21:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:20.087 20:21:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:20.087 20:21:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:23:20.087 20:21:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:23:20.087 20:21:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:23:20.087 20:21:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:20.087 20:21:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:20.087 20:21:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:20.087 20:21:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:23:20.087 20:21:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:23:20.087 20:21:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:20.087 20:21:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:20.087 20:21:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:23:20.087 20:21:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:23:20.087 20:21:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:23:20.087 20:21:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:20.087 20:21:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:20.087 20:21:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:23:20.087 20:21:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:23:20.088 20:21:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:23:20.088 20:21:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:20.088 20:21:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:20.088 20:21:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:20.088 20:21:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:23:20.088 20:21:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:23:20.088 20:21:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:20.088 20:21:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:20.088 20:21:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:20.088 20:21:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:23:20.088 20:21:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:23:20.088 20:21:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:20.088 20:21:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:20.088 20:21:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:23:20.088 20:21:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:23:20.088 20:21:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:23:20.088 20:21:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:20.088 20:21:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:20.088 20:21:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:20.088 20:21:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:23:20.088 20:21:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:23:20.088 20:21:15 nvme_fdp -- nvme/functions.sh@21 
-- # IFS=: 00:23:20.088 20:21:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:20.088 20:21:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:20.088 20:21:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:23:20.088 20:21:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:23:20.088 20:21:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:20.088 20:21:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:20.088 20:21:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:20.088 20:21:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:23:20.088 20:21:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:23:20.088 20:21:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:20.088 20:21:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:20.088 20:21:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:20.088 20:21:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:23:20.088 20:21:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:23:20.088 20:21:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:20.088 20:21:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:20.088 20:21:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:20.088 20:21:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:23:20.088 20:21:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:23:20.088 20:21:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:20.088 20:21:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:20.088 20:21:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:23:20.088 20:21:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:23:20.088 20:21:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:23:20.088 20:21:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:20.088 20:21:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:20.088 20:21:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:23:20.088 20:21:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:23:20.088 20:21:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:23:20.088 20:21:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:20.088 20:21:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:20.088 20:21:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:20.088 20:21:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:23:20.088 20:21:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:23:20.088 20:21:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:20.088 20:21:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:20.088 20:21:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:20.088 20:21:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:23:20.088 20:21:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:23:20.088 20:21:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:20.088 20:21:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:20.088 20:21:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:20.088 20:21:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:23:20.088 20:21:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:23:20.088 20:21:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:20.088 20:21:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:20.088 20:21:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 
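
The stretch of trace around this point is a single loop: nvme_get runs /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3, splits every output line at the first ':' with IFS, and eval's each register into the nvme3 associative array. A minimal standalone sketch of that pattern, assuming nvme-cli is installed and /dev/nvme3 exists (the real nvme/functions.sh eval's into dynamically named arrays and preserves the raw value spacing):

    #!/usr/bin/env bash
    # Condensed sketch of the nvme_get parsing loop from nvme/functions.sh.
    declare -A nvme3=()
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}        # keys such as vid, ctratt, subnqn
        val=${val# }                    # drop the space printed after ':'
        [[ -n $reg && -n $val ]] && nvme3[$reg]=$val
    done < <(nvme id-ctrl /dev/nvme3)
    echo "ctratt=${nvme3[ctratt]}"      # 0x88010 for the controller traced here
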
00:23:20.088 20:21:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:23:20.088 20:21:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:23:20.088 20:21:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:20.088 20:21:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:20.088 20:21:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:20.088 20:21:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:23:20.088 20:21:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:23:20.088 20:21:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:20.088 20:21:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:20.088 20:21:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:20.088 20:21:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:23:20.088 20:21:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:23:20.088 20:21:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:20.088 20:21:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:20.088 20:21:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:20.088 20:21:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:23:20.088 20:21:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:23:20.088 20:21:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:20.088 20:21:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:20.088 20:21:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:20.088 20:21:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:23:20.088 20:21:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:23:20.088 20:21:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:20.088 20:21:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:20.088 20:21:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:20.088 20:21:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:23:20.088 20:21:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:23:20.088 20:21:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:20.088 20:21:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:20.088 20:21:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:20.088 20:21:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:23:20.088 20:21:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:23:20.088 20:21:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:20.088 20:21:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:20.088 20:21:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:23:20.088 20:21:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:23:20.088 20:21:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:23:20.088 20:21:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:20.088 20:21:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:20.088 20:21:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:23:20.088 20:21:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:23:20.088 20:21:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:23:20.088 20:21:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:20.088 20:21:15 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:23:20.088 20:21:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:23:20.088 20:21:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:23:20.088 20:21:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:23:20.088 20:21:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:20.088 20:21:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:20.088 20:21:15 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:23:20.088 20:21:15 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:23:20.088 20:21:15 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:23:20.088 20:21:15 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:23:20.088 20:21:15 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:23:20.088 20:21:15 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:23:20.088 20:21:15 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:23:20.088 20:21:15 nvme_fdp -- nvme/functions.sh@204 -- # local _ctrls feature=fdp 00:23:20.088 20:21:15 nvme_fdp -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:23:20.088 20:21:15 nvme_fdp -- nvme/functions.sh@206 -- # get_ctrls_with_feature fdp 00:23:20.088 20:21:15 nvme_fdp -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:23:20.088 20:21:15 nvme_fdp -- nvme/functions.sh@194 -- # local ctrl feature=fdp 00:23:20.088 20:21:15 nvme_fdp -- nvme/functions.sh@196 -- # type -t ctrl_has_fdp 00:23:20.088 20:21:15 nvme_fdp -- nvme/functions.sh@196 -- # [[ function == function ]] 00:23:20.088 20:21:15 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:23:20.088 20:21:15 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme1 00:23:20.088 20:21:15 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme1 ctratt 00:23:20.088 20:21:15 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme1 00:23:20.088 20:21:15 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme1 00:23:20.088 20:21:15 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme1 ctratt 00:23:20.088 20:21:15 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:23:20.088 20:21:15 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:23:20.088 20:21:15 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:23:20.088 20:21:15 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:23:20.088 20:21:15 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:23:20.088 20:21:15 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:23:20.088 20:21:15 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:23:20.088 20:21:15 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:23:20.088 20:21:15 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme0 00:23:20.088 20:21:15 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme0 ctratt 00:23:20.089 20:21:15 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme0 00:23:20.089 20:21:15 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme0 00:23:20.089 20:21:15 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme0 ctratt 00:23:20.089 20:21:15 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:23:20.089 20:21:15 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:23:20.089 20:21:15 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:23:20.089 20:21:15 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 
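
The ctrl_has_fdp scan that begins here is a one-bit decision: CTRATT bit 19 advertises Flexible Data Placement support, so the controllers reporting ctratt=0x8000 are passed over and nvme3, reporting ctratt=0x88010 (0x88010 & 0x80000 != 0), is selected below. A condensed sketch of that check using the values from this log:

    # ctrl_has_fdp, condensed: succeeds only when CTRATT bit 19 (FDP) is set.
    ctrl_has_fdp() {
        local ctratt=$1
        (( ctratt & 1 << 19 ))
    }
    ctrl_has_fdp 0x8000  && echo "has FDP"   # nvme0/nvme1/nvme2: bit clear, no output
    ctrl_has_fdp 0x88010 && echo "has FDP"   # nvme3: prints "has FDP"
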
00:23:20.089 20:21:15 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:23:20.089 20:21:15 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:23:20.089 20:21:15 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:23:20.089 20:21:15 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:23:20.089 20:21:15 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme3 00:23:20.089 20:21:15 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme3 ctratt 00:23:20.089 20:21:15 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme3 00:23:20.089 20:21:15 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme3 00:23:20.089 20:21:15 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme3 ctratt 00:23:20.089 20:21:15 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:23:20.089 20:21:15 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:23:20.089 20:21:15 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:23:20.089 20:21:15 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:23:20.089 20:21:15 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:23:20.089 20:21:15 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x88010 00:23:20.089 20:21:15 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:23:20.089 20:21:15 nvme_fdp -- nvme/functions.sh@199 -- # echo nvme3 00:23:20.089 20:21:15 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:23:20.089 20:21:15 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme2 00:23:20.089 20:21:15 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme2 ctratt 00:23:20.089 20:21:15 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme2 00:23:20.089 20:21:15 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme2 00:23:20.089 20:21:15 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme2 ctratt 00:23:20.089 20:21:15 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:23:20.089 20:21:15 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:23:20.089 20:21:15 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:23:20.089 20:21:15 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:23:20.089 20:21:15 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:23:20.089 20:21:15 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:23:20.089 20:21:15 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:23:20.089 20:21:15 nvme_fdp -- nvme/functions.sh@207 -- # (( 1 > 0 )) 00:23:20.089 20:21:15 nvme_fdp -- nvme/functions.sh@208 -- # echo nvme3 00:23:20.089 20:21:15 nvme_fdp -- nvme/functions.sh@209 -- # return 0 00:23:20.089 20:21:15 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:23:20.089 20:21:15 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 00:23:20.089 20:21:15 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:23:20.346 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:20.912 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:23:20.912 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:23:20.912 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:23:20.912 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:23:20.912 20:21:15 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:23:20.913 20:21:15 nvme_fdp -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:23:20.913 20:21:15 
nvme_fdp -- common/autotest_common.sh@1107 -- # xtrace_disable
00:23:20.913 20:21:15 nvme_fdp -- common/autotest_common.sh@10 -- # set +x
00:23:20.913 ************************************
00:23:20.913 START TEST nvme_flexible_data_placement
00:23:20.913 ************************************
00:23:20.913 20:21:15 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0'
00:23:21.171 Initializing NVMe Controllers
00:23:21.171 Attaching to 0000:00:13.0
00:23:21.171 Controller supports FDP Attached to 0000:00:13.0
00:23:21.171 Namespace ID: 1 Endurance Group ID: 1
00:23:21.171 Initialization complete.
00:23:21.171
00:23:21.171 ==================================
00:23:21.171 == FDP tests for Namespace: #01 ==
00:23:21.171 ==================================
00:23:21.171
00:23:21.171 Get Feature: FDP:
00:23:21.171 =================
00:23:21.171 Enabled: Yes
00:23:21.171 FDP configuration Index: 0
00:23:21.171
00:23:21.171 FDP configurations log page
00:23:21.171 ===========================
00:23:21.171 Number of FDP configurations: 1
00:23:21.171 Version: 0
00:23:21.171 Size: 112
00:23:21.171 FDP Configuration Descriptor: 0
00:23:21.171 Descriptor Size: 96
00:23:21.171 Reclaim Group Identifier format: 2
00:23:21.171 FDP Volatile Write Cache: Not Present
00:23:21.171 FDP Configuration: Valid
00:23:21.171 Vendor Specific Size: 0
00:23:21.171 Number of Reclaim Groups: 2
00:23:21.171 Number of Reclaim Unit Handles: 8
00:23:21.171 Max Placement Identifiers: 128
00:23:21.171 Number of Namespaces Supported: 256
00:23:21.171 Reclaim Unit Nominal Size: 6000000 bytes
00:23:21.171 Estimated Reclaim Unit Time Limit: Not Reported
00:23:21.171 RUH Desc #000: RUH Type: Initially Isolated
00:23:21.171 RUH Desc #001: RUH Type: Initially Isolated
00:23:21.171 RUH Desc #002: RUH Type: Initially Isolated
00:23:21.171 RUH Desc #003: RUH Type: Initially Isolated
00:23:21.171 RUH Desc #004: RUH Type: Initially Isolated
00:23:21.171 RUH Desc #005: RUH Type: Initially Isolated
00:23:21.171 RUH Desc #006: RUH Type: Initially Isolated
00:23:21.171 RUH Desc #007: RUH Type: Initially Isolated
00:23:21.171
00:23:21.171 FDP reclaim unit handle usage log page
00:23:21.171 ======================================
00:23:21.171 Number of Reclaim Unit Handles: 8
00:23:21.171 RUH Usage Desc #000: RUH Attributes: Controller Specified
00:23:21.171 RUH Usage Desc #001: RUH Attributes: Unused
00:23:21.171 RUH Usage Desc #002: RUH Attributes: Unused
00:23:21.171 RUH Usage Desc #003: RUH Attributes: Unused
00:23:21.171 RUH Usage Desc #004: RUH Attributes: Unused
00:23:21.171 RUH Usage Desc #005: RUH Attributes: Unused
00:23:21.171 RUH Usage Desc #006: RUH Attributes: Unused
00:23:21.171 RUH Usage Desc #007: RUH Attributes: Unused
00:23:21.171
00:23:21.171 FDP statistics log page
00:23:21.171 =======================
00:23:21.171 Host bytes with metadata written: 1038098432
00:23:21.171 Media bytes with metadata written: 1038209024
00:23:21.171 Media bytes erased: 0
00:23:21.171
00:23:21.171 FDP Reclaim unit handle status
00:23:21.171 ==============================
00:23:21.171 Number of RUHS descriptors: 2
00:23:21.171 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x00000000000041fe
00:23:21.171 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000
00:23:21.171
00:23:21.171 FDP write on placement id: 0 success
00:23:21.171
00:23:21.171 Set Feature: Enabling FDP events on Placement handle: #0 Success
00:23:21.171
00:23:21.171 IO mgmt send: RUH update for Placement ID: #0 Success
00:23:21.171
00:23:21.171 Get Feature: FDP Events for Placement handle: #0
00:23:21.171 ========================
00:23:21.171 Number of FDP Events: 6
00:23:21.171 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes
00:23:21.171 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes
00:23:21.171 FDP Event: #2 Type: Ctrlr Reset Modified RUHs Enabled: Yes
00:23:21.171 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes
00:23:21.171 FDP Event: #4 Type: Media Reallocated Enabled: No
00:23:21.171 FDP Event: #5 Type: Implicitly modified RUH Enabled: No
00:23:21.171
00:23:21.171 FDP events log page
00:23:21.171 ===================
00:23:21.171 Number of FDP events: 1
00:23:21.171 FDP Event #0:
00:23:21.171 Event Type: RU Not Written to Capacity
00:23:21.171 Placement Identifier: Valid
00:23:21.171 NSID: Valid
00:23:21.171 Location: Valid
00:23:21.171 Placement Identifier: 0
00:23:21.171 Event Timestamp: 4
00:23:21.171 Namespace Identifier: 1
00:23:21.171 Reclaim Group Identifier: 0
00:23:21.171 Reclaim Unit Handle Identifier: 0
00:23:21.171
00:23:21.171 FDP test passed
************************************
00:23:21.171 END TEST nvme_flexible_data_placement
00:23:21.171 ************************************
00:23:21.171
00:23:21.171 real 0m0.209s
00:23:21.171 user 0m0.055s
00:23:21.171 sys 0m0.053s
00:23:21.171 20:21:16 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1126 -- # xtrace_disable
00:23:21.171 20:21:16 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x
00:23:21.171 ************************************
00:23:21.171 END TEST nvme_fdp
00:23:21.171 ************************************
00:23:21.171
00:23:21.171 real 0m7.319s
00:23:21.171 user 0m0.969s
00:23:21.171 sys 0m1.247s
00:23:21.171 20:21:16 nvme_fdp -- common/autotest_common.sh@1126 -- # xtrace_disable
00:23:21.171 20:21:16 nvme_fdp -- common/autotest_common.sh@10 -- # set +x
00:23:21.171 20:21:16 -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]]
00:23:21.171 20:21:16 -- spdk/autotest.sh@236 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh
00:23:21.171 20:21:16 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:23:21.171 20:21:16 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:23:21.171 20:21:16 -- common/autotest_common.sh@10 -- # set +x
00:23:21.171 ************************************
00:23:21.171 START TEST nvme_rpc
00:23:21.171 ************************************
00:23:21.171 20:21:16 nvme_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh
00:23:21.171 * Looking for test storage...
00:23:21.171 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:23:21.171 20:21:16 nvme_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:23:21.171 20:21:16 nvme_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:23:21.171 20:21:16 nvme_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:23:21.171 20:21:16 nvme_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:23:21.171 20:21:16 nvme_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:21.171 20:21:16 nvme_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:21.171 20:21:16 nvme_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:21.171 20:21:16 nvme_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:23:21.171 20:21:16 nvme_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:23:21.171 20:21:16 nvme_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:23:21.171 20:21:16 nvme_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:23:21.171 20:21:16 nvme_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:23:21.171 20:21:16 nvme_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:23:21.171 20:21:16 nvme_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:23:21.171 20:21:16 nvme_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:21.171 20:21:16 nvme_rpc -- scripts/common.sh@344 -- # case "$op" in 00:23:21.171 20:21:16 nvme_rpc -- scripts/common.sh@345 -- # : 1 00:23:21.171 20:21:16 nvme_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:21.171 20:21:16 nvme_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:21.171 20:21:16 nvme_rpc -- scripts/common.sh@365 -- # decimal 1 00:23:21.171 20:21:16 nvme_rpc -- scripts/common.sh@353 -- # local d=1 00:23:21.171 20:21:16 nvme_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:21.171 20:21:16 nvme_rpc -- scripts/common.sh@355 -- # echo 1 00:23:21.431 20:21:16 nvme_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:23:21.431 20:21:16 nvme_rpc -- scripts/common.sh@366 -- # decimal 2 00:23:21.431 20:21:16 nvme_rpc -- scripts/common.sh@353 -- # local d=2 00:23:21.431 20:21:16 nvme_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:21.431 20:21:16 nvme_rpc -- scripts/common.sh@355 -- # echo 2 00:23:21.431 20:21:16 nvme_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:23:21.431 20:21:16 nvme_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:21.431 20:21:16 nvme_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:21.431 20:21:16 nvme_rpc -- scripts/common.sh@368 -- # return 0 00:23:21.431 20:21:16 nvme_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:21.431 20:21:16 nvme_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:23:21.431 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:21.431 --rc genhtml_branch_coverage=1 00:23:21.431 --rc genhtml_function_coverage=1 00:23:21.431 --rc genhtml_legend=1 00:23:21.431 --rc geninfo_all_blocks=1 00:23:21.431 --rc geninfo_unexecuted_blocks=1 00:23:21.431 00:23:21.431 ' 00:23:21.431 20:21:16 nvme_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:23:21.431 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:21.431 --rc genhtml_branch_coverage=1 00:23:21.431 --rc genhtml_function_coverage=1 00:23:21.431 --rc genhtml_legend=1 00:23:21.431 --rc geninfo_all_blocks=1 00:23:21.431 --rc geninfo_unexecuted_blocks=1 00:23:21.431 00:23:21.431 ' 00:23:21.431 20:21:16 nvme_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 
00:23:21.431 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:21.431 --rc genhtml_branch_coverage=1 00:23:21.431 --rc genhtml_function_coverage=1 00:23:21.431 --rc genhtml_legend=1 00:23:21.431 --rc geninfo_all_blocks=1 00:23:21.431 --rc geninfo_unexecuted_blocks=1 00:23:21.431 00:23:21.431 ' 00:23:21.431 20:21:16 nvme_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:23:21.431 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:21.431 --rc genhtml_branch_coverage=1 00:23:21.431 --rc genhtml_function_coverage=1 00:23:21.431 --rc genhtml_legend=1 00:23:21.431 --rc geninfo_all_blocks=1 00:23:21.431 --rc geninfo_unexecuted_blocks=1 00:23:21.431 00:23:21.431 ' 00:23:21.431 20:21:16 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:21.431 20:21:16 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:23:21.431 20:21:16 nvme_rpc -- common/autotest_common.sh@1507 -- # bdfs=() 00:23:21.431 20:21:16 nvme_rpc -- common/autotest_common.sh@1507 -- # local bdfs 00:23:21.431 20:21:16 nvme_rpc -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs)) 00:23:21.431 20:21:16 nvme_rpc -- common/autotest_common.sh@1508 -- # get_nvme_bdfs 00:23:21.431 20:21:16 nvme_rpc -- common/autotest_common.sh@1496 -- # bdfs=() 00:23:21.431 20:21:16 nvme_rpc -- common/autotest_common.sh@1496 -- # local bdfs 00:23:21.431 20:21:16 nvme_rpc -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:23:21.431 20:21:16 nvme_rpc -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:23:21.431 20:21:16 nvme_rpc -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:23:21.431 20:21:16 nvme_rpc -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:23:21.431 20:21:16 nvme_rpc -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:23:21.431 20:21:16 nvme_rpc -- common/autotest_common.sh@1510 -- # echo 0000:00:10.0 00:23:21.431 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:21.431 20:21:16 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:23:21.431 20:21:16 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=66415 00:23:21.431 20:21:16 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:23:21.431 20:21:16 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:23:21.431 20:21:16 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 66415 00:23:21.431 20:21:16 nvme_rpc -- common/autotest_common.sh@831 -- # '[' -z 66415 ']' 00:23:21.431 20:21:16 nvme_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:21.431 20:21:16 nvme_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:21.431 20:21:16 nvme_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:21.431 20:21:16 nvme_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:21.431 20:21:16 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:21.431 [2024-10-01 20:21:16.499010] Starting SPDK v25.01-pre git sha1 0c2005fb5 / DPDK 24.03.0 initialization... 
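
The get_first_nvme_bdf trace above picks the PCI address that the rest of nvme_rpc.sh targets: scripts/gen_nvme.sh emits a JSON bdev config covering every NVMe controller, jq extracts each traddr, and the first entry wins. A standalone sketch using the exact jq filter from this log (paths as laid out on this test VM):

    # Sketch of get_first_nvme_bdf from test/common/autotest_common.sh.
    rootdir=/home/vagrant/spdk_repo/spdk
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    (( ${#bdfs[@]} > 0 )) || { echo "no NVMe devices found" >&2; exit 1; }
    echo "${bdfs[0]}"    # here: 0000:00:10.0, first of the four controllers
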
00:23:21.431 [2024-10-01 20:21:16.499129] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66415 ] 00:23:21.692 [2024-10-01 20:21:16.644631] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:21.692 [2024-10-01 20:21:16.838094] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:23:21.692 [2024-10-01 20:21:16.838526] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:23:22.634 20:21:17 nvme_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:22.634 20:21:17 nvme_rpc -- common/autotest_common.sh@864 -- # return 0 00:23:22.634 20:21:17 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:23:22.893 Nvme0n1 00:23:22.893 20:21:17 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:23:22.893 20:21:17 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:23:22.893 request: 00:23:22.893 { 00:23:22.893 "bdev_name": "Nvme0n1", 00:23:22.893 "filename": "non_existing_file", 00:23:22.893 "method": "bdev_nvme_apply_firmware", 00:23:22.893 "req_id": 1 00:23:22.893 } 00:23:22.893 Got JSON-RPC error response 00:23:22.893 response: 00:23:22.893 { 00:23:22.893 "code": -32603, 00:23:22.893 "message": "open file failed." 00:23:22.893 } 00:23:22.893 20:21:18 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:23:22.893 20:21:18 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:23:22.893 20:21:18 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:23:23.152 20:21:18 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:23:23.152 20:21:18 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 66415 00:23:23.152 20:21:18 nvme_rpc -- common/autotest_common.sh@950 -- # '[' -z 66415 ']' 00:23:23.152 20:21:18 nvme_rpc -- common/autotest_common.sh@954 -- # kill -0 66415 00:23:23.152 20:21:18 nvme_rpc -- common/autotest_common.sh@955 -- # uname 00:23:23.152 20:21:18 nvme_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:23.152 20:21:18 nvme_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 66415 00:23:23.152 killing process with pid 66415 00:23:23.152 20:21:18 nvme_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:23.152 20:21:18 nvme_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:23.152 20:21:18 nvme_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 66415' 00:23:23.152 20:21:18 nvme_rpc -- common/autotest_common.sh@969 -- # kill 66415 00:23:23.152 20:21:18 nvme_rpc -- common/autotest_common.sh@974 -- # wait 66415 00:23:25.680 ************************************ 00:23:25.680 END TEST nvme_rpc 00:23:25.680 ************************************ 00:23:25.680 00:23:25.680 real 0m4.105s 00:23:25.680 user 0m7.570s 00:23:25.680 sys 0m0.577s 00:23:25.680 20:21:20 nvme_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:25.680 20:21:20 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:25.680 20:21:20 -- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:23:25.680 20:21:20 -- common/autotest_common.sh@1101 -- # '[' 2 -le 
1 ']' 00:23:25.680 20:21:20 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:25.680 20:21:20 -- common/autotest_common.sh@10 -- # set +x 00:23:25.680 ************************************ 00:23:25.680 START TEST nvme_rpc_timeouts 00:23:25.680 ************************************ 00:23:25.680 20:21:20 nvme_rpc_timeouts -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:23:25.680 * Looking for test storage... 00:23:25.680 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:23:25.680 20:21:20 nvme_rpc_timeouts -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:23:25.680 20:21:20 nvme_rpc_timeouts -- common/autotest_common.sh@1681 -- # lcov --version 00:23:25.680 20:21:20 nvme_rpc_timeouts -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:23:25.680 20:21:20 nvme_rpc_timeouts -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:23:25.680 20:21:20 nvme_rpc_timeouts -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:25.680 20:21:20 nvme_rpc_timeouts -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:25.680 20:21:20 nvme_rpc_timeouts -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:25.680 20:21:20 nvme_rpc_timeouts -- scripts/common.sh@336 -- # IFS=.-: 00:23:25.680 20:21:20 nvme_rpc_timeouts -- scripts/common.sh@336 -- # read -ra ver1 00:23:25.680 20:21:20 nvme_rpc_timeouts -- scripts/common.sh@337 -- # IFS=.-: 00:23:25.680 20:21:20 nvme_rpc_timeouts -- scripts/common.sh@337 -- # read -ra ver2 00:23:25.680 20:21:20 nvme_rpc_timeouts -- scripts/common.sh@338 -- # local 'op=<' 00:23:25.680 20:21:20 nvme_rpc_timeouts -- scripts/common.sh@340 -- # ver1_l=2 00:23:25.680 20:21:20 nvme_rpc_timeouts -- scripts/common.sh@341 -- # ver2_l=1 00:23:25.680 20:21:20 nvme_rpc_timeouts -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:25.680 20:21:20 nvme_rpc_timeouts -- scripts/common.sh@344 -- # case "$op" in 00:23:25.680 20:21:20 nvme_rpc_timeouts -- scripts/common.sh@345 -- # : 1 00:23:25.680 20:21:20 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:25.680 20:21:20 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:25.680 20:21:20 nvme_rpc_timeouts -- scripts/common.sh@365 -- # decimal 1 00:23:25.680 20:21:20 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=1 00:23:25.680 20:21:20 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:25.680 20:21:20 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 1 00:23:25.680 20:21:20 nvme_rpc_timeouts -- scripts/common.sh@365 -- # ver1[v]=1 00:23:25.680 20:21:20 nvme_rpc_timeouts -- scripts/common.sh@366 -- # decimal 2 00:23:25.680 20:21:20 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=2 00:23:25.681 20:21:20 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:25.681 20:21:20 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 2 00:23:25.681 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:23:25.681 20:21:20 nvme_rpc_timeouts -- scripts/common.sh@366 -- # ver2[v]=2 00:23:25.681 20:21:20 nvme_rpc_timeouts -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:25.681 20:21:20 nvme_rpc_timeouts -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:25.681 20:21:20 nvme_rpc_timeouts -- scripts/common.sh@368 -- # return 0 00:23:25.681 20:21:20 nvme_rpc_timeouts -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:25.681 20:21:20 nvme_rpc_timeouts -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:23:25.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:25.681 --rc genhtml_branch_coverage=1 00:23:25.681 --rc genhtml_function_coverage=1 00:23:25.681 --rc genhtml_legend=1 00:23:25.681 --rc geninfo_all_blocks=1 00:23:25.681 --rc geninfo_unexecuted_blocks=1 00:23:25.681 00:23:25.681 ' 00:23:25.681 20:21:20 nvme_rpc_timeouts -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:23:25.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:25.681 --rc genhtml_branch_coverage=1 00:23:25.681 --rc genhtml_function_coverage=1 00:23:25.681 --rc genhtml_legend=1 00:23:25.681 --rc geninfo_all_blocks=1 00:23:25.681 --rc geninfo_unexecuted_blocks=1 00:23:25.681 00:23:25.681 ' 00:23:25.681 20:21:20 nvme_rpc_timeouts -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:23:25.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:25.681 --rc genhtml_branch_coverage=1 00:23:25.681 --rc genhtml_function_coverage=1 00:23:25.681 --rc genhtml_legend=1 00:23:25.681 --rc geninfo_all_blocks=1 00:23:25.681 --rc geninfo_unexecuted_blocks=1 00:23:25.681 00:23:25.681 ' 00:23:25.681 20:21:20 nvme_rpc_timeouts -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:23:25.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:25.681 --rc genhtml_branch_coverage=1 00:23:25.681 --rc genhtml_function_coverage=1 00:23:25.681 --rc genhtml_legend=1 00:23:25.681 --rc geninfo_all_blocks=1 00:23:25.681 --rc geninfo_unexecuted_blocks=1 00:23:25.681 00:23:25.681 ' 00:23:25.681 20:21:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:25.681 20:21:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_66485 00:23:25.681 20:21:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_66485 00:23:25.681 20:21:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=66523 00:23:25.681 20:21:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:23:25.681 20:21:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:23:25.681 20:21:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 66523 00:23:25.681 20:21:20 nvme_rpc_timeouts -- common/autotest_common.sh@831 -- # '[' -z 66523 ']' 00:23:25.681 20:21:20 nvme_rpc_timeouts -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:25.681 20:21:20 nvme_rpc_timeouts -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:25.681 20:21:20 nvme_rpc_timeouts -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
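The xtrace above is the harness parked in waitforlisten: spdk_tgt (pid 66523) has been launched with -m 0x3 and the test blocks until the target answers JSON-RPC on /var/tmp/spdk.sock. A minimal readiness loop in the same spirit — assuming only that the target creates a UNIX socket at rpc_addr and that scripts/rpc.py's spdk_get_version method is reachable; this is an illustration, not the literal waitforlisten implementation — could look like:

    rpc_addr=/var/tmp/spdk.sock   # matches the local visible in the trace
    max_retries=100               # likewise
    for ((i = 0; i < max_retries; i++)); do
        # ready once the socket exists and a trivial RPC round-trips
        if [[ -S $rpc_addr ]] &&
           /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" spdk_get_version &> /dev/null; then
            break
        fi
        sleep 0.1
    done
    (( i < max_retries )) || { echo "spdk_tgt never came up" >&2; exit 1; }

A production version would also confirm between polls that the target pid is still alive, so a target that crashes during startup fails fast instead of burning all retries.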
00:23:25.681 20:21:20 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:25.681 20:21:20 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:23:25.681 [2024-10-01 20:21:20.600831] Starting SPDK v25.01-pre git sha1 0c2005fb5 / DPDK 24.03.0 initialization... 00:23:25.681 [2024-10-01 20:21:20.600960] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66523 ] 00:23:25.681 [2024-10-01 20:21:20.749928] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:25.939 [2024-10-01 20:21:20.924376] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:23:25.939 [2024-10-01 20:21:20.924386] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:23:26.504 Checking default timeout settings: 00:23:26.504 20:21:21 nvme_rpc_timeouts -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:26.504 20:21:21 nvme_rpc_timeouts -- common/autotest_common.sh@864 -- # return 0 00:23:26.504 20:21:21 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:23:26.504 20:21:21 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:23:26.763 Making settings changes with rpc: 00:23:26.763 20:21:21 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:23:26.763 20:21:21 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:23:27.022 Check default vs. modified settings: 00:23:27.022 20:21:22 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. modified settings: 00:23:27.022 20:21:22 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:23:27.281 20:21:22 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:23:27.281 20:21:22 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:23:27.281 20:21:22 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:23:27.281 20:21:22 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_66485 00:23:27.281 20:21:22 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:23:27.281 20:21:22 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:23:27.281 20:21:22 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:23:27.281 20:21:22 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_66485 00:23:27.281 20:21:22 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:23:27.281 Setting action_on_timeout is changed as expected. 00:23:27.281 20:21:22 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:23:27.281 20:21:22 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:23:27.281 20:21:22 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 
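Each of these checks recovers one field from the JSON that save_config dumped: grep isolates the line carrying the key, awk takes the value column, and sed strips everything non-alphanumeric, so a fragment such as "action_on_timeout": "none", compares cleanly as the bare token none. The same pipeline, distilled into a function — a sketch reusing this run's temp files, not a helper that exists in the tree:

    # Extract one bdev_nvme option value from a saved-config dump.
    get_setting() {
        local key=$1 file=$2
        grep "$key" "$file" | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g'
    }

    before=$(get_setting action_on_timeout /tmp/settings_default_66485)    # -> none
    after=$(get_setting action_on_timeout /tmp/settings_modified_66485)    # -> abort
    [[ $before == "$after" ]] && echo "action_on_timeout did not change" >&2

One limitation of this approach: a bare grep for timeout_us also matches timeout_admin_us, so the key names have to stay distinguishable; running jq over the saved JSON would be the stricter way to pull a single field.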
00:23:27.281 20:21:22 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:23:27.281 20:21:22 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_66485 00:23:27.281 20:21:22 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:23:27.281 20:21:22 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:23:27.281 20:21:22 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:23:27.281 20:21:22 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_66485 00:23:27.281 20:21:22 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:23:27.281 20:21:22 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:23:27.281 Setting timeout_us is changed as expected. 00:23:27.281 20:21:22 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:23:27.281 20:21:22 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:23:27.281 20:21:22 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 00:23:27.281 20:21:22 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:23:27.281 20:21:22 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_66485 00:23:27.281 20:21:22 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:23:27.281 20:21:22 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:23:27.281 20:21:22 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:23:27.281 20:21:22 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_66485 00:23:27.281 20:21:22 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:23:27.281 20:21:22 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:23:27.281 Setting timeout_admin_us is changed as expected. 00:23:27.281 20:21:22 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:23:27.281 20:21:22 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:23:27.281 20:21:22 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 
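All three fields moved from their defaults (none/0/0) to the requested policy (abort/12000000/24000000), which is the point of the test: prove that bdev_nvme_set_options round-trips into the saved configuration. The end-to-end sequence against a running spdk_tgt — a sketch of the flow above using exactly the RPCs visible in the trace, with a coarser diff-based comparison in place of the field-by-field checks — is just:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    "$rpc" save_config > /tmp/settings_default_66485

    # 12 s I/O timeout, 24 s admin timeout, abort the command on expiry
    "$rpc" bdev_nvme_set_options \
        --timeout-us=12000000 \
        --timeout-admin-us=24000000 \
        --action-on-timeout=abort

    "$rpc" save_config > /tmp/settings_modified_66485

    for setting in action_on_timeout timeout_us timeout_admin_us; do
        if diff <(grep "$setting" /tmp/settings_default_66485) \
                <(grep "$setting" /tmp/settings_modified_66485) > /dev/null; then
            echo "Setting $setting did not change" >&2
        fi
    done

Note that the test applies the options on a freshly started target with no controllers attached, which is the safe point to change global NVMe option state.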
00:23:27.281 20:21:22 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:23:27.281 20:21:22 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_66485 /tmp/settings_modified_66485 00:23:27.281 20:21:22 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 66523 00:23:27.281 20:21:22 nvme_rpc_timeouts -- common/autotest_common.sh@950 -- # '[' -z 66523 ']' 00:23:27.281 20:21:22 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # kill -0 66523 00:23:27.281 20:21:22 nvme_rpc_timeouts -- common/autotest_common.sh@955 -- # uname 00:23:27.281 20:21:22 nvme_rpc_timeouts -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:27.281 20:21:22 nvme_rpc_timeouts -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 66523 00:23:27.281 killing process with pid 66523 00:23:27.281 20:21:22 nvme_rpc_timeouts -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:27.281 20:21:22 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:27.281 20:21:22 nvme_rpc_timeouts -- common/autotest_common.sh@968 -- # echo 'killing process with pid 66523' 00:23:27.281 20:21:22 nvme_rpc_timeouts -- common/autotest_common.sh@969 -- # kill 66523 00:23:27.281 20:21:22 nvme_rpc_timeouts -- common/autotest_common.sh@974 -- # wait 66523 00:23:29.182 RPC TIMEOUT SETTING TEST PASSED. 00:23:29.182 20:21:24 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 00:23:29.182 ************************************ 00:23:29.182 END TEST nvme_rpc_timeouts 00:23:29.182 ************************************ 00:23:29.182 00:23:29.182 real 0m3.776s 00:23:29.182 user 0m7.004s 00:23:29.182 sys 0m0.566s 00:23:29.182 20:21:24 nvme_rpc_timeouts -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:29.182 20:21:24 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:23:29.182 20:21:24 -- spdk/autotest.sh@239 -- # uname -s 00:23:29.182 20:21:24 -- spdk/autotest.sh@239 -- # '[' Linux = Linux ']' 00:23:29.182 20:21:24 -- spdk/autotest.sh@240 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:23:29.182 20:21:24 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:23:29.182 20:21:24 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:29.182 20:21:24 -- common/autotest_common.sh@10 -- # set +x 00:23:29.182 ************************************ 00:23:29.182 START TEST sw_hotplug 00:23:29.182 ************************************ 00:23:29.182 20:21:24 sw_hotplug -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:23:29.182 * Looking for test storage... 
00:23:29.182 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:23:29.182 20:21:24 sw_hotplug -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:23:29.182 20:21:24 sw_hotplug -- common/autotest_common.sh@1681 -- # lcov --version 00:23:29.182 20:21:24 sw_hotplug -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:23:29.182 20:21:24 sw_hotplug -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:23:29.182 20:21:24 sw_hotplug -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:29.182 20:21:24 sw_hotplug -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:29.182 20:21:24 sw_hotplug -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:29.182 20:21:24 sw_hotplug -- scripts/common.sh@336 -- # IFS=.-: 00:23:29.182 20:21:24 sw_hotplug -- scripts/common.sh@336 -- # read -ra ver1 00:23:29.182 20:21:24 sw_hotplug -- scripts/common.sh@337 -- # IFS=.-: 00:23:29.182 20:21:24 sw_hotplug -- scripts/common.sh@337 -- # read -ra ver2 00:23:29.182 20:21:24 sw_hotplug -- scripts/common.sh@338 -- # local 'op=<' 00:23:29.182 20:21:24 sw_hotplug -- scripts/common.sh@340 -- # ver1_l=2 00:23:29.182 20:21:24 sw_hotplug -- scripts/common.sh@341 -- # ver2_l=1 00:23:29.182 20:21:24 sw_hotplug -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:29.182 20:21:24 sw_hotplug -- scripts/common.sh@344 -- # case "$op" in 00:23:29.182 20:21:24 sw_hotplug -- scripts/common.sh@345 -- # : 1 00:23:29.182 20:21:24 sw_hotplug -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:29.182 20:21:24 sw_hotplug -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:29.182 20:21:24 sw_hotplug -- scripts/common.sh@365 -- # decimal 1 00:23:29.182 20:21:24 sw_hotplug -- scripts/common.sh@353 -- # local d=1 00:23:29.182 20:21:24 sw_hotplug -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:29.182 20:21:24 sw_hotplug -- scripts/common.sh@355 -- # echo 1 00:23:29.182 20:21:24 sw_hotplug -- scripts/common.sh@365 -- # ver1[v]=1 00:23:29.182 20:21:24 sw_hotplug -- scripts/common.sh@366 -- # decimal 2 00:23:29.182 20:21:24 sw_hotplug -- scripts/common.sh@353 -- # local d=2 00:23:29.182 20:21:24 sw_hotplug -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:29.182 20:21:24 sw_hotplug -- scripts/common.sh@355 -- # echo 2 00:23:29.182 20:21:24 sw_hotplug -- scripts/common.sh@366 -- # ver2[v]=2 00:23:29.182 20:21:24 sw_hotplug -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:29.182 20:21:24 sw_hotplug -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:29.182 20:21:24 sw_hotplug -- scripts/common.sh@368 -- # return 0 00:23:29.182 20:21:24 sw_hotplug -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:29.182 20:21:24 sw_hotplug -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:23:29.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:29.182 --rc genhtml_branch_coverage=1 00:23:29.182 --rc genhtml_function_coverage=1 00:23:29.182 --rc genhtml_legend=1 00:23:29.182 --rc geninfo_all_blocks=1 00:23:29.182 --rc geninfo_unexecuted_blocks=1 00:23:29.182 00:23:29.182 ' 00:23:29.182 20:21:24 sw_hotplug -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:23:29.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:29.182 --rc genhtml_branch_coverage=1 00:23:29.182 --rc genhtml_function_coverage=1 00:23:29.182 --rc genhtml_legend=1 00:23:29.182 --rc geninfo_all_blocks=1 00:23:29.182 --rc geninfo_unexecuted_blocks=1 00:23:29.182 00:23:29.182 ' 00:23:29.182 20:21:24 
sw_hotplug -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:23:29.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:29.182 --rc genhtml_branch_coverage=1 00:23:29.182 --rc genhtml_function_coverage=1 00:23:29.182 --rc genhtml_legend=1 00:23:29.182 --rc geninfo_all_blocks=1 00:23:29.182 --rc geninfo_unexecuted_blocks=1 00:23:29.182 00:23:29.182 ' 00:23:29.182 20:21:24 sw_hotplug -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:23:29.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:29.182 --rc genhtml_branch_coverage=1 00:23:29.182 --rc genhtml_function_coverage=1 00:23:29.182 --rc genhtml_legend=1 00:23:29.182 --rc geninfo_all_blocks=1 00:23:29.182 --rc geninfo_unexecuted_blocks=1 00:23:29.182 00:23:29.182 ' 00:23:29.182 20:21:24 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:23:29.441 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:29.698 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:23:29.698 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:23:29.698 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:23:29.698 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:23:29.698 20:21:24 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:23:29.698 20:21:24 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:23:29.698 20:21:24 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 00:23:29.698 20:21:24 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:23:29.698 20:21:24 sw_hotplug -- scripts/common.sh@312 -- # local bdf bdfs 00:23:29.698 20:21:24 sw_hotplug -- scripts/common.sh@313 -- # local nvmes 00:23:29.698 20:21:24 sw_hotplug -- scripts/common.sh@315 -- # [[ -n '' ]] 00:23:29.698 20:21:24 sw_hotplug -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:23:29.699 20:21:24 sw_hotplug -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:23:29.699 20:21:24 sw_hotplug -- scripts/common.sh@298 -- # local bdf= 00:23:29.699 20:21:24 sw_hotplug -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:23:29.699 20:21:24 sw_hotplug -- scripts/common.sh@233 -- # local class 00:23:29.699 20:21:24 sw_hotplug -- scripts/common.sh@234 -- # local subclass 00:23:29.699 20:21:24 sw_hotplug -- scripts/common.sh@235 -- # local progif 00:23:29.699 20:21:24 sw_hotplug -- scripts/common.sh@236 -- # printf %02x 1 00:23:29.699 20:21:24 sw_hotplug -- scripts/common.sh@236 -- # class=01 00:23:29.699 20:21:24 sw_hotplug -- scripts/common.sh@237 -- # printf %02x 8 00:23:29.699 20:21:24 sw_hotplug -- scripts/common.sh@237 -- # subclass=08 00:23:29.699 20:21:24 sw_hotplug -- scripts/common.sh@238 -- # printf %02x 2 00:23:29.699 20:21:24 sw_hotplug -- scripts/common.sh@238 -- # progif=02 00:23:29.699 20:21:24 sw_hotplug -- scripts/common.sh@240 -- # hash lspci 00:23:29.699 20:21:24 sw_hotplug -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:23:29.699 20:21:24 sw_hotplug -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:23:29.699 20:21:24 sw_hotplug -- scripts/common.sh@242 -- # lspci -mm -n -D 00:23:29.699 20:21:24 sw_hotplug -- scripts/common.sh@243 -- # grep -i -- -p02 00:23:29.699 20:21:24 sw_hotplug -- scripts/common.sh@245 -- # tr -d '"' 00:23:29.699 20:21:24 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:23:29.699 
20:21:24 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:23:29.699 20:21:24 sw_hotplug -- scripts/common.sh@18 -- # local i 00:23:29.699 20:21:24 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:23:29.699 20:21:24 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:23:29.699 20:21:24 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:23:29.699 20:21:24 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:23:29.699 20:21:24 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:23:29.699 20:21:24 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:23:29.699 20:21:24 sw_hotplug -- scripts/common.sh@18 -- # local i 00:23:29.699 20:21:24 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:23:29.699 20:21:24 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:23:29.699 20:21:24 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:23:29.699 20:21:24 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:23:29.699 20:21:24 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:23:29.699 20:21:24 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:12.0 00:23:29.699 20:21:24 sw_hotplug -- scripts/common.sh@18 -- # local i 00:23:29.699 20:21:24 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:23:29.699 20:21:24 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:23:29.699 20:21:24 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:23:29.699 20:21:24 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:12.0 00:23:29.699 20:21:24 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:23:29.699 20:21:24 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:13.0 00:23:29.699 20:21:24 sw_hotplug -- scripts/common.sh@18 -- # local i 00:23:29.699 20:21:24 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:23:29.699 20:21:24 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:23:29.699 20:21:24 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:23:29.699 20:21:24 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:13.0 00:23:29.699 20:21:24 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:23:29.699 20:21:24 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:23:29.699 20:21:24 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:23:29.699 20:21:24 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:23:29.699 20:21:24 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:23:29.699 20:21:24 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:23:29.699 20:21:24 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:23:29.699 20:21:24 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:23:29.699 20:21:24 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:23:29.699 20:21:24 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:23:29.699 20:21:24 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:23:29.699 20:21:24 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:23:29.699 20:21:24 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:23:29.699 20:21:24 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:23:29.699 20:21:24 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:23:29.699 20:21:24 sw_hotplug -- scripts/common.sh@321 -- # for bdf 
in "${nvmes[@]}" 00:23:29.699 20:21:24 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:23:29.699 20:21:24 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:23:29.699 20:21:24 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:23:29.699 20:21:24 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:23:29.699 20:21:24 sw_hotplug -- scripts/common.sh@328 -- # (( 4 )) 00:23:29.699 20:21:24 sw_hotplug -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:23:29.699 20:21:24 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:23:29.699 20:21:24 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:23:29.699 20:21:24 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:23:29.959 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:30.218 Waiting for block devices as requested 00:23:30.218 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:23:30.218 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:23:30.218 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:23:30.218 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:23:35.517 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:23:35.517 20:21:30 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:23:35.517 20:21:30 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:23:35.828 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:23:35.828 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:35.828 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:23:36.092 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:23:36.092 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:23:36.092 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:23:36.351 20:21:31 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:23:36.351 20:21:31 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:23:36.351 20:21:31 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:23:36.351 20:21:31 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:23:36.351 20:21:31 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=67373 00:23:36.351 20:21:31 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:23:36.351 20:21:31 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:23:36.351 20:21:31 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:23:36.351 20:21:31 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:23:36.351 20:21:31 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:23:36.351 20:21:31 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:23:36.351 20:21:31 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:23:36.351 20:21:31 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:23:36.351 20:21:31 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 false 00:23:36.351 20:21:31 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:23:36.351 20:21:31 sw_hotplug -- nvme/sw_hotplug.sh@28 
-- # local hotplug_wait=6 00:23:36.351 20:21:31 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:23:36.351 20:21:31 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:23:36.351 20:21:31 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:23:36.609 Initializing NVMe Controllers 00:23:36.609 Attaching to 0000:00:10.0 00:23:36.609 Attaching to 0000:00:11.0 00:23:36.609 Attached to 0000:00:11.0 00:23:36.609 Attached to 0000:00:10.0 00:23:36.609 Initialization complete. Starting I/O... 00:23:36.609 QEMU NVMe Ctrl (12341 ): 0 I/Os completed (+0) 00:23:36.609 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:23:36.609 00:23:37.543 QEMU NVMe Ctrl (12341 ): 2989 I/Os completed (+2989) 00:23:37.543 QEMU NVMe Ctrl (12340 ): 2974 I/Os completed (+2974) 00:23:37.543 00:23:38.475 QEMU NVMe Ctrl (12341 ): 6620 I/Os completed (+3631) 00:23:38.475 QEMU NVMe Ctrl (12340 ): 6653 I/Os completed (+3679) 00:23:38.475 00:23:39.851 QEMU NVMe Ctrl (12341 ): 9899 I/Os completed (+3279) 00:23:39.851 QEMU NVMe Ctrl (12340 ): 10105 I/Os completed (+3452) 00:23:39.851 00:23:40.812 QEMU NVMe Ctrl (12341 ): 13288 I/Os completed (+3389) 00:23:40.812 QEMU NVMe Ctrl (12340 ): 13550 I/Os completed (+3445) 00:23:40.812 00:23:41.744 QEMU NVMe Ctrl (12341 ): 16715 I/Os completed (+3427) 00:23:41.744 QEMU NVMe Ctrl (12340 ): 17355 I/Os completed (+3805) 00:23:41.744 00:23:42.310 20:21:37 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:23:42.310 20:21:37 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:23:42.310 20:21:37 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:23:42.310 [2024-10-01 20:21:37.450457] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:23:42.310 Controller removed: QEMU NVMe Ctrl (12340 ) 00:23:42.310 [2024-10-01 20:21:37.451598] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:23:42.310 [2024-10-01 20:21:37.451647] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:23:42.310 [2024-10-01 20:21:37.451662] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:23:42.310 [2024-10-01 20:21:37.451676] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:23:42.310 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:23:42.310 [2024-10-01 20:21:37.453186] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:23:42.310 [2024-10-01 20:21:37.453226] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:23:42.310 [2024-10-01 20:21:37.453238] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:23:42.310 [2024-10-01 20:21:37.453252] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:23:42.310 20:21:37 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:23:42.310 20:21:37 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:23:42.310 [2024-10-01 20:21:37.472486] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:23:42.310 Controller removed: QEMU NVMe Ctrl (12341 ) 00:23:42.310 [2024-10-01 20:21:37.473386] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:23:42.310 [2024-10-01 20:21:37.473424] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:23:42.310 [2024-10-01 20:21:37.473441] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:23:42.310 [2024-10-01 20:21:37.473456] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:23:42.310 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:23:42.310 [2024-10-01 20:21:37.474921] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:23:42.310 [2024-10-01 20:21:37.474956] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:23:42.310 [2024-10-01 20:21:37.474968] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:23:42.310 [2024-10-01 20:21:37.474979] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:23:42.310 20:21:37 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:23:42.310 20:21:37 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:23:42.567 20:21:37 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:23:42.567 20:21:37 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:23:42.567 20:21:37 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:23:42.567 20:21:37 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:23:42.567 20:21:37 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:23:42.567 20:21:37 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:23:42.567 20:21:37 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:23:42.567 20:21:37 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:23:42.567 Attaching to 0000:00:10.0 00:23:42.567 Attached to 0000:00:10.0 00:23:42.567 QEMU NVMe Ctrl (12340 ): 48 I/Os completed (+48) 00:23:42.567 00:23:42.567 20:21:37 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:23:42.567 20:21:37 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:23:42.567 20:21:37 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:23:42.567 Attaching to 0000:00:11.0 00:23:42.567 Attached to 0000:00:11.0 00:23:43.499 QEMU NVMe Ctrl (12340 ): 3441 I/Os completed (+3393) 00:23:43.499 QEMU NVMe Ctrl (12341 ): 3360 I/Os completed (+3360) 00:23:43.499 00:23:44.432 QEMU NVMe Ctrl (12340 ): 6668 I/Os completed (+3227) 00:23:44.432 QEMU NVMe Ctrl (12341 ): 6759 I/Os completed (+3399) 00:23:44.432 00:23:45.862 QEMU NVMe Ctrl (12340 ): 10127 I/Os completed (+3459) 00:23:45.862 QEMU NVMe Ctrl (12341 ): 10506 I/Os completed (+3747) 00:23:45.862 00:23:46.428 QEMU NVMe Ctrl (12340 ): 13606 I/Os completed (+3479) 00:23:46.428 QEMU NVMe Ctrl (12341 ): 14044 I/Os completed (+3538) 00:23:46.428 00:23:47.800 QEMU NVMe Ctrl (12340 ): 17354 I/Os completed (+3748) 00:23:47.800 QEMU NVMe Ctrl (12341 ): 17804 I/Os completed (+3760) 00:23:47.800 00:23:48.735 QEMU NVMe Ctrl (12340 ): 20850 I/Os completed (+3496) 00:23:48.735 QEMU NVMe Ctrl (12341 ): 21519 I/Os completed (+3715) 00:23:48.735 00:23:49.671 QEMU NVMe Ctrl (12340 ): 24118 I/Os completed (+3268) 00:23:49.671 QEMU NVMe Ctrl (12341 ): 24997 I/Os completed (+3478) 00:23:49.671 00:23:50.605 QEMU NVMe Ctrl (12340 ): 27243 I/Os completed (+3125) 00:23:50.605 
QEMU NVMe Ctrl (12341 ): 28208 I/Os completed (+3211) 00:23:50.605 00:23:51.558 QEMU NVMe Ctrl (12340 ): 30597 I/Os completed (+3354) 00:23:51.558 QEMU NVMe Ctrl (12341 ): 31522 I/Os completed (+3314) 00:23:51.558 00:23:52.490 QEMU NVMe Ctrl (12340 ): 33778 I/Os completed (+3181) 00:23:52.490 QEMU NVMe Ctrl (12341 ): 34958 I/Os completed (+3436) 00:23:52.490 00:23:53.422 QEMU NVMe Ctrl (12340 ): 37115 I/Os completed (+3337) 00:23:53.423 QEMU NVMe Ctrl (12341 ): 38585 I/Os completed (+3627) 00:23:53.423 00:23:54.796 QEMU NVMe Ctrl (12340 ): 40532 I/Os completed (+3417) 00:23:54.796 QEMU NVMe Ctrl (12341 ): 42119 I/Os completed (+3534) 00:23:54.796 00:23:54.796 20:21:49 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:23:54.796 20:21:49 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:23:54.796 20:21:49 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:23:54.796 20:21:49 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:23:54.796 [2024-10-01 20:21:49.703379] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:23:54.796 Controller removed: QEMU NVMe Ctrl (12340 ) 00:23:54.796 [2024-10-01 20:21:49.704555] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:23:54.796 [2024-10-01 20:21:49.704608] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:23:54.796 [2024-10-01 20:21:49.704625] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:23:54.796 [2024-10-01 20:21:49.704642] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:23:54.796 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:23:54.796 [2024-10-01 20:21:49.706662] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:23:54.796 [2024-10-01 20:21:49.706724] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:23:54.796 [2024-10-01 20:21:49.706738] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:23:54.796 [2024-10-01 20:21:49.706753] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:23:54.796 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:10.0/vendor 00:23:54.796 EAL: Scan for (pci) bus failed. 00:23:54.796 20:21:49 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:23:54.796 20:21:49 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:23:54.796 [2024-10-01 20:21:49.722773] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:23:54.796 Controller removed: QEMU NVMe Ctrl (12341 ) 00:23:54.796 [2024-10-01 20:21:49.723874] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:23:54.796 [2024-10-01 20:21:49.723911] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:23:54.796 [2024-10-01 20:21:49.723932] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:23:54.796 [2024-10-01 20:21:49.723948] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:23:54.796 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:23:54.796 [2024-10-01 20:21:49.725627] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:23:54.796 [2024-10-01 20:21:49.725664] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:23:54.796 [2024-10-01 20:21:49.725680] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:23:54.796 [2024-10-01 20:21:49.725706] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:23:54.796 20:21:49 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:23:54.796 20:21:49 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:23:54.796 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:23:54.796 EAL: Scan for (pci) bus failed. 00:23:54.796 20:21:49 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:23:54.796 20:21:49 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:23:54.796 20:21:49 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:23:54.796 20:21:49 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:23:54.796 20:21:49 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:23:54.796 20:21:49 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:23:54.796 20:21:49 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:23:54.796 20:21:49 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:23:54.796 Attaching to 0000:00:10.0 00:23:54.796 Attached to 0000:00:10.0 00:23:54.796 20:21:49 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:23:54.796 20:21:49 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:23:54.796 20:21:49 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:23:54.796 Attaching to 0000:00:11.0 00:23:54.796 Attached to 0000:00:11.0 00:23:55.726 QEMU NVMe Ctrl (12340 ): 2684 I/Os completed (+2684) 00:23:55.726 QEMU NVMe Ctrl (12341 ): 2619 I/Os completed (+2619) 00:23:55.726 00:23:56.659 QEMU NVMe Ctrl (12340 ): 6078 I/Os completed (+3394) 00:23:56.659 QEMU NVMe Ctrl (12341 ): 6125 I/Os completed (+3506) 00:23:56.659 00:23:57.593 QEMU NVMe Ctrl (12340 ): 9490 I/Os completed (+3412) 00:23:57.593 QEMU NVMe Ctrl (12341 ): 9674 I/Os completed (+3549) 00:23:57.593 00:23:58.526 QEMU NVMe Ctrl (12340 ): 12503 I/Os completed (+3013) 00:23:58.526 QEMU NVMe Ctrl (12341 ): 12997 I/Os completed (+3323) 00:23:58.526 00:23:59.539 QEMU NVMe Ctrl (12340 ): 15485 I/Os completed (+2982) 00:23:59.539 QEMU NVMe Ctrl (12341 ): 16109 I/Os completed (+3112) 00:23:59.539 00:24:00.475 QEMU NVMe Ctrl (12340 ): 18551 I/Os completed (+3066) 00:24:00.475 QEMU NVMe Ctrl (12341 ): 19219 I/Os completed (+3110) 00:24:00.475 00:24:01.847 QEMU NVMe Ctrl (12340 ): 21967 I/Os completed (+3416) 00:24:01.847 QEMU NVMe Ctrl (12341 ): 22808 I/Os completed (+3589) 00:24:01.847 
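That completes the second surprise-removal cycle: for each allowed controller the test writes 1 into a sysfs attribute to yank the function, the driver fails the controller and aborts everything in flight, the hotplug app unregisters it, and the device is then reattached while I/O resumes. A skeleton of that loop, reconstructed from the xtrace — the redirection targets are hidden by the trace, so the sysfs paths below are the standard Linux PCI knobs, assumed rather than read from the script (only /sys/bus/pci/rescan appears verbatim, in the error trap further down):

    hotplug_events=3              # both locals visible in the trace
    hotplug_wait=6
    nvmes=(0000:00:10.0 0000:00:11.0)

    while (( hotplug_events-- )); do
        for dev in "${nvmes[@]}"; do
            # surprise-remove the function (assumed target of the 'echo 1'
            # seen at sw_hotplug.sh:40)
            echo 1 > "/sys/bus/pci/devices/$dev/remove"
        done
        sleep "$hotplug_wait"
        echo 1 > /sys/bus/pci/rescan      # bring the functions back
        for dev in "${nvmes[@]}"; do
            # one workable rebinding sequence, not necessarily the script's:
            # steer the rediscovered function to uio_pci_generic, then probe
            echo uio_pci_generic > "/sys/bus/pci/devices/$dev/driver_override"
            echo "$dev" > /sys/bus/pci/drivers_probe
        done
        sleep "$hotplug_wait"
    done

The per-second "N I/Os completed (+delta)" samples around each cycle are the evidence that I/O keeps flowing once the controllers come back.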
00:24:02.778 QEMU NVMe Ctrl (12340 ): 25447 I/Os completed (+3480) 00:24:02.778 QEMU NVMe Ctrl (12341 ): 26289 I/Os completed (+3481) 00:24:02.778 00:24:03.710 QEMU NVMe Ctrl (12340 ): 28513 I/Os completed (+3066) 00:24:03.710 QEMU NVMe Ctrl (12341 ): 29698 I/Os completed (+3409) 00:24:03.710 00:24:04.641 QEMU NVMe Ctrl (12340 ): 31767 I/Os completed (+3254) 00:24:04.641 QEMU NVMe Ctrl (12341 ): 33351 I/Os completed (+3653) 00:24:04.641 00:24:05.573 QEMU NVMe Ctrl (12340 ): 35816 I/Os completed (+4049) 00:24:05.573 QEMU NVMe Ctrl (12341 ): 37532 I/Os completed (+4181) 00:24:05.573 00:24:06.506 QEMU NVMe Ctrl (12340 ): 39153 I/Os completed (+3337) 00:24:06.506 QEMU NVMe Ctrl (12341 ): 40988 I/Os completed (+3456) 00:24:06.506 00:24:06.763 20:22:01 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:24:06.763 20:22:01 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:24:06.763 20:22:01 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:24:06.763 20:22:01 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:24:06.763 [2024-10-01 20:22:01.970063] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:24:06.763 Controller removed: QEMU NVMe Ctrl (12340 ) 00:24:06.763 [2024-10-01 20:22:01.971625] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:24:06.763 [2024-10-01 20:22:01.971681] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:24:06.763 [2024-10-01 20:22:01.971718] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:24:06.763 [2024-10-01 20:22:01.971737] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:24:06.763 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:24:07.021 [2024-10-01 20:22:01.973761] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:24:07.021 [2024-10-01 20:22:01.973816] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:24:07.021 [2024-10-01 20:22:01.973836] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:24:07.021 [2024-10-01 20:22:01.973856] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:24:07.021 20:22:01 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:24:07.021 20:22:01 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:24:07.021 [2024-10-01 20:22:01.992129] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:24:07.021 Controller removed: QEMU NVMe Ctrl (12341 ) 00:24:07.021 [2024-10-01 20:22:01.993217] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:24:07.021 [2024-10-01 20:22:01.993263] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:24:07.021 [2024-10-01 20:22:01.993281] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:24:07.021 [2024-10-01 20:22:01.993296] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:24:07.021 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:24:07.021 [2024-10-01 20:22:01.995093] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:24:07.021 [2024-10-01 20:22:01.995133] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:24:07.021 [2024-10-01 20:22:01.995150] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:24:07.021 [2024-10-01 20:22:01.995162] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:24:07.021 20:22:02 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:24:07.021 20:22:02 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:24:07.021 20:22:02 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:24:07.021 20:22:02 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:24:07.021 20:22:02 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:24:07.021 20:22:02 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:24:07.021 20:22:02 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:24:07.021 20:22:02 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:24:07.021 20:22:02 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:24:07.021 20:22:02 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:24:07.021 Attaching to 0000:00:10.0 00:24:07.021 Attached to 0000:00:10.0 00:24:07.284 20:22:02 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:24:07.284 20:22:02 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:24:07.284 20:22:02 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:24:07.284 Attaching to 0000:00:11.0 00:24:07.284 Attached to 0000:00:11.0 00:24:07.284 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:24:07.284 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:24:07.284 [2024-10-01 20:22:02.299164] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:24:19.489 20:22:14 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:24:19.489 20:22:14 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:24:19.489 20:22:14 sw_hotplug -- common/autotest_common.sh@717 -- # time=42.85 00:24:19.489 20:22:14 sw_hotplug -- common/autotest_common.sh@718 -- # echo 42.85 00:24:19.489 20:22:14 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:24:19.489 20:22:14 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=42.85 00:24:19.489 20:22:14 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 42.85 2 00:24:19.489 remove_attach_helper took 42.85s to complete (handling 2 nvme drive(s)) 20:22:14 sw_hotplug -- nvme/sw_hotplug.sh@91 -- # sleep 6 00:24:26.042 20:22:20 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 67373 00:24:26.042 
/home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (67373) - No such process 00:24:26.042 20:22:20 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 67373 00:24:26.042 20:22:20 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:24:26.042 20:22:20 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:24:26.042 20:22:20 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:24:26.042 20:22:20 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=67925 00:24:26.042 20:22:20 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:24:26.042 20:22:20 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 67925 00:24:26.042 20:22:20 sw_hotplug -- common/autotest_common.sh@831 -- # '[' -z 67925 ']' 00:24:26.042 20:22:20 sw_hotplug -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:26.042 20:22:20 sw_hotplug -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:26.042 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:26.042 20:22:20 sw_hotplug -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:26.042 20:22:20 sw_hotplug -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:26.042 20:22:20 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:24:26.042 20:22:20 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:26.042 [2024-10-01 20:22:20.381359] Starting SPDK v25.01-pre git sha1 0c2005fb5 / DPDK 24.03.0 initialization... 00:24:26.042 [2024-10-01 20:22:20.381512] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67925 ] 00:24:26.042 [2024-10-01 20:22:20.532226] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:26.042 [2024-10-01 20:22:20.706517] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:24:26.300 20:22:21 sw_hotplug -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:26.300 20:22:21 sw_hotplug -- common/autotest_common.sh@864 -- # return 0 00:24:26.300 20:22:21 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:24:26.300 20:22:21 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:26.300 20:22:21 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:24:26.300 20:22:21 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:26.300 20:22:21 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:24:26.300 20:22:21 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:24:26.300 20:22:21 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:24:26.300 20:22:21 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:24:26.300 20:22:21 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:24:26.300 20:22:21 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:24:26.300 20:22:21 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:24:26.300 20:22:21 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 true 00:24:26.300 20:22:21 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:24:26.300 20:22:21 
sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:24:26.300 20:22:21 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:24:26.300 20:22:21 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:24:26.300 20:22:21 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:24:32.857 20:22:27 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:24:32.857 20:22:27 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:24:32.857 20:22:27 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:24:32.857 20:22:27 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:24:32.857 20:22:27 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:24:32.857 20:22:27 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:24:32.857 20:22:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:24:32.857 20:22:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:24:32.857 20:22:27 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:24:32.858 20:22:27 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:24:32.858 20:22:27 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:24:32.858 20:22:27 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:32.858 20:22:27 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:24:32.858 20:22:27 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:32.858 20:22:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:24:32.858 20:22:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:24:32.858 [2024-10-01 20:22:27.591785] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:24:32.858 [2024-10-01 20:22:27.593328] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:24:32.858 [2024-10-01 20:22:27.593374] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:24:32.858 [2024-10-01 20:22:27.593387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.858 [2024-10-01 20:22:27.593407] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:24:32.858 [2024-10-01 20:22:27.593415] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:24:32.858 [2024-10-01 20:22:27.593425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.858 [2024-10-01 20:22:27.593432] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:24:32.858 [2024-10-01 20:22:27.593441] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:24:32.858 [2024-10-01 20:22:27.593448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.858 [2024-10-01 20:22:27.593460] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:24:32.858 [2024-10-01 20:22:27.593467] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:24:32.858 [2024-10-01 20:22:27.593475] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.116 20:22:28 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:24:33.116 20:22:28 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:24:33.116 20:22:28 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:24:33.116 20:22:28 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:24:33.116 20:22:28 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:24:33.116 20:22:28 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:24:33.116 20:22:28 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:33.116 20:22:28 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:24:33.116 [2024-10-01 20:22:28.091779] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 00:24:33.116 [2024-10-01 20:22:28.093197] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:24:33.116 [2024-10-01 20:22:28.093237] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:24:33.116 [2024-10-01 20:22:28.093249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.116 [2024-10-01 20:22:28.093267] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:24:33.116 [2024-10-01 20:22:28.093277] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:24:33.116 [2024-10-01 20:22:28.093284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.116 [2024-10-01 20:22:28.093344] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:24:33.116 [2024-10-01 20:22:28.093352] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:24:33.116 [2024-10-01 20:22:28.093360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.116 [2024-10-01 20:22:28.093368] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:24:33.116 [2024-10-01 20:22:28.093378] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:24:33.116 [2024-10-01 20:22:28.093384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.116 20:22:28 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:33.116 20:22:28 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:24:33.116 20:22:28 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:24:33.699 20:22:28 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:24:33.699 20:22:28 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:24:33.699 20:22:28 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:24:33.699 20:22:28 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:24:33.699 20:22:28 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r 
'.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:24:33.699 20:22:28 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:24:33.699 20:22:28 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:33.699 20:22:28 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:24:33.699 20:22:28 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:33.699 20:22:28 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:24:33.699 20:22:28 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:24:33.699 20:22:28 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:24:33.699 20:22:28 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:24:33.699 20:22:28 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:24:33.699 20:22:28 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:24:33.699 20:22:28 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:24:33.699 20:22:28 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:24:33.699 20:22:28 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:24:33.699 20:22:28 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:24:33.699 20:22:28 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:24:33.699 20:22:28 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:24:33.699 20:22:28 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:24:45.914 20:22:40 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:24:45.914 20:22:40 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:24:45.914 20:22:40 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:24:45.914 20:22:40 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:24:45.914 20:22:40 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:24:45.914 20:22:40 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:45.914 20:22:40 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:24:45.914 20:22:40 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:24:45.914 20:22:40 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:45.914 20:22:40 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:24:45.914 20:22:40 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:24:45.914 20:22:40 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:24:45.914 20:22:40 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:24:45.914 20:22:40 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:24:45.915 20:22:40 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:24:45.915 20:22:40 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:24:45.915 20:22:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:24:45.915 20:22:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:24:45.915 20:22:40 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:24:45.915 20:22:40 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:24:45.915 20:22:40 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:24:45.915 20:22:40 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:45.915 20:22:40 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:24:45.915 20:22:40 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:45.915 20:22:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:24:45.915 
20:22:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:24:45.915 [2024-10-01 20:22:40.992025] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:24:45.915 [2024-10-01 20:22:40.993413] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:24:45.915 [2024-10-01 20:22:40.993449] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:24:45.915 [2024-10-01 20:22:40.993460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.915 [2024-10-01 20:22:40.993480] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:24:45.915 [2024-10-01 20:22:40.993488] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:24:45.915 [2024-10-01 20:22:40.993497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.915 [2024-10-01 20:22:40.993505] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:24:45.915 [2024-10-01 20:22:40.993513] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:24:45.915 [2024-10-01 20:22:40.993520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.915 [2024-10-01 20:22:40.993528] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:24:45.915 [2024-10-01 20:22:40.993535] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:24:45.915 [2024-10-01 20:22:40.993543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.481 [2024-10-01 20:22:41.392034] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
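The bdfs=($(bdev_bdfs)) / jq / sort -u pattern repeated throughout this trace comes from a small helper in test/nvme/sw_hotplug.sh. Reconstructed from the xtrace at sw_hotplug.sh@12-13 (the process substitution is what shows up as /dev/fd/63 in the log), it is roughly:

    # Reconstructed sketch: list the unique PCI addresses (BDFs) backing
    # the NVMe bdevs currently registered with the SPDK target.
    bdev_bdfs() {
        jq -r '.[].driver_specific.nvme[].pci_address' \
            <(rpc_cmd bdev_get_bdevs) | sort -u
    }

An empty result means every detached controller's bdevs are gone, which is exactly what the surrounding loop is waiting for.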
00:24:46.481 [2024-10-01 20:22:41.393421] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:24:46.481 [2024-10-01 20:22:41.393456] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:24:46.481 [2024-10-01 20:22:41.393470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.481 [2024-10-01 20:22:41.393488] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:24:46.481 [2024-10-01 20:22:41.393497] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:24:46.481 [2024-10-01 20:22:41.393504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.481 [2024-10-01 20:22:41.393513] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:24:46.481 [2024-10-01 20:22:41.393520] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:24:46.481 [2024-10-01 20:22:41.393528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.481 [2024-10-01 20:22:41.393535] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:24:46.481 [2024-10-01 20:22:41.393543] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:24:46.481 [2024-10-01 20:22:41.393549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.481 20:22:41 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:24:46.481 20:22:41 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:24:46.481 20:22:41 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:24:46.481 20:22:41 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:24:46.481 20:22:41 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:24:46.481 20:22:41 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:46.481 20:22:41 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:24:46.481 20:22:41 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:24:46.481 20:22:41 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:46.481 20:22:41 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:24:46.481 20:22:41 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:24:46.481 20:22:41 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:24:46.481 20:22:41 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:24:46.481 20:22:41 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:24:46.481 20:22:41 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:24:46.481 20:22:41 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:24:46.481 20:22:41 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:24:46.481 20:22:41 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:24:46.481 20:22:41 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:24:46.739 20:22:41 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:24:46.739 20:22:41 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:24:46.739 20:22:41 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:24:58.934 20:22:53 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:24:58.934 20:22:53 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:24:58.934 20:22:53 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:24:58.934 20:22:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:24:58.934 20:22:53 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:24:58.934 20:22:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:24:58.934 20:22:53 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:58.934 20:22:53 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:24:58.934 20:22:53 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:58.934 20:22:53 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:24:58.934 20:22:53 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:24:58.934 20:22:53 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:24:58.934 20:22:53 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:24:58.934 20:22:53 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:24:58.934 20:22:53 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:24:58.934 20:22:53 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:24:58.934 20:22:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:24:58.934 20:22:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:24:58.934 20:22:53 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:24:58.934 20:22:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:24:58.934 20:22:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:24:58.934 20:22:53 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:58.934 20:22:53 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:24:58.934 20:22:53 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:58.934 20:22:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:24:58.934 20:22:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:24:58.934 [2024-10-01 20:22:53.892288] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
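Lines sw_hotplug.sh@56-62 above echo a driver name, the BDF (twice), and an empty string for each device, but xtrace captures only the echoed values, not the redirections. The sysfs destinations in this sketch are therefore assumptions based on the usual driver_override rebind sequence:

    # Hypothetical re-attach step; echoed values match the trace, every
    # sysfs path below is an assumption (xtrace hides the > targets).
    echo 1 > /sys/bus/pci/rescan                   # sh@56, assumed target
    for dev in "${nvmes[@]}"; do
        echo uio_pci_generic > "/sys/bus/pci/devices/$dev/driver_override"
        echo "$dev" > /sys/bus/pci/drivers_probe   # one of the two BDF writes
        echo '' > "/sys/bus/pci/devices/$dev/driver_override"  # clear override
    done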
00:24:58.934 [2024-10-01 20:22:53.893802] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:24:58.934 [2024-10-01 20:22:53.893849] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:24:58.934 [2024-10-01 20:22:53.893866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:58.934 [2024-10-01 20:22:53.893891] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:24:58.935 [2024-10-01 20:22:53.893903] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:24:58.935 [2024-10-01 20:22:53.893918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:58.935 [2024-10-01 20:22:53.893930] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:24:58.935 [2024-10-01 20:22:53.893943] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:24:58.935 [2024-10-01 20:22:53.893954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:58.935 [2024-10-01 20:22:53.893969] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:24:58.935 [2024-10-01 20:22:53.893980] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:24:58.935 [2024-10-01 20:22:53.893992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.193 20:22:54 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:24:59.193 20:22:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:24:59.193 20:22:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:24:59.193 [2024-10-01 20:22:54.392295] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
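The "Still waiting for ... to be gone" spin seen after every detach is a simple poll of bdev_bdfs. A minimal sketch consistent with the xtrace at sw_hotplug.sh@50-51 (the exact statement layout is assumed):

    # Poll until the detached controllers' bdevs disappear from the target.
    while bdfs=($(bdev_bdfs)) && ((${#bdfs[@]} > 0)); do
        printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
        sleep 0.5
    done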
00:24:59.193 20:22:54 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:24:59.193 20:22:54 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:24:59.193 20:22:54 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:59.193 20:22:54 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:24:59.193 20:22:54 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:24:59.193 [2024-10-01 20:22:54.393878] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:24:59.193 [2024-10-01 20:22:54.393914] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:24:59.193 [2024-10-01 20:22:54.393933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.193 [2024-10-01 20:22:54.393958] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:24:59.193 [2024-10-01 20:22:54.393973] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:24:59.193 [2024-10-01 20:22:54.393985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.193 [2024-10-01 20:22:54.393999] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:24:59.193 [2024-10-01 20:22:54.394011] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:24:59.193 [2024-10-01 20:22:54.394027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.193 [2024-10-01 20:22:54.394045] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:24:59.193 [2024-10-01 20:22:54.394062] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:24:59.193 [2024-10-01 20:22:54.394073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.451 20:22:54 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:59.451 20:22:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:24:59.451 20:22:54 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:24:59.451 20:22:54 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:24:59.451 20:22:54 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:24:59.451 20:22:54 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:24:59.451 20:22:54 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:24:59.451 20:22:54 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:24:59.451 20:22:54 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:24:59.451 20:22:54 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:24:59.451 20:22:54 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:24:59.710 20:22:54 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:24:59.710 20:22:54 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:24:59.710 20:22:54 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:25:11.904 20:23:06 sw_hotplug -- nvme/sw_hotplug.sh@68 
-- # true 00:25:11.904 20:23:06 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:25:11.904 20:23:06 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:25:11.904 20:23:06 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:25:11.904 20:23:06 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:25:11.904 20:23:06 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:11.904 20:23:06 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:25:11.904 20:23:06 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:25:11.904 20:23:06 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:11.904 20:23:06 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:25:11.904 20:23:06 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:25:11.904 20:23:06 sw_hotplug -- common/autotest_common.sh@717 -- # time=45.22 00:25:11.904 20:23:06 sw_hotplug -- common/autotest_common.sh@718 -- # echo 45.22 00:25:11.904 20:23:06 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:25:11.904 20:23:06 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.22 00:25:11.904 20:23:06 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.22 2 00:25:11.904 remove_attach_helper took 45.22s to complete (handling 2 nvme drive(s)) 20:23:06 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:25:11.904 20:23:06 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:11.904 20:23:06 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:25:11.904 20:23:06 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:11.904 20:23:06 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:25:11.904 20:23:06 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:11.904 20:23:06 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:25:11.904 20:23:06 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:11.904 20:23:06 sw_hotplug -- nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true 00:25:11.904 20:23:06 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:25:11.904 20:23:06 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:25:11.904 20:23:06 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:25:11.904 20:23:06 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:25:11.904 20:23:06 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:25:11.904 20:23:06 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:25:11.904 20:23:06 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 true 00:25:11.904 20:23:06 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:25:11.904 20:23:06 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:25:11.904 20:23:06 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:25:11.904 20:23:06 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:25:11.904 20:23:06 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:25:18.461 20:23:12 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:25:18.461 20:23:12 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:25:18.461 20:23:12 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:25:18.461 
20:23:12 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:25:18.461 20:23:12 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:25:18.461 20:23:12 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:25:18.461 20:23:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:25:18.461 20:23:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:25:18.461 20:23:12 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:25:18.461 20:23:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:25:18.461 20:23:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:25:18.461 20:23:12 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:18.461 20:23:12 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:25:18.461 20:23:12 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:18.461 20:23:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:25:18.461 20:23:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:25:18.461 [2024-10-01 20:23:12.842480] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:25:18.461 [2024-10-01 20:23:12.843738] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:25:18.461 [2024-10-01 20:23:12.843772] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:25:18.461 [2024-10-01 20:23:12.843783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.461 [2024-10-01 20:23:12.843801] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:25:18.461 [2024-10-01 20:23:12.843808] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:25:18.461 [2024-10-01 20:23:12.843818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.461 [2024-10-01 20:23:12.843826] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:25:18.461 [2024-10-01 20:23:12.843834] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:25:18.461 [2024-10-01 20:23:12.843840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.461 [2024-10-01 20:23:12.843849] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:25:18.461 [2024-10-01 20:23:12.843856] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:25:18.461 [2024-10-01 20:23:12.843866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.461 20:23:13 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:25:18.461 20:23:13 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:25:18.461 20:23:13 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:25:18.461 20:23:13 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:25:18.461 20:23:13 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 
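Before this second round of events the script toggled the bdev_nvme hotplug monitor over RPC (the -d/-e pair at sw_hotplug.sh@119-120 a few records back). Outside the harness the same two calls look like this; the rpc.py path is an assumption:

    # Disable, then re-enable, bdev_nvme's hotplug monitor (path assumed).
    scripts/rpc.py bdev_nvme_set_hotplug -d
    scripts/rpc.py bdev_nvme_set_hotplug -e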
00:25:18.461 20:23:13 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:18.461 20:23:13 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:25:18.461 20:23:13 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:25:18.461 [2024-10-01 20:23:13.342493] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 00:25:18.461 [2024-10-01 20:23:13.343859] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:25:18.461 [2024-10-01 20:23:13.343898] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:25:18.461 [2024-10-01 20:23:13.343911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.461 [2024-10-01 20:23:13.343927] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:25:18.461 [2024-10-01 20:23:13.343936] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:25:18.461 [2024-10-01 20:23:13.343943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.461 [2024-10-01 20:23:13.343953] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:25:18.461 [2024-10-01 20:23:13.343960] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:25:18.461 [2024-10-01 20:23:13.343969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.461 [2024-10-01 20:23:13.343976] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:25:18.461 [2024-10-01 20:23:13.343984] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:25:18.461 [2024-10-01 20:23:13.343990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.461 20:23:13 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:18.461 20:23:13 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:25:18.461 20:23:13 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:25:18.461 20:23:13 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:25:18.461 20:23:13 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:25:18.461 20:23:13 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:25:18.461 20:23:13 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:25:18.461 20:23:13 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:25:18.461 20:23:13 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:25:18.461 20:23:13 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:25:18.461 20:23:13 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:25:18.461 20:23:13 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:25:18.461 20:23:13 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:25:18.461 20:23:13 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:25:30.662 20:23:25 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:25:30.662 20:23:25 
sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:25:30.662 20:23:25 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:25:30.662 20:23:25 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:25:30.662 20:23:25 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:25:30.662 20:23:25 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:25:30.662 20:23:25 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.662 20:23:25 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:25:30.662 20:23:25 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.662 20:23:25 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:25:30.662 20:23:25 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:25:30.662 20:23:25 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:25:30.662 20:23:25 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:25:30.662 20:23:25 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:25:30.662 20:23:25 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:25:30.662 20:23:25 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:25:30.662 20:23:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:25:30.662 20:23:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:25:30.662 20:23:25 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:25:30.662 20:23:25 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:25:30.662 20:23:25 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:25:30.662 20:23:25 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.662 20:23:25 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:25:30.662 20:23:25 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.662 20:23:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:25:30.662 20:23:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:25:30.662 [2024-10-01 20:23:25.742761] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
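Taken together, the locals at sw_hotplug.sh@27-29 and the decrement at @38 give the overall shape of remove_attach_helper. A skeleton with the bodies elided (they are the detach/wait/re-attach steps traced above; the sleep-12-from-wait-6 relation is an assumption):

    # Skeleton only; argument meanings read off this run's xtrace.
    remove_attach_helper() {
        local hotplug_events=$1   # 3 in this run
        local hotplug_wait=$2     # 6 in this run
        local use_bdev=$3         # true: poll via rpc_cmd bdev_get_bdevs
        local dev bdfs
        while ((hotplug_events--)); do
            :   # detach both controllers, wait for their bdevs to vanish,
                # re-attach, sleep $((hotplug_wait * 2)) (assumed), then
                # verify both BDFs are reported again
        done
    }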
00:25:30.662 [2024-10-01 20:23:25.743857] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:25:30.662 [2024-10-01 20:23:25.743902] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:25:30.662 [2024-10-01 20:23:25.743913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.662 [2024-10-01 20:23:25.743932] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:25:30.662 [2024-10-01 20:23:25.743940] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:25:30.662 [2024-10-01 20:23:25.743949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.662 [2024-10-01 20:23:25.743957] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:25:30.662 [2024-10-01 20:23:25.743965] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:25:30.662 [2024-10-01 20:23:25.743972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.662 [2024-10-01 20:23:25.743981] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:25:30.662 [2024-10-01 20:23:25.743987] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:25:30.662 [2024-10-01 20:23:25.743996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.228 [2024-10-01 20:23:26.142776] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
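For reference, the jq filter used throughout, '.[].driver_specific.nvme[].pci_address', digs the BDF out of bdev_get_bdevs output shaped like this (sample document trimmed to the one relevant field; real output carries many more):

    # Minimal sample of the JSON the filter walks:
    echo '[{"driver_specific":{"nvme":[{"pci_address":"0000:00:10.0"}]}}]' \
        | jq -r '.[].driver_specific.nvme[].pci_address'
    # prints: 0000:00:10.0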
00:25:31.228 [2024-10-01 20:23:26.143863] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:25:31.228 [2024-10-01 20:23:26.143896] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:25:31.228 [2024-10-01 20:23:26.143908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.228 [2024-10-01 20:23:26.143924] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:25:31.228 [2024-10-01 20:23:26.143935] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:25:31.228 [2024-10-01 20:23:26.143943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.228 [2024-10-01 20:23:26.143953] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:25:31.228 [2024-10-01 20:23:26.143960] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:25:31.228 [2024-10-01 20:23:26.143969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.228 [2024-10-01 20:23:26.143976] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:25:31.228 [2024-10-01 20:23:26.143984] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:25:31.228 [2024-10-01 20:23:26.143991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.228 20:23:26 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:25:31.228 20:23:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:25:31.228 20:23:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:25:31.228 20:23:26 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:25:31.228 20:23:26 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:25:31.228 20:23:26 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:25:31.228 20:23:26 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:31.228 20:23:26 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:25:31.228 20:23:26 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:31.228 20:23:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:25:31.228 20:23:26 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:25:31.228 20:23:26 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:25:31.228 20:23:26 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:25:31.228 20:23:26 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:25:31.228 20:23:26 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:25:31.228 20:23:26 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:25:31.228 20:23:26 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:25:31.228 20:23:26 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:25:31.228 20:23:26 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:25:31.486 20:23:26 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:25:31.486 20:23:26 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:25:31.486 20:23:26 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:25:43.694 20:23:38 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:25:43.694 20:23:38 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:25:43.694 20:23:38 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:25:43.694 20:23:38 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:25:43.694 20:23:38 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:25:43.694 20:23:38 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:25:43.694 20:23:38 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:43.694 20:23:38 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:25:43.694 20:23:38 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:43.694 20:23:38 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:25:43.694 20:23:38 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:25:43.694 20:23:38 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:25:43.694 20:23:38 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:25:43.694 [2024-10-01 20:23:38.542997] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:25:43.694 [2024-10-01 20:23:38.544205] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:25:43.695 [2024-10-01 20:23:38.544240] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:25:43.695 [2024-10-01 20:23:38.544252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.695 [2024-10-01 20:23:38.544271] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:25:43.695 [2024-10-01 20:23:38.544279] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:25:43.695 [2024-10-01 20:23:38.544287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.695 [2024-10-01 20:23:38.544295] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:25:43.695 [2024-10-01 20:23:38.544307] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:25:43.695 [2024-10-01 20:23:38.544313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.695 [2024-10-01 20:23:38.544322] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:25:43.695 [2024-10-01 20:23:38.544329] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:25:43.695 [2024-10-01 20:23:38.544339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.695 20:23:38 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:25:43.695 20:23:38 
sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:25:43.695 20:23:38 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:25:43.695 20:23:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:25:43.695 20:23:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:25:43.695 20:23:38 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:25:43.695 20:23:38 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:25:43.695 20:23:38 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:25:43.695 20:23:38 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:43.695 20:23:38 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:25:43.695 20:23:38 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:43.695 20:23:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:25:43.695 20:23:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:25:43.953 [2024-10-01 20:23:39.043013] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 00:25:43.953 [2024-10-01 20:23:39.044141] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:25:43.953 [2024-10-01 20:23:39.044178] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:25:43.953 [2024-10-01 20:23:39.044191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.953 [2024-10-01 20:23:39.044207] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:25:43.953 [2024-10-01 20:23:39.044217] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:25:43.953 [2024-10-01 20:23:39.044224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.953 [2024-10-01 20:23:39.044233] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:25:43.953 [2024-10-01 20:23:39.044240] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:25:43.953 [2024-10-01 20:23:39.044249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.953 [2024-10-01 20:23:39.044257] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:25:43.953 [2024-10-01 20:23:39.044268] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:25:43.953 [2024-10-01 20:23:39.044274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.953 20:23:39 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:25:43.953 20:23:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:25:43.953 20:23:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:25:43.953 20:23:39 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:25:43.953 20:23:39 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:25:43.953 20:23:39 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd 
bdev_get_bdevs 00:25:43.953 20:23:39 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:43.953 20:23:39 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:25:43.953 20:23:39 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:43.953 20:23:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:25:43.953 20:23:39 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:25:44.211 20:23:39 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:25:44.211 20:23:39 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:25:44.211 20:23:39 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:25:44.211 20:23:39 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:25:44.211 20:23:39 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:25:44.211 20:23:39 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:25:44.211 20:23:39 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:25:44.211 20:23:39 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:25:44.211 20:23:39 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:25:44.211 20:23:39 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:25:44.211 20:23:39 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:25:56.403 20:23:51 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:25:56.403 20:23:51 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:25:56.403 20:23:51 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:25:56.403 20:23:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:25:56.403 20:23:51 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:25:56.403 20:23:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:25:56.403 20:23:51 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:56.403 20:23:51 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:25:56.403 20:23:51 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:56.403 20:23:51 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:25:56.403 20:23:51 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:25:56.403 20:23:51 sw_hotplug -- common/autotest_common.sh@717 -- # time=44.68 00:25:56.403 20:23:51 sw_hotplug -- common/autotest_common.sh@718 -- # echo 44.68 00:25:56.403 20:23:51 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:25:56.403 20:23:51 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=44.68 00:25:56.403 20:23:51 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 44.68 2 00:25:56.403 remove_attach_helper took 44.68s to complete (handling 2 nvme drive(s)) 20:23:51 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:25:56.403 20:23:51 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 67925 00:25:56.403 20:23:51 sw_hotplug -- common/autotest_common.sh@950 -- # '[' -z 67925 ']' 00:25:56.403 20:23:51 sw_hotplug -- common/autotest_common.sh@954 -- # kill -0 67925 00:25:56.403 20:23:51 sw_hotplug -- common/autotest_common.sh@955 -- # uname 00:25:56.403 20:23:51 sw_hotplug -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:56.403 20:23:51 sw_hotplug -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 67925 00:25:56.403 20:23:51 sw_hotplug -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:56.403 killing 
process with pid 67925 00:25:56.403 20:23:51 sw_hotplug -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:56.403 20:23:51 sw_hotplug -- common/autotest_common.sh@968 -- # echo 'killing process with pid 67925' 00:25:56.403 20:23:51 sw_hotplug -- common/autotest_common.sh@969 -- # kill 67925 00:25:56.403 20:23:51 sw_hotplug -- common/autotest_common.sh@974 -- # wait 67925 00:25:58.296 20:23:53 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:25:58.296 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:58.858 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:25:58.859 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:25:58.859 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:25:58.859 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:25:58.859 00:25:58.859 real 2m29.756s 00:25:58.859 user 1m52.174s 00:25:58.859 sys 0m16.255s 00:25:58.859 20:23:53 sw_hotplug -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:58.859 20:23:53 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:25:58.859 ************************************ 00:25:58.859 END TEST sw_hotplug 00:25:58.859 ************************************ 00:25:58.859 20:23:53 -- spdk/autotest.sh@243 -- # [[ 1 -eq 1 ]] 00:25:58.859 20:23:53 -- spdk/autotest.sh@244 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:25:58.859 20:23:53 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:25:58.859 20:23:53 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:58.859 20:23:53 -- common/autotest_common.sh@10 -- # set +x 00:25:58.859 ************************************ 00:25:58.859 START TEST nvme_xnvme 00:25:58.859 ************************************ 00:25:58.859 20:23:53 nvme_xnvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:25:58.859 * Looking for test storage... 
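The shutdown sequence above (the -z check, kill -0, uname, the ps comm lookup, the sudo guard, then kill and wait on pid 67925) is autotest_common.sh's killprocess helper. Condensed from the xtrace, with the error paths simplified:

    # Condensed killprocess sketch; guards follow the trace, exact
    # failure handling is simplified.
    killprocess() {
        local pid=$1 process_name
        [[ -n $pid ]] || return 1
        kill -0 "$pid" || return 1                   # process still exists?
        if [[ $(uname) == Linux ]]; then
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        [[ $process_name != sudo ]] || return 1      # never kill a sudo wrapper
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"
    }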
00:25:58.859 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:25:58.859 20:23:54 nvme_xnvme -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:25:58.859 20:23:54 nvme_xnvme -- common/autotest_common.sh@1681 -- # lcov --version 00:25:58.859 20:23:54 nvme_xnvme -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:25:59.116 20:23:54 nvme_xnvme -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:25:59.116 20:23:54 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:59.116 20:23:54 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:59.116 20:23:54 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:59.116 20:23:54 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:25:59.116 20:23:54 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:25:59.116 20:23:54 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:25:59.116 20:23:54 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:25:59.116 20:23:54 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:25:59.117 20:23:54 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:25:59.117 20:23:54 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:25:59.117 20:23:54 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:59.117 20:23:54 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:25:59.117 20:23:54 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:25:59.117 20:23:54 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:59.117 20:23:54 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:59.117 20:23:54 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:25:59.117 20:23:54 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:25:59.117 20:23:54 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:59.117 20:23:54 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:25:59.117 20:23:54 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:25:59.117 20:23:54 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:25:59.117 20:23:54 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:25:59.117 20:23:54 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:59.117 20:23:54 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:25:59.117 20:23:54 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:25:59.117 20:23:54 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:59.117 20:23:54 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:59.117 20:23:54 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:25:59.117 20:23:54 nvme_xnvme -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:59.117 20:23:54 nvme_xnvme -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:25:59.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:59.117 --rc genhtml_branch_coverage=1 00:25:59.117 --rc genhtml_function_coverage=1 00:25:59.117 --rc genhtml_legend=1 00:25:59.117 --rc geninfo_all_blocks=1 00:25:59.117 --rc geninfo_unexecuted_blocks=1 00:25:59.117 00:25:59.117 ' 00:25:59.117 20:23:54 nvme_xnvme -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:25:59.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:59.117 --rc genhtml_branch_coverage=1 00:25:59.117 --rc genhtml_function_coverage=1 00:25:59.117 --rc genhtml_legend=1 00:25:59.117 --rc geninfo_all_blocks=1 00:25:59.117 --rc geninfo_unexecuted_blocks=1 00:25:59.117 00:25:59.117 ' 00:25:59.117 20:23:54 
nvme_xnvme -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:25:59.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:59.117 --rc genhtml_branch_coverage=1 00:25:59.117 --rc genhtml_function_coverage=1 00:25:59.117 --rc genhtml_legend=1 00:25:59.117 --rc geninfo_all_blocks=1 00:25:59.117 --rc geninfo_unexecuted_blocks=1 00:25:59.117 00:25:59.117 ' 00:25:59.117 20:23:54 nvme_xnvme -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:25:59.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:59.117 --rc genhtml_branch_coverage=1 00:25:59.117 --rc genhtml_function_coverage=1 00:25:59.117 --rc genhtml_legend=1 00:25:59.117 --rc geninfo_all_blocks=1 00:25:59.117 --rc geninfo_unexecuted_blocks=1 00:25:59.117 00:25:59.117 ' 00:25:59.117 20:23:54 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:59.117 20:23:54 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:25:59.117 20:23:54 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:59.117 20:23:54 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:59.117 20:23:54 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:59.117 20:23:54 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:59.117 20:23:54 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:59.117 20:23:54 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:59.117 20:23:54 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:25:59.117 20:23:54 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:59.117 20:23:54 nvme_xnvme -- xnvme/xnvme.sh@85 -- # run_test xnvme_to_malloc_dd_copy malloc_to_xnvme_copy 00:25:59.117 20:23:54 nvme_xnvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:25:59.117 20:23:54 nvme_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:59.117 20:23:54 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:25:59.117 
************************************ 00:25:59.117 START TEST xnvme_to_malloc_dd_copy 00:25:59.117 ************************************ 00:25:59.117 20:23:54 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1125 -- # malloc_to_xnvme_copy 00:25:59.117 20:23:54 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@14 -- # init_null_blk gb=1 00:25:59.117 20:23:54 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@186 -- # [[ -e /sys/module/null_blk ]] 00:25:59.117 20:23:54 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@186 -- # modprobe null_blk gb=1 00:25:59.117 20:23:54 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@187 -- # return 00:25:59.117 20:23:54 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@16 -- # local mbdev0=malloc0 mbdev0_bs=512 00:25:59.117 20:23:54 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # xnvme_io=() 00:25:59.117 20:23:54 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:25:59.117 20:23:54 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@18 -- # local io 00:25:59.117 20:23:54 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@20 -- # xnvme_io+=(libaio) 00:25:59.117 20:23:54 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@21 -- # xnvme_io+=(io_uring) 00:25:59.117 20:23:54 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@25 -- # mbdev0_b=2097152 00:25:59.117 20:23:54 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@26 -- # xnvme0_dev=/dev/nullb0 00:25:59.117 20:23:54 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='2097152' ['block_size']='512') 00:25:59.117 20:23:54 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # local -A method_bdev_malloc_create_0 00:25:59.117 20:23:54 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # method_bdev_xnvme_create_0=() 00:25:59.117 20:23:54 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # local -A method_bdev_xnvme_create_0 00:25:59.117 20:23:54 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@35 -- # method_bdev_xnvme_create_0["name"]=null0 00:25:59.117 20:23:54 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@36 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:25:59.117 20:23:54 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:25:59.117 20:23:54 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:25:59.117 20:23:54 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:25:59.117 20:23:54 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:25:59.117 20:23:54 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:25:59.117 20:23:54 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:25:59.117 { 00:25:59.117 "subsystems": [ 00:25:59.117 { 00:25:59.117 "subsystem": "bdev", 00:25:59.117 "config": [ 00:25:59.117 { 00:25:59.117 "params": { 00:25:59.117 "block_size": 512, 00:25:59.117 "num_blocks": 2097152, 00:25:59.117 "name": "malloc0" 00:25:59.117 }, 00:25:59.117 "method": "bdev_malloc_create" 00:25:59.117 }, 00:25:59.117 { 00:25:59.117 "params": { 00:25:59.117 "io_mechanism": "libaio", 00:25:59.117 "filename": "/dev/nullb0", 00:25:59.117 "name": "null0" 00:25:59.117 }, 00:25:59.117 "method": "bdev_xnvme_create" 00:25:59.117 }, 
00:25:59.117 { 00:25:59.117 "method": "bdev_wait_for_examine" 00:25:59.117 } 00:25:59.117 ] 00:25:59.117 } 00:25:59.117 ] 00:25:59.117 } 00:25:59.117 [2024-10-01 20:23:54.231460] Starting SPDK v25.01-pre git sha1 0c2005fb5 / DPDK 24.03.0 initialization... 00:25:59.117 [2024-10-01 20:23:54.231587] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69300 ] 00:25:59.375 [2024-10-01 20:23:54.381755] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:59.375 [2024-10-01 20:23:54.581551] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:26:07.401  Copying: 223/1024 [MB] (223 MBps) Copying: 446/1024 [MB] (222 MBps) Copying: 670/1024 [MB] (223 MBps) Copying: 944/1024 [MB] (274 MBps) Copying: 1024/1024 [MB] (average 238 MBps) 00:26:07.401 00:26:07.401 20:24:02 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:26:07.401 20:24:02 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:26:07.401 20:24:02 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:26:07.401 20:24:02 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:26:07.401 { 00:26:07.401 "subsystems": [ 00:26:07.401 { 00:26:07.401 "subsystem": "bdev", 00:26:07.401 "config": [ 00:26:07.401 { 00:26:07.401 "params": { 00:26:07.401 "block_size": 512, 00:26:07.401 "num_blocks": 2097152, 00:26:07.401 "name": "malloc0" 00:26:07.401 }, 00:26:07.401 "method": "bdev_malloc_create" 00:26:07.401 }, 00:26:07.401 { 00:26:07.401 "params": { 00:26:07.401 "io_mechanism": "libaio", 00:26:07.401 "filename": "/dev/nullb0", 00:26:07.401 "name": "null0" 00:26:07.401 }, 00:26:07.401 "method": "bdev_xnvme_create" 00:26:07.401 }, 00:26:07.401 { 00:26:07.401 "method": "bdev_wait_for_examine" 00:26:07.401 } 00:26:07.401 ] 00:26:07.401 } 00:26:07.401 ] 00:26:07.401 } 00:26:07.401 [2024-10-01 20:24:02.524298] Starting SPDK v25.01-pre git sha1 0c2005fb5 / DPDK 24.03.0 initialization... 
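The run starting here reverses the copy direction: null0 becomes the input and malloc0 the output, with the bdev config unchanged. With the sketch file from above, the stand-alone equivalent would be:

# Same config as the forward pass; only the --ib/--ob roles swap.
./build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /tmp/xnvme_copy.json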
00:26:07.401 [2024-10-01 20:24:02.524406] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69400 ] 00:26:07.659 [2024-10-01 20:24:02.672568] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:07.659 [2024-10-01 20:24:02.861893] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:26:15.912  Copying: 224/1024 [MB] (224 MBps) Copying: 449/1024 [MB] (225 MBps) Copying: 674/1024 [MB] (224 MBps) Copying: 903/1024 [MB] (229 MBps) Copying: 1024/1024 [MB] (average 231 MBps) 00:26:15.912 00:26:15.912 20:24:10 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:26:15.912 20:24:10 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:26:15.912 20:24:10 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:26:15.912 20:24:10 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:26:15.912 20:24:10 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:26:15.912 20:24:10 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:26:15.912 { 00:26:15.912 "subsystems": [ 00:26:15.912 { 00:26:15.912 "subsystem": "bdev", 00:26:15.912 "config": [ 00:26:15.912 { 00:26:15.912 "params": { 00:26:15.912 "block_size": 512, 00:26:15.912 "num_blocks": 2097152, 00:26:15.912 "name": "malloc0" 00:26:15.912 }, 00:26:15.912 "method": "bdev_malloc_create" 00:26:15.912 }, 00:26:15.912 { 00:26:15.912 "params": { 00:26:15.912 "io_mechanism": "io_uring", 00:26:15.912 "filename": "/dev/nullb0", 00:26:15.912 "name": "null0" 00:26:15.912 }, 00:26:15.912 "method": "bdev_xnvme_create" 00:26:15.912 }, 00:26:15.912 { 00:26:15.912 "method": "bdev_wait_for_examine" 00:26:15.912 } 00:26:15.912 ] 00:26:15.912 } 00:26:15.912 ] 00:26:15.912 } 00:26:15.912 [2024-10-01 20:24:11.013915] Starting SPDK v25.01-pre git sha1 0c2005fb5 / DPDK 24.03.0 initialization... 
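Both copy directions are now repeated with io_mechanism switched from libaio to io_uring; that one field is the only change in the bdev_xnvme_create parameters. Over RPC the same bdev could be created with the positional form (filename, name, io_mechanism) that the blockdev suite later in this log feeds to rpc_cmd:

# Sketch; assumes a running SPDK app listening on the default RPC socket.
./scripts/rpc.py bdev_xnvme_create /dev/nullb0 null0 io_uring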
00:26:15.912 [2024-10-01 20:24:11.014040] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69498 ] 00:26:16.169 [2024-10-01 20:24:11.160560] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:16.427 [2024-10-01 20:24:11.395364] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:26:24.529  Copying: 229/1024 [MB] (229 MBps) Copying: 457/1024 [MB] (228 MBps) Copying: 687/1024 [MB] (229 MBps) Copying: 926/1024 [MB] (238 MBps) Copying: 1024/1024 [MB] (average 235 MBps) 00:26:24.529 00:26:24.529 20:24:19 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:26:24.529 20:24:19 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:26:24.529 20:24:19 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:26:24.529 20:24:19 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:26:24.529 { 00:26:24.529 "subsystems": [ 00:26:24.529 { 00:26:24.529 "subsystem": "bdev", 00:26:24.529 "config": [ 00:26:24.529 { 00:26:24.529 "params": { 00:26:24.529 "block_size": 512, 00:26:24.529 "num_blocks": 2097152, 00:26:24.529 "name": "malloc0" 00:26:24.529 }, 00:26:24.529 "method": "bdev_malloc_create" 00:26:24.529 }, 00:26:24.529 { 00:26:24.529 "params": { 00:26:24.529 "io_mechanism": "io_uring", 00:26:24.529 "filename": "/dev/nullb0", 00:26:24.529 "name": "null0" 00:26:24.529 }, 00:26:24.529 "method": "bdev_xnvme_create" 00:26:24.529 }, 00:26:24.529 { 00:26:24.529 "method": "bdev_wait_for_examine" 00:26:24.529 } 00:26:24.529 ] 00:26:24.529 } 00:26:24.529 ] 00:26:24.529 } 00:26:24.529 [2024-10-01 20:24:19.474485] Starting SPDK v25.01-pre git sha1 0c2005fb5 / DPDK 24.03.0 initialization... 
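For scale: each pass moves 1024 MiB, and at the 231 to 295 MBps averages reported across the four passes a single pass takes roughly 3.5 to 4.4 s, which lines up with the ~32 s wall time printed at the end of this test once four separate spdk_dd start-ups and DPDK initializations are added in.

# Back-of-envelope pass duration at the 235 MBps average (bc assumed available):
echo "scale=2; 1024 / 235" | bc    # ~4.36 s per 1024 MiB pass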
00:26:24.529 [2024-10-01 20:24:19.474605] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69596 ] 00:26:24.529 [2024-10-01 20:24:19.613792] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:24.788 [2024-10-01 20:24:19.771457] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:26:31.717  Copying: 297/1024 [MB] (297 MBps) Copying: 592/1024 [MB] (295 MBps) Copying: 887/1024 [MB] (294 MBps) Copying: 1024/1024 [MB] (average 295 MBps) 00:26:31.717 00:26:31.717 20:24:26 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@52 -- # remove_null_blk 00:26:31.717 20:24:26 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@191 -- # modprobe -r null_blk 00:26:31.717 00:26:31.717 real 0m32.514s 00:26:31.717 user 0m28.809s 00:26:31.717 sys 0m3.127s 00:26:31.717 20:24:26 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:31.717 20:24:26 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:26:31.717 ************************************ 00:26:31.717 END TEST xnvme_to_malloc_dd_copy 00:26:31.717 ************************************ 00:26:31.717 20:24:26 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:26:31.717 20:24:26 nvme_xnvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:26:31.717 20:24:26 nvme_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:31.717 20:24:26 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:26:31.717 ************************************ 00:26:31.717 START TEST xnvme_bdevperf 00:26:31.717 ************************************ 00:26:31.717 20:24:26 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1125 -- # xnvme_bdevperf 00:26:31.717 20:24:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@57 -- # init_null_blk gb=1 00:26:31.717 20:24:26 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@186 -- # [[ -e /sys/module/null_blk ]] 00:26:31.717 20:24:26 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@186 -- # modprobe null_blk gb=1 00:26:31.717 20:24:26 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@187 -- # return 00:26:31.717 20:24:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # xnvme_io=() 00:26:31.717 20:24:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:26:31.717 20:24:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@60 -- # local io 00:26:31.717 20:24:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@62 -- # xnvme_io+=(libaio) 00:26:31.717 20:24:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@63 -- # xnvme_io+=(io_uring) 00:26:31.717 20:24:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@65 -- # xnvme0_dev=/dev/nullb0 00:26:31.717 20:24:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # method_bdev_xnvme_create_0=() 00:26:31.717 20:24:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # local -A method_bdev_xnvme_create_0 00:26:31.717 20:24:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@68 -- # method_bdev_xnvme_create_0["name"]=null0 00:26:31.717 20:24:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@69 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:26:31.717 20:24:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:26:31.717 20:24:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # 
method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:26:31.717 20:24:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:26:31.717 20:24:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:26:31.717 20:24:26 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:26:31.717 20:24:26 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:31.717 { 00:26:31.717 "subsystems": [ 00:26:31.717 { 00:26:31.717 "subsystem": "bdev", 00:26:31.717 "config": [ 00:26:31.717 { 00:26:31.718 "params": { 00:26:31.718 "io_mechanism": "libaio", 00:26:31.718 "filename": "/dev/nullb0", 00:26:31.718 "name": "null0" 00:26:31.718 }, 00:26:31.718 "method": "bdev_xnvme_create" 00:26:31.718 }, 00:26:31.718 { 00:26:31.718 "method": "bdev_wait_for_examine" 00:26:31.718 } 00:26:31.718 ] 00:26:31.718 } 00:26:31.718 ] 00:26:31.718 } 00:26:31.718 [2024-10-01 20:24:26.774375] Starting SPDK v25.01-pre git sha1 0c2005fb5 / DPDK 24.03.0 initialization... 00:26:31.718 [2024-10-01 20:24:26.774474] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69706 ] 00:26:31.718 [2024-10-01 20:24:26.921136] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:31.975 [2024-10-01 20:24:27.087047] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:26:32.540 Running I/O for 5 seconds... 00:26:37.653 185280.00 IOPS, 723.75 MiB/s 188416.00 IOPS, 736.00 MiB/s 188266.67 IOPS, 735.42 MiB/s 188384.00 IOPS, 735.88 MiB/s 00:26:37.653 Latency(us) 00:26:37.653 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:37.653 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:26:37.653 null0 : 5.00 188217.23 735.22 0.00 0.00 337.54 131.54 1625.80 00:26:37.653 =================================================================================================================== 00:26:37.653 Total : 188217.23 735.22 0.00 0.00 337.54 131.54 1625.80 00:26:38.588 20:24:33 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:26:38.588 20:24:33 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:26:38.588 20:24:33 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:26:38.588 20:24:33 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:26:38.588 20:24:33 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:26:38.588 20:24:33 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:38.588 { 00:26:38.588 "subsystems": [ 00:26:38.588 { 00:26:38.588 "subsystem": "bdev", 00:26:38.588 "config": [ 00:26:38.588 { 00:26:38.588 "params": { 00:26:38.588 "io_mechanism": "io_uring", 00:26:38.588 "filename": "/dev/nullb0", 00:26:38.588 "name": "null0" 00:26:38.588 }, 00:26:38.588 "method": "bdev_xnvme_create" 00:26:38.588 }, 00:26:38.588 { 00:26:38.588 "method": "bdev_wait_for_examine" 00:26:38.588 } 00:26:38.588 ] 00:26:38.588 } 00:26:38.588 ] 00:26:38.588 } 00:26:38.588 [2024-10-01 20:24:33.556493] Starting SPDK v25.01-pre git sha1 0c2005fb5 / DPDK 24.03.0 initialization... 
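Decoding the bdevperf flags used for both backends: -q 64 sets the queue depth, -o 4096 the I/O size in bytes, -w randread the workload, -t 5 the run time in seconds, and -T null0 pins the run to that one bdev. A stand-alone equivalent, assuming the single-entry bdev_xnvme_create JSON shown above is saved to a file instead of streamed over /dev/fd/62:

# Sketch; the /tmp path is an assumption, the flags are copied from the trace.
./build/examples/bdevperf --json /tmp/xnvme_null0.json -q 64 -w randread -t 5 -T null0 -o 4096

For comparison, the libaio pass above settled at ~188 K IOPS (735 MiB/s); the io_uring pass that follows reports ~214 K IOPS (835 MiB/s).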
00:26:38.588 [2024-10-01 20:24:33.556618] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69786 ] 00:26:38.588 [2024-10-01 20:24:33.705733] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:38.844 [2024-10-01 20:24:33.855364] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:26:39.102 Running I/O for 5 seconds... 00:26:44.207 215104.00 IOPS, 840.25 MiB/s 214560.00 IOPS, 838.12 MiB/s 213909.33 IOPS, 835.58 MiB/s 213648.00 IOPS, 834.56 MiB/s 00:26:44.207 Latency(us) 00:26:44.207 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:44.207 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:26:44.207 null0 : 5.00 213645.45 834.55 0.00 0.00 297.06 167.78 1638.40 00:26:44.207 =================================================================================================================== 00:26:44.207 Total : 213645.45 834.55 0.00 0.00 297.06 167.78 1638.40 00:26:45.140 20:24:40 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@82 -- # remove_null_blk 00:26:45.140 20:24:40 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@191 -- # modprobe -r null_blk 00:26:45.140 00:26:45.140 real 0m13.544s 00:26:45.140 user 0m11.012s 00:26:45.140 sys 0m2.271s 00:26:45.140 ************************************ 00:26:45.140 END TEST xnvme_bdevperf 00:26:45.140 ************************************ 00:26:45.140 20:24:40 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:45.140 20:24:40 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:45.140 00:26:45.140 real 0m46.280s 00:26:45.140 user 0m39.932s 00:26:45.140 sys 0m5.513s 00:26:45.140 20:24:40 nvme_xnvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:45.140 20:24:40 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:26:45.140 ************************************ 00:26:45.140 END TEST nvme_xnvme 00:26:45.140 ************************************ 00:26:45.140 20:24:40 -- spdk/autotest.sh@245 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:26:45.140 20:24:40 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:26:45.141 20:24:40 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:45.141 20:24:40 -- common/autotest_common.sh@10 -- # set +x 00:26:45.141 ************************************ 00:26:45.141 START TEST blockdev_xnvme 00:26:45.141 ************************************ 00:26:45.141 20:24:40 blockdev_xnvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:26:45.400 * Looking for test storage... 
00:26:45.400 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:26:45.400 20:24:40 blockdev_xnvme -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:26:45.400 20:24:40 blockdev_xnvme -- common/autotest_common.sh@1681 -- # lcov --version 00:26:45.400 20:24:40 blockdev_xnvme -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:26:45.400 20:24:40 blockdev_xnvme -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:26:45.400 20:24:40 blockdev_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:45.400 20:24:40 blockdev_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:45.400 20:24:40 blockdev_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:45.400 20:24:40 blockdev_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:26:45.400 20:24:40 blockdev_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:26:45.400 20:24:40 blockdev_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:26:45.400 20:24:40 blockdev_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:26:45.400 20:24:40 blockdev_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:26:45.400 20:24:40 blockdev_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:26:45.400 20:24:40 blockdev_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:26:45.400 20:24:40 blockdev_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:45.400 20:24:40 blockdev_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:26:45.400 20:24:40 blockdev_xnvme -- scripts/common.sh@345 -- # : 1 00:26:45.400 20:24:40 blockdev_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:45.400 20:24:40 blockdev_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:45.400 20:24:40 blockdev_xnvme -- scripts/common.sh@365 -- # decimal 1 00:26:45.400 20:24:40 blockdev_xnvme -- scripts/common.sh@353 -- # local d=1 00:26:45.400 20:24:40 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:45.400 20:24:40 blockdev_xnvme -- scripts/common.sh@355 -- # echo 1 00:26:45.400 20:24:40 blockdev_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:26:45.400 20:24:40 blockdev_xnvme -- scripts/common.sh@366 -- # decimal 2 00:26:45.400 20:24:40 blockdev_xnvme -- scripts/common.sh@353 -- # local d=2 00:26:45.400 20:24:40 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:45.400 20:24:40 blockdev_xnvme -- scripts/common.sh@355 -- # echo 2 00:26:45.400 20:24:40 blockdev_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:26:45.400 20:24:40 blockdev_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:45.400 20:24:40 blockdev_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:45.400 20:24:40 blockdev_xnvme -- scripts/common.sh@368 -- # return 0 00:26:45.400 20:24:40 blockdev_xnvme -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:45.400 20:24:40 blockdev_xnvme -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:26:45.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:45.400 --rc genhtml_branch_coverage=1 00:26:45.400 --rc genhtml_function_coverage=1 00:26:45.400 --rc genhtml_legend=1 00:26:45.400 --rc geninfo_all_blocks=1 00:26:45.400 --rc geninfo_unexecuted_blocks=1 00:26:45.400 00:26:45.400 ' 00:26:45.400 20:24:40 blockdev_xnvme -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:26:45.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:45.400 --rc genhtml_branch_coverage=1 00:26:45.400 --rc genhtml_function_coverage=1 00:26:45.400 --rc genhtml_legend=1 
00:26:45.400 --rc geninfo_all_blocks=1 00:26:45.400 --rc geninfo_unexecuted_blocks=1 00:26:45.400 00:26:45.400 ' 00:26:45.400 20:24:40 blockdev_xnvme -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:26:45.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:45.400 --rc genhtml_branch_coverage=1 00:26:45.400 --rc genhtml_function_coverage=1 00:26:45.400 --rc genhtml_legend=1 00:26:45.400 --rc geninfo_all_blocks=1 00:26:45.400 --rc geninfo_unexecuted_blocks=1 00:26:45.400 00:26:45.400 ' 00:26:45.400 20:24:40 blockdev_xnvme -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:26:45.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:45.400 --rc genhtml_branch_coverage=1 00:26:45.400 --rc genhtml_function_coverage=1 00:26:45.400 --rc genhtml_legend=1 00:26:45.400 --rc geninfo_all_blocks=1 00:26:45.400 --rc geninfo_unexecuted_blocks=1 00:26:45.400 00:26:45.400 ' 00:26:45.400 20:24:40 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:26:45.400 20:24:40 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:26:45.400 20:24:40 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:26:45.400 20:24:40 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:26:45.400 20:24:40 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:26:45.400 20:24:40 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:26:45.400 20:24:40 blockdev_xnvme -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:26:45.400 20:24:40 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:26:45.400 20:24:40 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:26:45.400 20:24:40 blockdev_xnvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:26:45.400 20:24:40 blockdev_xnvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:26:45.400 20:24:40 blockdev_xnvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:26:45.400 20:24:40 blockdev_xnvme -- bdev/blockdev.sh@673 -- # uname -s 00:26:45.400 20:24:40 blockdev_xnvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:26:45.400 20:24:40 blockdev_xnvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:26:45.400 20:24:40 blockdev_xnvme -- bdev/blockdev.sh@681 -- # test_type=xnvme 00:26:45.401 20:24:40 blockdev_xnvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:26:45.401 20:24:40 blockdev_xnvme -- bdev/blockdev.sh@683 -- # dek= 00:26:45.401 20:24:40 blockdev_xnvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:26:45.401 20:24:40 blockdev_xnvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:26:45.401 20:24:40 blockdev_xnvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:26:45.401 20:24:40 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == bdev ]] 00:26:45.401 20:24:40 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == crypto_* ]] 00:26:45.401 20:24:40 blockdev_xnvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:26:45.401 20:24:40 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=69935 00:26:45.401 20:24:40 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:26:45.401 20:24:40 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 69935 00:26:45.401 20:24:40 blockdev_xnvme -- common/autotest_common.sh@831 -- # '[' -z 69935 ']' 00:26:45.401 20:24:40 blockdev_xnvme -- common/autotest_common.sh@835 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:26:45.401 20:24:40 blockdev_xnvme -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:45.401 20:24:40 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:26:45.401 20:24:40 blockdev_xnvme -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:45.401 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:45.401 20:24:40 blockdev_xnvme -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:45.401 20:24:40 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:26:45.401 [2024-10-01 20:24:40.523139] Starting SPDK v25.01-pre git sha1 0c2005fb5 / DPDK 24.03.0 initialization... 00:26:45.401 [2024-10-01 20:24:40.523334] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69935 ] 00:26:45.659 [2024-10-01 20:24:40.668278] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:45.659 [2024-10-01 20:24:40.821895] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:26:46.591 20:24:41 blockdev_xnvme -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:46.591 20:24:41 blockdev_xnvme -- common/autotest_common.sh@864 -- # return 0 00:26:46.591 20:24:41 blockdev_xnvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:26:46.591 20:24:41 blockdev_xnvme -- bdev/blockdev.sh@728 -- # setup_xnvme_conf 00:26:46.591 20:24:41 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:26:46.591 20:24:41 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:26:46.591 20:24:41 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:26:46.591 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:46.848 Waiting for block devices as requested 00:26:46.848 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:26:46.849 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:26:46.849 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:26:47.106 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:26:52.440 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:26:52.440 20:24:47 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:26:52.440 20:24:47 blockdev_xnvme -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:26:52.440 20:24:47 blockdev_xnvme -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:26:52.440 20:24:47 blockdev_xnvme -- common/autotest_common.sh@1656 -- # local nvme bdf 00:26:52.440 20:24:47 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:26:52.440 20:24:47 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:26:52.440 20:24:47 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:26:52.440 20:24:47 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:26:52.440 20:24:47 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:26:52.440 20:24:47 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:26:52.440 20:24:47 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned 
nvme1n1 00:26:52.440 20:24:47 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:26:52.440 20:24:47 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:26:52.440 20:24:47 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:26:52.440 20:24:47 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:26:52.440 20:24:47 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n1 00:26:52.440 20:24:47 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:26:52.440 20:24:47 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:26:52.440 20:24:47 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:26:52.440 20:24:47 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:26:52.440 20:24:47 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n2 00:26:52.441 20:24:47 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme2n2 00:26:52.441 20:24:47 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:26:52.441 20:24:47 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:26:52.441 20:24:47 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:26:52.441 20:24:47 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n3 00:26:52.441 20:24:47 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme2n3 00:26:52.441 20:24:47 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:26:52.441 20:24:47 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:26:52.441 20:24:47 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:26:52.441 20:24:47 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3c3n1 00:26:52.441 20:24:47 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme3c3n1 00:26:52.441 20:24:47 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:26:52.441 20:24:47 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:26:52.441 20:24:47 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:26:52.441 20:24:47 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n1 00:26:52.441 20:24:47 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme3n1 00:26:52.441 20:24:47 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:26:52.441 20:24:47 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:26:52.441 20:24:47 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:26:52.441 20:24:47 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:26:52.441 20:24:47 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:26:52.441 20:24:47 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:26:52.441 20:24:47 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:26:52.441 20:24:47 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:26:52.441 20:24:47 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:26:52.441 20:24:47 blockdev_xnvme 
-- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:26:52.441 20:24:47 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:26:52.441 20:24:47 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:26:52.441 20:24:47 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:26:52.441 20:24:47 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:26:52.441 20:24:47 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:26:52.441 20:24:47 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n2 ]] 00:26:52.441 20:24:47 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:26:52.441 20:24:47 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:26:52.441 20:24:47 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:26:52.441 20:24:47 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n3 ]] 00:26:52.441 20:24:47 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:26:52.441 20:24:47 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:26:52.441 20:24:47 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:26:52.441 20:24:47 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:26:52.441 20:24:47 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:26:52.441 20:24:47 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:26:52.441 20:24:47 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:26:52.441 20:24:47 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:26:52.441 20:24:47 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:52.441 20:24:47 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:26:52.441 20:24:47 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring' 'bdev_xnvme_create /dev/nvme2n2 nvme2n2 io_uring' 'bdev_xnvme_create /dev/nvme2n3 nvme2n3 io_uring' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring' 00:26:52.441 nvme0n1 00:26:52.441 nvme1n1 00:26:52.441 nvme2n1 00:26:52.441 nvme2n2 00:26:52.441 nvme2n3 00:26:52.441 nvme3n1 00:26:52.441 20:24:47 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:52.441 20:24:47 blockdev_xnvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:26:52.441 20:24:47 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:52.441 20:24:47 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:26:52.441 20:24:47 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:52.441 20:24:47 blockdev_xnvme -- bdev/blockdev.sh@739 -- # cat 00:26:52.441 20:24:47 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:26:52.441 20:24:47 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:52.441 20:24:47 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:26:52.441 20:24:47 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:52.441 20:24:47 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:26:52.441 20:24:47 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:52.441 20:24:47 blockdev_xnvme -- 
common/autotest_common.sh@10 -- # set +x 00:26:52.441 20:24:47 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:52.441 20:24:47 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:26:52.441 20:24:47 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:52.441 20:24:47 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:26:52.441 20:24:47 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:52.441 20:24:47 blockdev_xnvme -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:26:52.441 20:24:47 blockdev_xnvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:26:52.441 20:24:47 blockdev_xnvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:26:52.441 20:24:47 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:52.441 20:24:47 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:26:52.441 20:24:47 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:52.441 20:24:47 blockdev_xnvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:26:52.441 20:24:47 blockdev_xnvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:26:52.441 20:24:47 blockdev_xnvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "697ac085-9388-4b74-82ee-39d2adf58fe7"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "697ac085-9388-4b74-82ee-39d2adf58fe7",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "b12ba895-8322-416c-94ed-bae309fbc18a"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "b12ba895-8322-416c-94ed-bae309fbc18a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "975ee150-4b2c-4e16-803a-d2ad1cb780f6"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "975ee150-4b2c-4e16-803a-d2ad1cb780f6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' 
"unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "6cd65425-c9cd-41b6-a8bb-eca2cfa85188"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "6cd65425-c9cd-41b6-a8bb-eca2cfa85188",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "0cbb6aa3-4f80-43a7-bdb2-8fcaadc21468"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "0cbb6aa3-4f80-43a7-bdb2-8fcaadc21468",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "3cfafcc8-71e8-4906-8ced-4b5d6881c41c"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "3cfafcc8-71e8-4906-8ced-4b5d6881c41c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:26:52.441 20:24:47 blockdev_xnvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:26:52.442 20:24:47 blockdev_xnvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=nvme0n1 00:26:52.442 20:24:47 blockdev_xnvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:26:52.442 20:24:47 blockdev_xnvme -- bdev/blockdev.sh@753 -- # killprocess 69935 00:26:52.442 20:24:47 
blockdev_xnvme -- common/autotest_common.sh@950 -- # '[' -z 69935 ']' 00:26:52.442 20:24:47 blockdev_xnvme -- common/autotest_common.sh@954 -- # kill -0 69935 00:26:52.442 20:24:47 blockdev_xnvme -- common/autotest_common.sh@955 -- # uname 00:26:52.442 20:24:47 blockdev_xnvme -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:52.442 20:24:47 blockdev_xnvme -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69935 00:26:52.442 killing process with pid 69935 00:26:52.442 20:24:47 blockdev_xnvme -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:52.442 20:24:47 blockdev_xnvme -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:52.442 20:24:47 blockdev_xnvme -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69935' 00:26:52.442 20:24:47 blockdev_xnvme -- common/autotest_common.sh@969 -- # kill 69935 00:26:52.442 20:24:47 blockdev_xnvme -- common/autotest_common.sh@974 -- # wait 69935 00:26:53.813 20:24:49 blockdev_xnvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:26:53.813 20:24:49 blockdev_xnvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:26:53.813 20:24:49 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:26:53.813 20:24:49 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:53.813 20:24:49 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:26:54.070 ************************************ 00:26:54.070 START TEST bdev_hello_world 00:26:54.070 ************************************ 00:26:54.070 20:24:49 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:26:54.070 [2024-10-01 20:24:49.086780] Starting SPDK v25.01-pre git sha1 0c2005fb5 / DPDK 24.03.0 initialization... 00:26:54.070 [2024-10-01 20:24:49.086907] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70294 ] 00:26:54.070 [2024-10-01 20:24:49.236549] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:54.328 [2024-10-01 20:24:49.392420] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:26:54.893 [2024-10-01 20:24:49.844151] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:26:54.893 [2024-10-01 20:24:49.844201] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:26:54.893 [2024-10-01 20:24:49.844216] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:26:54.893 [2024-10-01 20:24:49.845821] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:26:54.893 [2024-10-01 20:24:49.846008] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:26:54.893 [2024-10-01 20:24:49.846025] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:26:54.893 [2024-10-01 20:24:49.846117] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
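The hello_bdev example traced above opens bdev nvme0n1 from the generated config, writes a string through an I/O channel, reads it back, and prints it. Invoked directly:

# Copied from the trace, with repo-relative paths assumed.
./build/examples/hello_bdev --json ./test/bdev/bdev.json -b nvme0n1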
00:26:54.893 00:26:54.893 [2024-10-01 20:24:49.846131] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:26:55.853 00:26:55.853 real 0m1.829s 00:26:55.853 user 0m1.482s 00:26:55.853 sys 0m0.224s 00:26:55.853 ************************************ 00:26:55.853 END TEST bdev_hello_world 00:26:55.853 ************************************ 00:26:55.853 20:24:50 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:55.853 20:24:50 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:26:55.853 20:24:50 blockdev_xnvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:26:55.853 20:24:50 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:26:55.853 20:24:50 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:55.853 20:24:50 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:26:55.854 ************************************ 00:26:55.854 START TEST bdev_bounds 00:26:55.854 ************************************ 00:26:55.854 Process bdevio pid: 70336 00:26:55.854 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:55.854 20:24:50 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:26:55.854 20:24:50 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=70336 00:26:55.854 20:24:50 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:26:55.854 20:24:50 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 70336' 00:26:55.854 20:24:50 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 70336 00:26:55.854 20:24:50 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 70336 ']' 00:26:55.854 20:24:50 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:55.854 20:24:50 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:55.854 20:24:50 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:55.854 20:24:50 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:26:55.854 20:24:50 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:55.854 20:24:50 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:26:55.854 [2024-10-01 20:24:50.950672] Starting SPDK v25.01-pre git sha1 0c2005fb5 / DPDK 24.03.0 initialization... 
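bdev_bounds drives the bdevio CUnit suites over every bdev in the config. The -w flag makes bdevio start and wait, -s 0 is the PRE_RESERVED_MEM value from the trace, and tests.py then triggers the suites over bdevio's RPC socket, which is the perform_tests call visible below. Sketched as the two halves the harness wires together:

# Sketch; repo-relative paths assumed.
./test/bdev/bdevio/bdevio -w -s 0 --json ./test/bdev/bdev.json &
./test/bdev/bdevio/tests.py perform_tests
wait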
00:26:55.854 [2024-10-01 20:24:50.950806] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70336 ] 00:26:56.112 [2024-10-01 20:24:51.100818] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:56.112 [2024-10-01 20:24:51.258462] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:26:56.112 [2024-10-01 20:24:51.258806] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:26:56.112 [2024-10-01 20:24:51.258958] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:26:56.678 20:24:51 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:56.678 20:24:51 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:26:56.678 20:24:51 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:26:56.678 I/O targets: 00:26:56.678 nvme0n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:26:56.678 nvme1n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:26:56.678 nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:26:56.678 nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:26:56.678 nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:26:56.678 nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:26:56.678 00:26:56.678 00:26:56.678 CUnit - A unit testing framework for C - Version 2.1-3 00:26:56.678 http://cunit.sourceforge.net/ 00:26:56.678 00:26:56.678 00:26:56.678 Suite: bdevio tests on: nvme3n1 00:26:56.678 Test: blockdev write read block ...passed 00:26:56.678 Test: blockdev write zeroes read block ...passed 00:26:56.678 Test: blockdev write zeroes read no split ...passed 00:26:56.678 Test: blockdev write zeroes read split ...passed 00:26:56.936 Test: blockdev write zeroes read split partial ...passed 00:26:56.936 Test: blockdev reset ...passed 00:26:56.936 Test: blockdev write read 8 blocks ...passed 00:26:56.936 Test: blockdev write read size > 128k ...passed 00:26:56.936 Test: blockdev write read invalid size ...passed 00:26:56.936 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:26:56.936 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:26:56.936 Test: blockdev write read max offset ...passed 00:26:56.936 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:26:56.936 Test: blockdev writev readv 8 blocks ...passed 00:26:56.936 Test: blockdev writev readv 30 x 1block ...passed 00:26:56.936 Test: blockdev writev readv block ...passed 00:26:56.936 Test: blockdev writev readv size > 128k ...passed 00:26:56.936 Test: blockdev writev readv size > 128k in two iovs ...passed 00:26:56.936 Test: blockdev comparev and writev ...passed 00:26:56.936 Test: blockdev nvme passthru rw ...passed 00:26:56.936 Test: blockdev nvme passthru vendor specific ...passed 00:26:56.936 Test: blockdev nvme admin passthru ...passed 00:26:56.936 Test: blockdev copy ...passed 00:26:56.936 Suite: bdevio tests on: nvme2n3 00:26:56.936 Test: blockdev write read block ...passed 00:26:56.936 Test: blockdev write zeroes read block ...passed 00:26:56.936 Test: blockdev write zeroes read no split ...passed 00:26:56.936 Test: blockdev write zeroes read split ...passed 00:26:56.936 Test: blockdev write zeroes read split partial ...passed 00:26:56.936 Test: blockdev reset ...passed 
00:26:56.936 Test: blockdev write read 8 blocks ...passed 00:26:56.936 Test: blockdev write read size > 128k ...passed 00:26:56.936 Test: blockdev write read invalid size ...passed 00:26:56.936 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:26:56.936 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:26:56.936 Test: blockdev write read max offset ...passed 00:26:56.936 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:26:56.936 Test: blockdev writev readv 8 blocks ...passed 00:26:56.936 Test: blockdev writev readv 30 x 1block ...passed 00:26:56.936 Test: blockdev writev readv block ...passed 00:26:56.936 Test: blockdev writev readv size > 128k ...passed 00:26:56.936 Test: blockdev writev readv size > 128k in two iovs ...passed 00:26:56.936 Test: blockdev comparev and writev ...passed 00:26:56.936 Test: blockdev nvme passthru rw ...passed 00:26:56.936 Test: blockdev nvme passthru vendor specific ...passed 00:26:56.936 Test: blockdev nvme admin passthru ...passed 00:26:56.936 Test: blockdev copy ...passed 00:26:56.936 Suite: bdevio tests on: nvme2n2 00:26:56.936 Test: blockdev write read block ...passed 00:26:56.936 Test: blockdev write zeroes read block ...passed 00:26:56.936 Test: blockdev write zeroes read no split ...passed 00:26:56.936 Test: blockdev write zeroes read split ...passed 00:26:56.936 Test: blockdev write zeroes read split partial ...passed 00:26:56.936 Test: blockdev reset ...passed 00:26:56.936 Test: blockdev write read 8 blocks ...passed 00:26:56.936 Test: blockdev write read size > 128k ...passed 00:26:56.936 Test: blockdev write read invalid size ...passed 00:26:56.936 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:26:56.936 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:26:56.936 Test: blockdev write read max offset ...passed 00:26:56.936 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:26:56.936 Test: blockdev writev readv 8 blocks ...passed 00:26:56.936 Test: blockdev writev readv 30 x 1block ...passed 00:26:56.936 Test: blockdev writev readv block ...passed 00:26:56.936 Test: blockdev writev readv size > 128k ...passed 00:26:56.936 Test: blockdev writev readv size > 128k in two iovs ...passed 00:26:56.936 Test: blockdev comparev and writev ...passed 00:26:56.936 Test: blockdev nvme passthru rw ...passed 00:26:56.936 Test: blockdev nvme passthru vendor specific ...passed 00:26:56.936 Test: blockdev nvme admin passthru ...passed 00:26:56.936 Test: blockdev copy ...passed 00:26:56.936 Suite: bdevio tests on: nvme2n1 00:26:56.936 Test: blockdev write read block ...passed 00:26:56.936 Test: blockdev write zeroes read block ...passed 00:26:56.936 Test: blockdev write zeroes read no split ...passed 00:26:56.936 Test: blockdev write zeroes read split ...passed 00:26:56.936 Test: blockdev write zeroes read split partial ...passed 00:26:56.936 Test: blockdev reset ...passed 00:26:56.936 Test: blockdev write read 8 blocks ...passed 00:26:56.936 Test: blockdev write read size > 128k ...passed 00:26:56.936 Test: blockdev write read invalid size ...passed 00:26:56.936 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:26:56.936 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:26:56.936 Test: blockdev write read max offset ...passed 00:26:56.936 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:26:56.936 Test: blockdev writev readv 8 blocks 
...passed 00:26:56.936 Test: blockdev writev readv 30 x 1block ...passed 00:26:56.936 Test: blockdev writev readv block ...passed 00:26:56.936 Test: blockdev writev readv size > 128k ...passed 00:26:56.936 Test: blockdev writev readv size > 128k in two iovs ...passed 00:26:56.936 Test: blockdev comparev and writev ...passed 00:26:56.936 Test: blockdev nvme passthru rw ...passed 00:26:56.936 Test: blockdev nvme passthru vendor specific ...passed 00:26:56.936 Test: blockdev nvme admin passthru ...passed 00:26:56.936 Test: blockdev copy ...passed 00:26:56.936 Suite: bdevio tests on: nvme1n1 00:26:56.936 Test: blockdev write read block ...passed 00:26:56.936 Test: blockdev write zeroes read block ...passed 00:26:56.936 Test: blockdev write zeroes read no split ...passed 00:26:56.936 Test: blockdev write zeroes read split ...passed 00:26:56.936 Test: blockdev write zeroes read split partial ...passed 00:26:56.936 Test: blockdev reset ...passed 00:26:56.936 Test: blockdev write read 8 blocks ...passed 00:26:56.936 Test: blockdev write read size > 128k ...passed 00:26:56.936 Test: blockdev write read invalid size ...passed 00:26:56.936 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:26:56.936 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:26:56.936 Test: blockdev write read max offset ...passed 00:26:56.936 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:26:56.936 Test: blockdev writev readv 8 blocks ...passed 00:26:56.936 Test: blockdev writev readv 30 x 1block ...passed 00:26:56.936 Test: blockdev writev readv block ...passed 00:26:56.936 Test: blockdev writev readv size > 128k ...passed 00:26:56.936 Test: blockdev writev readv size > 128k in two iovs ...passed 00:26:56.937 Test: blockdev comparev and writev ...passed 00:26:56.937 Test: blockdev nvme passthru rw ...passed 00:26:56.937 Test: blockdev nvme passthru vendor specific ...passed 00:26:56.937 Test: blockdev nvme admin passthru ...passed 00:26:56.937 Test: blockdev copy ...passed 00:26:56.937 Suite: bdevio tests on: nvme0n1 00:26:56.937 Test: blockdev write read block ...passed 00:26:56.937 Test: blockdev write zeroes read block ...passed 00:26:56.937 Test: blockdev write zeroes read no split ...passed 00:26:57.196 Test: blockdev write zeroes read split ...passed 00:26:57.196 Test: blockdev write zeroes read split partial ...passed 00:26:57.196 Test: blockdev reset ...passed 00:26:57.196 Test: blockdev write read 8 blocks ...passed 00:26:57.196 Test: blockdev write read size > 128k ...passed 00:26:57.196 Test: blockdev write read invalid size ...passed 00:26:57.196 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:26:57.196 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:26:57.196 Test: blockdev write read max offset ...passed 00:26:57.196 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:26:57.196 Test: blockdev writev readv 8 blocks ...passed 00:26:57.196 Test: blockdev writev readv 30 x 1block ...passed 00:26:57.196 Test: blockdev writev readv block ...passed 00:26:57.196 Test: blockdev writev readv size > 128k ...passed 00:26:57.196 Test: blockdev writev readv size > 128k in two iovs ...passed 00:26:57.196 Test: blockdev comparev and writev ...passed 00:26:57.196 Test: blockdev nvme passthru rw ...passed 00:26:57.196 Test: blockdev nvme passthru vendor specific ...passed 00:26:57.196 Test: blockdev nvme admin passthru ...passed 00:26:57.196 Test: blockdev copy ...passed 
00:26:57.196 00:26:57.196 Run Summary: Type Total Ran Passed Failed Inactive 00:26:57.196 suites 6 6 n/a 0 0 00:26:57.196 tests 138 138 138 0 0 00:26:57.196 asserts 780 780 780 0 n/a 00:26:57.196 00:26:57.196 Elapsed time = 0.912 seconds 00:26:57.196 0 00:26:57.196 20:24:52 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 70336 00:26:57.196 20:24:52 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 70336 ']' 00:26:57.196 20:24:52 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 70336 00:26:57.196 20:24:52 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:26:57.196 20:24:52 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:57.196 20:24:52 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70336 00:26:57.196 20:24:52 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:57.196 20:24:52 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:57.196 20:24:52 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70336' 00:26:57.196 killing process with pid 70336 00:26:57.196 20:24:52 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@969 -- # kill 70336 00:26:57.196 20:24:52 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@974 -- # wait 70336 00:26:58.130 20:24:53 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:26:58.130 00:26:58.130 real 0m2.357s 00:26:58.130 user 0m5.738s 00:26:58.130 sys 0m0.348s 00:26:58.130 20:24:53 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:58.130 20:24:53 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:26:58.130 ************************************ 00:26:58.130 END TEST bdev_bounds 00:26:58.130 ************************************ 00:26:58.130 20:24:53 blockdev_xnvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '' 00:26:58.130 20:24:53 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:26:58.130 20:24:53 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:58.130 20:24:53 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:26:58.130 ************************************ 00:26:58.130 START TEST bdev_nbd 00:26:58.130 ************************************ 00:26:58.130 20:24:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '' 00:26:58.130 20:24:53 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:26:58.130 20:24:53 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:26:58.130 20:24:53 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:58.130 20:24:53 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:26:58.130 20:24:53 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:26:58.130 20:24:53 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:26:58.130 20:24:53 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 
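The killprocess teardown traced above follows a fixed pattern: confirm the pid is still alive with kill -0, resolve the process name via ps so a sudo wrapper is not signalled by mistake, then kill and wait. A condensed sketch of that pattern as it appears in the trace (not the exact autotest_common.sh helper):

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1
        kill -0 "$pid" 2>/dev/null || return 0       # nothing to do if it already exited
        local name
        name=$(ps --no-headers -o comm= "$pid")      # e.g. reactor_0 in the trace above
        [ "$name" = sudo ] && return 1               # sketch: refuse to signal a sudo wrapper directly
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                  # reap it (works when $pid is a child of this shell)
    }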
00:26:58.130 20:24:53 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:26:58.130 20:24:53 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:26:58.130 20:24:53 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:26:58.130 20:24:53 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:26:58.130 20:24:53 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:26:58.130 20:24:53 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:26:58.130 20:24:53 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:26:58.130 20:24:53 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:26:58.130 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:26:58.130 20:24:53 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=70390 00:26:58.130 20:24:53 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:26:58.130 20:24:53 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 70390 /var/tmp/spdk-nbd.sock 00:26:58.130 20:24:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 70390 ']' 00:26:58.130 20:24:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:26:58.130 20:24:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:58.130 20:24:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:26:58.130 20:24:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:58.130 20:24:53 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:26:58.130 20:24:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:26:58.390 [2024-10-01 20:24:53.357269] Starting SPDK v25.01-pre git sha1 0c2005fb5 / DPDK 24.03.0 initialization... 
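Once bdev_svc is listening on /var/tmp/spdk-nbd.sock, every nbd_start_disk RPC in the traces that follow is paired with a waitfornbd readiness check: poll until the device name shows up in /proc/partitions, then prove it is actually readable with a single direct-I/O dd. A stripped-down sketch of that check (retry budget and sleep interval are assumptions; the real helper copies the block to a temp file and stats it):

    waitfornbd() {
        local nbd_name=$1 i
        for (( i = 1; i <= 20; i++ )); do            # matches the (( i <= 20 )) loops traced below
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1
        done
        (( i <= 20 )) || return 1
        # probe one block with O_DIRECT; failure means the NBD backend is not wired up yet
        dd if="/dev/$nbd_name" of=/dev/null bs=4096 count=1 iflag=direct
    }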
00:26:58.390 [2024-10-01 20:24:53.357566] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:58.390 [2024-10-01 20:24:53.507170] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:58.648 [2024-10-01 20:24:53.664192] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:26:59.213 20:24:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:59.213 20:24:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:26:59.213 20:24:54 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:26:59.213 20:24:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:59.213 20:24:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:26:59.213 20:24:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:26:59.213 20:24:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:26:59.213 20:24:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:59.213 20:24:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:26:59.213 20:24:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:26:59.213 20:24:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:26:59.213 20:24:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:26:59.213 20:24:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:26:59.213 20:24:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:26:59.213 20:24:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:26:59.213 20:24:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:26:59.213 20:24:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:26:59.213 20:24:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:26:59.213 20:24:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:26:59.213 20:24:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:26:59.213 20:24:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:26:59.213 20:24:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:26:59.213 20:24:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:26:59.213 20:24:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:26:59.213 20:24:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:26:59.213 20:24:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:26:59.213 20:24:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:59.213 
1+0 records in 00:26:59.213 1+0 records out 00:26:59.213 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000784857 s, 5.2 MB/s 00:26:59.213 20:24:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:59.213 20:24:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:26:59.213 20:24:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:59.213 20:24:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:26:59.213 20:24:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:26:59.213 20:24:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:26:59.213 20:24:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:26:59.213 20:24:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:26:59.471 20:24:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:26:59.471 20:24:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:26:59.471 20:24:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:26:59.471 20:24:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:26:59.471 20:24:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:26:59.471 20:24:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:26:59.471 20:24:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:26:59.471 20:24:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:26:59.471 20:24:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:26:59.471 20:24:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:26:59.471 20:24:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:26:59.471 20:24:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:59.471 1+0 records in 00:26:59.471 1+0 records out 00:26:59.471 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000341327 s, 12.0 MB/s 00:26:59.471 20:24:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:59.471 20:24:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:26:59.471 20:24:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:59.471 20:24:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:26:59.471 20:24:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:26:59.471 20:24:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:26:59.471 20:24:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:26:59.471 20:24:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:26:59.730 20:24:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:26:59.730 20:24:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:26:59.730 20:24:54 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:26:59.730 20:24:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd2 00:26:59.730 20:24:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:26:59.730 20:24:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:26:59.730 20:24:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:26:59.730 20:24:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd2 /proc/partitions 00:26:59.730 20:24:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:26:59.730 20:24:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:26:59.730 20:24:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:26:59.730 20:24:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:59.730 1+0 records in 00:26:59.730 1+0 records out 00:26:59.730 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00041486 s, 9.9 MB/s 00:26:59.730 20:24:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:59.730 20:24:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:26:59.730 20:24:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:59.730 20:24:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:26:59.730 20:24:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:26:59.730 20:24:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:26:59.730 20:24:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:26:59.730 20:24:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 00:26:59.988 20:24:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:26:59.988 20:24:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:26:59.988 20:24:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:26:59.988 20:24:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd3 00:26:59.988 20:24:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:26:59.988 20:24:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:26:59.988 20:24:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:26:59.988 20:24:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd3 /proc/partitions 00:26:59.988 20:24:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:26:59.988 20:24:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:26:59.988 20:24:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:26:59.988 20:24:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:59.988 1+0 records in 00:26:59.988 1+0 records out 00:26:59.988 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000312762 s, 13.1 MB/s 00:26:59.988 20:24:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:59.988 20:24:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:26:59.988 20:24:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:59.988 20:24:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:26:59.988 20:24:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:26:59.988 20:24:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:26:59.988 20:24:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:26:59.988 20:24:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 00:27:00.246 20:24:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:27:00.246 20:24:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:27:00.246 20:24:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:27:00.246 20:24:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd4 00:27:00.246 20:24:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:27:00.246 20:24:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:27:00.246 20:24:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:27:00.246 20:24:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd4 /proc/partitions 00:27:00.246 20:24:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:27:00.246 20:24:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:27:00.246 20:24:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:27:00.246 20:24:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:00.246 1+0 records in 00:27:00.246 1+0 records out 00:27:00.246 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000661014 s, 6.2 MB/s 00:27:00.246 20:24:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:00.246 20:24:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:27:00.246 20:24:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:00.246 20:24:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:27:00.246 20:24:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:27:00.246 20:24:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:27:00.247 20:24:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:27:00.247 20:24:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:27:00.504 20:24:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:27:00.504 20:24:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:27:00.504 20:24:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:27:00.504 20:24:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd5 00:27:00.504 20:24:55 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:27:00.504 20:24:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:27:00.504 20:24:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:27:00.504 20:24:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd5 /proc/partitions 00:27:00.504 20:24:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:27:00.504 20:24:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:27:00.504 20:24:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:27:00.504 20:24:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:00.504 1+0 records in 00:27:00.504 1+0 records out 00:27:00.504 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000474247 s, 8.6 MB/s 00:27:00.504 20:24:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:00.504 20:24:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:27:00.504 20:24:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:00.504 20:24:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:27:00.504 20:24:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:27:00.504 20:24:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:27:00.504 20:24:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:27:00.504 20:24:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:27:00.761 20:24:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:27:00.761 { 00:27:00.761 "nbd_device": "/dev/nbd0", 00:27:00.761 "bdev_name": "nvme0n1" 00:27:00.761 }, 00:27:00.761 { 00:27:00.761 "nbd_device": "/dev/nbd1", 00:27:00.761 "bdev_name": "nvme1n1" 00:27:00.761 }, 00:27:00.761 { 00:27:00.761 "nbd_device": "/dev/nbd2", 00:27:00.761 "bdev_name": "nvme2n1" 00:27:00.761 }, 00:27:00.761 { 00:27:00.761 "nbd_device": "/dev/nbd3", 00:27:00.761 "bdev_name": "nvme2n2" 00:27:00.761 }, 00:27:00.761 { 00:27:00.761 "nbd_device": "/dev/nbd4", 00:27:00.761 "bdev_name": "nvme2n3" 00:27:00.761 }, 00:27:00.761 { 00:27:00.761 "nbd_device": "/dev/nbd5", 00:27:00.761 "bdev_name": "nvme3n1" 00:27:00.761 } 00:27:00.761 ]' 00:27:00.761 20:24:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:27:00.761 20:24:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:27:00.761 20:24:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:27:00.761 { 00:27:00.761 "nbd_device": "/dev/nbd0", 00:27:00.761 "bdev_name": "nvme0n1" 00:27:00.761 }, 00:27:00.761 { 00:27:00.761 "nbd_device": "/dev/nbd1", 00:27:00.761 "bdev_name": "nvme1n1" 00:27:00.761 }, 00:27:00.761 { 00:27:00.761 "nbd_device": "/dev/nbd2", 00:27:00.761 "bdev_name": "nvme2n1" 00:27:00.761 }, 00:27:00.761 { 00:27:00.761 "nbd_device": "/dev/nbd3", 00:27:00.761 "bdev_name": "nvme2n2" 00:27:00.761 }, 00:27:00.761 { 00:27:00.761 "nbd_device": "/dev/nbd4", 00:27:00.761 "bdev_name": "nvme2n3" 00:27:00.761 }, 00:27:00.761 { 00:27:00.761 "nbd_device": 
"/dev/nbd5", 00:27:00.762 "bdev_name": "nvme3n1" 00:27:00.762 } 00:27:00.762 ]' 00:27:00.762 20:24:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:27:00.762 20:24:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:00.762 20:24:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:27:00.762 20:24:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:27:00.762 20:24:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:27:00.762 20:24:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:00.762 20:24:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:27:01.020 20:24:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:27:01.020 20:24:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:27:01.020 20:24:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:27:01.020 20:24:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:01.020 20:24:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:01.020 20:24:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:27:01.020 20:24:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:27:01.020 20:24:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:27:01.020 20:24:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:01.020 20:24:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:27:01.278 20:24:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:27:01.278 20:24:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:27:01.278 20:24:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:27:01.278 20:24:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:01.278 20:24:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:01.278 20:24:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:27:01.278 20:24:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:27:01.278 20:24:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:27:01.278 20:24:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:01.278 20:24:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:27:01.536 20:24:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:27:01.536 20:24:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:27:01.536 20:24:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:27:01.536 20:24:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:01.536 20:24:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:01.536 20:24:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd2 /proc/partitions 00:27:01.536 20:24:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:27:01.536 20:24:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:27:01.536 20:24:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:01.536 20:24:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:27:01.794 20:24:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:27:01.794 20:24:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:27:01.794 20:24:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:27:01.794 20:24:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:01.794 20:24:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:01.794 20:24:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:27:01.794 20:24:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:27:01.794 20:24:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:27:01.794 20:24:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:01.794 20:24:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:27:01.794 20:24:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:27:01.794 20:24:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:27:01.794 20:24:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:27:01.795 20:24:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:01.795 20:24:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:01.795 20:24:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:27:02.052 20:24:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:27:02.052 20:24:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:27:02.052 20:24:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:02.052 20:24:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:27:02.052 20:24:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:27:02.052 20:24:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:27:02.052 20:24:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:27:02.052 20:24:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:02.052 20:24:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:02.052 20:24:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:27:02.052 20:24:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:27:02.052 20:24:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:27:02.052 20:24:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:27:02.052 20:24:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:02.053 20:24:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:27:02.311 20:24:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:27:02.311 20:24:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:27:02.311 20:24:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:27:02.311 20:24:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:27:02.311 20:24:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:27:02.311 20:24:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:27:02.311 20:24:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:27:02.311 20:24:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:27:02.311 20:24:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:27:02.311 20:24:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:27:02.311 20:24:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:27:02.311 20:24:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:27:02.311 20:24:57 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:27:02.311 20:24:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:02.311 20:24:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:27:02.311 20:24:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:27:02.311 20:24:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:27:02.311 20:24:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:27:02.311 20:24:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:27:02.311 20:24:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:02.311 20:24:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:27:02.311 20:24:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:27:02.311 20:24:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:27:02.311 20:24:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:27:02.311 20:24:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:27:02.311 20:24:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:27:02.311 20:24:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:27:02.311 20:24:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:27:02.568 /dev/nbd0 00:27:02.568 20:24:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:27:02.568 20:24:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:27:02.568 20:24:57 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:27:02.568 20:24:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:27:02.568 20:24:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:27:02.568 20:24:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:27:02.569 20:24:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:27:02.569 20:24:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:27:02.569 20:24:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:27:02.569 20:24:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:27:02.569 20:24:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:02.569 1+0 records in 00:27:02.569 1+0 records out 00:27:02.569 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000496292 s, 8.3 MB/s 00:27:02.569 20:24:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:02.569 20:24:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:27:02.569 20:24:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:02.569 20:24:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:27:02.569 20:24:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:27:02.569 20:24:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:02.569 20:24:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:27:02.569 20:24:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd1 00:27:02.826 /dev/nbd1 00:27:02.826 20:24:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:27:02.826 20:24:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:27:02.826 20:24:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:27:02.826 20:24:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:27:02.826 20:24:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:27:02.826 20:24:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:27:02.826 20:24:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:27:02.826 20:24:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:27:02.826 20:24:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:27:02.826 20:24:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:27:02.826 20:24:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:02.826 1+0 records in 00:27:02.826 1+0 records out 00:27:02.826 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000532801 s, 7.7 MB/s 00:27:02.826 20:24:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:02.826 20:24:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:27:02.826 20:24:57 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:02.826 20:24:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:27:02.826 20:24:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:27:02.826 20:24:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:02.826 20:24:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:27:02.826 20:24:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd10 00:27:03.084 /dev/nbd10 00:27:03.084 20:24:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:27:03.084 20:24:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:27:03.084 20:24:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd10 00:27:03.084 20:24:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:27:03.084 20:24:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:27:03.084 20:24:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:27:03.084 20:24:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd10 /proc/partitions 00:27:03.084 20:24:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:27:03.084 20:24:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:27:03.084 20:24:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:27:03.084 20:24:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:03.084 1+0 records in 00:27:03.084 1+0 records out 00:27:03.084 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000336247 s, 12.2 MB/s 00:27:03.084 20:24:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:03.084 20:24:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:27:03.084 20:24:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:03.084 20:24:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:27:03.084 20:24:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:27:03.084 20:24:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:03.084 20:24:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:27:03.084 20:24:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 /dev/nbd11 00:27:03.441 /dev/nbd11 00:27:03.441 20:24:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:27:03.441 20:24:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:27:03.441 20:24:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd11 00:27:03.441 20:24:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:27:03.441 20:24:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:27:03.441 20:24:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:27:03.441 20:24:58 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd11 /proc/partitions 00:27:03.441 20:24:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:27:03.441 20:24:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:27:03.441 20:24:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:27:03.441 20:24:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:03.441 1+0 records in 00:27:03.441 1+0 records out 00:27:03.441 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000326044 s, 12.6 MB/s 00:27:03.441 20:24:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:03.441 20:24:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:27:03.441 20:24:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:03.441 20:24:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:27:03.441 20:24:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:27:03.441 20:24:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:03.441 20:24:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:27:03.441 20:24:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 /dev/nbd12 00:27:03.701 /dev/nbd12 00:27:03.701 20:24:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:27:03.701 20:24:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:27:03.701 20:24:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd12 00:27:03.701 20:24:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:27:03.701 20:24:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:27:03.701 20:24:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:27:03.701 20:24:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd12 /proc/partitions 00:27:03.701 20:24:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:27:03.701 20:24:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:27:03.701 20:24:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:27:03.701 20:24:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:03.701 1+0 records in 00:27:03.701 1+0 records out 00:27:03.701 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000352556 s, 11.6 MB/s 00:27:03.701 20:24:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:03.701 20:24:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:27:03.701 20:24:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:03.701 20:24:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:27:03.701 20:24:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:27:03.701 20:24:58 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:03.701 20:24:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:27:03.701 20:24:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:27:03.701 /dev/nbd13 00:27:03.961 20:24:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:27:03.961 20:24:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:27:03.961 20:24:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd13 00:27:03.961 20:24:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:27:03.961 20:24:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:27:03.961 20:24:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:27:03.961 20:24:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd13 /proc/partitions 00:27:03.961 20:24:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:27:03.961 20:24:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:27:03.961 20:24:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:27:03.961 20:24:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:03.961 1+0 records in 00:27:03.961 1+0 records out 00:27:03.961 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000436188 s, 9.4 MB/s 00:27:03.961 20:24:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:03.961 20:24:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:27:03.961 20:24:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:03.961 20:24:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:27:03.961 20:24:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:27:03.961 20:24:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:03.961 20:24:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:27:03.961 20:24:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:27:03.961 20:24:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:03.961 20:24:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:27:03.961 20:24:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:27:03.961 { 00:27:03.961 "nbd_device": "/dev/nbd0", 00:27:03.961 "bdev_name": "nvme0n1" 00:27:03.961 }, 00:27:03.961 { 00:27:03.961 "nbd_device": "/dev/nbd1", 00:27:03.961 "bdev_name": "nvme1n1" 00:27:03.961 }, 00:27:03.961 { 00:27:03.961 "nbd_device": "/dev/nbd10", 00:27:03.961 "bdev_name": "nvme2n1" 00:27:03.961 }, 00:27:03.961 { 00:27:03.961 "nbd_device": "/dev/nbd11", 00:27:03.961 "bdev_name": "nvme2n2" 00:27:03.961 }, 00:27:03.961 { 00:27:03.961 "nbd_device": "/dev/nbd12", 00:27:03.961 "bdev_name": "nvme2n3" 00:27:03.961 }, 00:27:03.961 { 00:27:03.961 "nbd_device": "/dev/nbd13", 00:27:03.961 "bdev_name": "nvme3n1" 00:27:03.961 } 00:27:03.961 ]' 00:27:03.961 20:24:59 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:27:03.961 { 00:27:03.961 "nbd_device": "/dev/nbd0", 00:27:03.961 "bdev_name": "nvme0n1" 00:27:03.961 }, 00:27:03.961 { 00:27:03.961 "nbd_device": "/dev/nbd1", 00:27:03.961 "bdev_name": "nvme1n1" 00:27:03.961 }, 00:27:03.961 { 00:27:03.961 "nbd_device": "/dev/nbd10", 00:27:03.961 "bdev_name": "nvme2n1" 00:27:03.961 }, 00:27:03.961 { 00:27:03.961 "nbd_device": "/dev/nbd11", 00:27:03.961 "bdev_name": "nvme2n2" 00:27:03.961 }, 00:27:03.961 { 00:27:03.961 "nbd_device": "/dev/nbd12", 00:27:03.961 "bdev_name": "nvme2n3" 00:27:03.961 }, 00:27:03.961 { 00:27:03.961 "nbd_device": "/dev/nbd13", 00:27:03.961 "bdev_name": "nvme3n1" 00:27:03.961 } 00:27:03.961 ]' 00:27:03.961 20:24:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:27:04.222 20:24:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:27:04.222 /dev/nbd1 00:27:04.222 /dev/nbd10 00:27:04.222 /dev/nbd11 00:27:04.222 /dev/nbd12 00:27:04.222 /dev/nbd13' 00:27:04.222 20:24:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:27:04.222 /dev/nbd1 00:27:04.222 /dev/nbd10 00:27:04.222 /dev/nbd11 00:27:04.222 /dev/nbd12 00:27:04.222 /dev/nbd13' 00:27:04.222 20:24:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:27:04.222 20:24:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:27:04.222 20:24:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:27:04.222 20:24:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:27:04.222 20:24:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:27:04.222 20:24:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:27:04.222 20:24:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:27:04.222 20:24:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:27:04.222 20:24:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:27:04.222 20:24:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:27:04.222 20:24:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:27:04.222 20:24:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:27:04.222 256+0 records in 00:27:04.222 256+0 records out 00:27:04.222 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00991361 s, 106 MB/s 00:27:04.222 20:24:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:27:04.222 20:24:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:27:04.222 256+0 records in 00:27:04.222 256+0 records out 00:27:04.222 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.049871 s, 21.0 MB/s 00:27:04.222 20:24:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:27:04.222 20:24:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:27:04.222 256+0 records in 00:27:04.222 256+0 records out 00:27:04.222 1048576 bytes (1.0 MB, 1.0 
MiB) copied, 0.0629994 s, 16.6 MB/s 00:27:04.222 20:24:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:27:04.222 20:24:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:27:04.222 256+0 records in 00:27:04.222 256+0 records out 00:27:04.222 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0718903 s, 14.6 MB/s 00:27:04.222 20:24:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:27:04.222 20:24:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:27:04.481 256+0 records in 00:27:04.481 256+0 records out 00:27:04.481 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0849154 s, 12.3 MB/s 00:27:04.481 20:24:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:27:04.481 20:24:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:27:04.481 256+0 records in 00:27:04.481 256+0 records out 00:27:04.481 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0707861 s, 14.8 MB/s 00:27:04.481 20:24:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:27:04.481 20:24:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:27:04.481 256+0 records in 00:27:04.481 256+0 records out 00:27:04.481 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0600446 s, 17.5 MB/s 00:27:04.481 20:24:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:27:04.481 20:24:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:27:04.481 20:24:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:27:04.481 20:24:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:27:04.481 20:24:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:27:04.481 20:24:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:27:04.481 20:24:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:27:04.481 20:24:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:27:04.481 20:24:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:27:04.481 20:24:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:27:04.481 20:24:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:27:04.481 20:24:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:27:04.481 20:24:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:27:04.481 20:24:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:27:04.481 20:24:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:27:04.481 20:24:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:27:04.481 20:24:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:27:04.481 20:24:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:27:04.481 20:24:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:27:04.481 20:24:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:27:04.481 20:24:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:27:04.481 20:24:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:04.481 20:24:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:27:04.481 20:24:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:27:04.481 20:24:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:27:04.481 20:24:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:04.481 20:24:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:27:05.046 20:24:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:27:05.046 20:24:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:27:05.046 20:24:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:27:05.046 20:24:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:05.046 20:24:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:05.046 20:24:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:27:05.046 20:24:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:27:05.046 20:24:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:27:05.046 20:24:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:05.046 20:24:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:27:05.046 20:25:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:27:05.046 20:25:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:27:05.046 20:25:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:27:05.046 20:25:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:05.046 20:25:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:05.046 20:25:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:27:05.046 20:25:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:27:05.046 20:25:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:27:05.046 20:25:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:05.046 20:25:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:27:05.303 20:25:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:27:05.303 20:25:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:27:05.303 20:25:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:27:05.303 20:25:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:05.303 20:25:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:05.303 20:25:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:27:05.303 20:25:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:27:05.303 20:25:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:27:05.303 20:25:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:05.303 20:25:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:27:05.561 20:25:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:27:05.561 20:25:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:27:05.561 20:25:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:27:05.561 20:25:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:05.561 20:25:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:05.561 20:25:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:27:05.561 20:25:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:27:05.561 20:25:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:27:05.561 20:25:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:05.561 20:25:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:27:05.561 20:25:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:27:05.561 20:25:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:27:05.561 20:25:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:27:05.561 20:25:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:05.561 20:25:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:05.561 20:25:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:27:05.561 20:25:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:27:05.561 20:25:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:27:05.561 20:25:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:05.561 20:25:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:27:05.818 20:25:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:27:05.818 20:25:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:27:05.818 20:25:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:27:05.818 20:25:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:05.818 20:25:00 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:05.818 20:25:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:27:05.818 20:25:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:27:05.818 20:25:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:27:05.818 20:25:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:27:05.818 20:25:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:05.818 20:25:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:27:06.076 20:25:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:27:06.076 20:25:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:27:06.076 20:25:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:27:06.076 20:25:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:27:06.076 20:25:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:27:06.076 20:25:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:27:06.076 20:25:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:27:06.076 20:25:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:27:06.076 20:25:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:27:06.076 20:25:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:27:06.076 20:25:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:27:06.076 20:25:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:27:06.076 20:25:01 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:27:06.076 20:25:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:06.076 20:25:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:27:06.076 20:25:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:27:06.334 malloc_lvol_verify 00:27:06.334 20:25:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:27:06.592 8bf32903-9fb3-4b33-a19a-fcff261c85aa 00:27:06.592 20:25:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:27:06.592 6d23b62f-5e8e-4a77-9457-41c01cce8e2c 00:27:06.850 20:25:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:27:06.850 /dev/nbd0 00:27:06.850 20:25:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:27:06.850 20:25:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:27:06.850 20:25:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:27:06.850 20:25:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:27:06.850 20:25:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 
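The six nbd_stop_disk teardowns traced above all funnel through the same polling idiom: after the RPC returns, the test greps /proc/partitions until the kernel has actually removed the device, bounded at 20 attempts. A condensed sketch of that loop (the retry delay is an assumption; the trace only shows the grep and the counter):

waitfornbd_exit() {
    local nbd_name=$1 i
    for ((i = 1; i <= 20; i++)); do
        # device gone from the partition table means the stop completed
        grep -q -w "$nbd_name" /proc/partitions || return 0
        sleep 0.1    # assumed back-off between probes
    done
    return 1
}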
00:27:06.850 mke2fs 1.47.0 (5-Feb-2023) 00:27:06.850 Discarding device blocks: 0/4096 done 00:27:06.850 Creating filesystem with 4096 1k blocks and 1024 inodes 00:27:06.850 00:27:06.850 Allocating group tables: 0/1 done 00:27:06.850 Writing inode tables: 0/1 done 00:27:06.850 Creating journal (1024 blocks): done 00:27:06.850 Writing superblocks and filesystem accounting information: 0/1 done 00:27:06.850 00:27:06.850 20:25:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:27:06.850 20:25:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:06.850 20:25:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:27:06.850 20:25:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:27:06.850 20:25:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:27:06.850 20:25:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:06.850 20:25:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:27:07.108 20:25:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:27:07.108 20:25:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:27:07.108 20:25:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:27:07.108 20:25:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:07.108 20:25:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:07.108 20:25:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:27:07.108 20:25:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:27:07.108 20:25:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:27:07.108 20:25:02 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 70390 00:27:07.108 20:25:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 70390 ']' 00:27:07.108 20:25:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 70390 00:27:07.108 20:25:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:27:07.108 20:25:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:07.108 20:25:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70390 00:27:07.108 20:25:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:07.108 20:25:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:07.108 killing process with pid 70390 00:27:07.108 20:25:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70390' 00:27:07.108 20:25:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@969 -- # kill 70390 00:27:07.108 20:25:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@974 -- # wait 70390 00:27:08.481 20:25:03 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:27:08.481 00:27:08.481 real 0m10.175s 00:27:08.481 user 0m14.231s 00:27:08.481 sys 0m3.282s 00:27:08.481 20:25:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:08.481 20:25:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:27:08.481 ************************************ 
00:27:08.481 END TEST bdev_nbd 00:27:08.481 ************************************ 00:27:08.481 20:25:03 blockdev_xnvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:27:08.481 20:25:03 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = nvme ']' 00:27:08.481 20:25:03 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = gpt ']' 00:27:08.481 20:25:03 blockdev_xnvme -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:27:08.481 20:25:03 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:27:08.481 20:25:03 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:08.481 20:25:03 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:27:08.481 ************************************ 00:27:08.481 START TEST bdev_fio 00:27:08.481 ************************************ 00:27:08.481 20:25:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1125 -- # fio_test_suite '' 00:27:08.481 20:25:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:27:08.481 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:27:08.481 20:25:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:27:08.481 20:25:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:27:08.481 20:25:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:27:08.481 20:25:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:27:08.481 20:25:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:27:08.481 20:25:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:27:08.481 20:25:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:27:08.481 20:25:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=verify 00:27:08.481 20:25:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type=AIO 00:27:08.481 20:25:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:27:08.481 20:25:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:27:08.481 20:25:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:27:08.481 20:25:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z verify ']' 00:27:08.481 20:25:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:27:08.481 20:25:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:27:08.481 20:25:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:27:08.481 20:25:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1313 -- # '[' verify == verify ']' 00:27:08.481 20:25:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1314 -- # cat 00:27:08.481 20:25:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1323 -- # '[' AIO == AIO ']' 00:27:08.481 20:25:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1324 -- # /usr/src/fio/fio --version 00:27:08.481 20:25:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1324 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:27:08.481 20:25:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1325 -- # 
echo serialize_overlap=1 00:27:08.481 20:25:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:27:08.481 20:25:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]' 00:27:08.481 20:25:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1 00:27:08.481 20:25:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:27:08.481 20:25:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]' 00:27:08.481 20:25:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1 00:27:08.481 20:25:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:27:08.481 20:25:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]' 00:27:08.481 20:25:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1 00:27:08.481 20:25:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:27:08.481 20:25:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n2]' 00:27:08.482 20:25:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n2 00:27:08.482 20:25:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:27:08.482 20:25:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n3]' 00:27:08.482 20:25:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n3 00:27:08.482 20:25:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:27:08.482 20:25:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]' 00:27:08.482 20:25:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1 00:27:08.482 20:25:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:27:08.482 20:25:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:27:08.482 20:25:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:27:08.482 20:25:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:08.482 20:25:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:27:08.482 ************************************ 00:27:08.482 START TEST bdev_fio_rw_verify 00:27:08.482 ************************************ 00:27:08.482 20:25:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1125 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:27:08.482 20:25:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 
--spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:27:08.482 20:25:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:27:08.482 20:25:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:08.482 20:25:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local sanitizers 00:27:08.482 20:25:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:08.482 20:25:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # shift 00:27:08.482 20:25:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local asan_lib= 00:27:08.482 20:25:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:27:08.482 20:25:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libasan 00:27:08.482 20:25:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:08.482 20:25:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:27:08.482 20:25:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:27:08.482 20:25:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:27:08.482 20:25:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # break 00:27:08.482 20:25:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:27:08.482 20:25:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:27:08.740 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:27:08.740 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:27:08.740 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:27:08.740 job_nvme2n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:27:08.740 job_nvme2n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:27:08.740 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:27:08.740 fio-3.35 00:27:08.740 Starting 6 threads 00:27:21.014 00:27:21.014 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=70795: Tue Oct 1 20:25:14 2024 00:27:21.014 read: IOPS=42.8k, BW=167MiB/s (175MB/s)(1672MiB/10001msec) 00:27:21.014 slat (usec): min=2, max=614, avg= 4.68, stdev= 3.06 00:27:21.014 clat (usec): min=56, max=406199, avg=388.73, 
stdev=922.71 00:27:21.014 lat (usec): min=60, max=406203, avg=393.41, stdev=922.80 00:27:21.014 clat percentiles (usec): 00:27:21.014 | 50.000th=[ 359], 99.000th=[ 979], 99.900th=[ 1500], 99.990th=[ 3654], 00:27:21.014 | 99.999th=[57934] 00:27:21.014 write: IOPS=43.1k, BW=169MiB/s (177MB/s)(1685MiB/10001msec); 0 zone resets 00:27:21.014 slat (usec): min=4, max=5781, avg=20.32, stdev=27.36 00:27:21.014 clat (usec): min=61, max=464216, avg=520.66, stdev=4435.71 00:27:21.014 lat (usec): min=77, max=464230, avg=540.98, stdev=4435.94 00:27:21.014 clat percentiles (usec): 00:27:21.014 | 50.000th=[ 437], 99.000th=[ 1074], 99.900th=[ 1778], 00:27:21.014 | 99.990th=[400557], 99.999th=[463471] 00:27:21.014 bw ( KiB/s): min=84368, max=198824, per=99.99%, avg=172560.47, stdev=4788.68, samples=114 00:27:21.014 iops : min=21092, max=49706, avg=43139.95, stdev=1197.16, samples=114 00:27:21.014 lat (usec) : 100=0.09%, 250=19.00%, 500=51.15%, 750=23.11%, 1000=5.38% 00:27:21.014 lat (msec) : 2=1.21%, 4=0.03%, 10=0.01%, 20=0.01%, 50=0.01% 00:27:21.014 lat (msec) : 100=0.01%, 500=0.01% 00:27:21.014 cpu : usr=58.82%, sys=26.10%, ctx=10227, majf=0, minf=34182 00:27:21.014 IO depths : 1=12.3%, 2=24.8%, 4=50.2%, 8=12.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:21.014 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:21.014 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:21.014 issued rwts: total=427985,431483,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:21.014 latency : target=0, window=0, percentile=100.00%, depth=8 00:27:21.014 00:27:21.014 Run status group 0 (all jobs): 00:27:21.014 READ: bw=167MiB/s (175MB/s), 167MiB/s-167MiB/s (175MB/s-175MB/s), io=1672MiB (1753MB), run=10001-10001msec 00:27:21.014 WRITE: bw=169MiB/s (177MB/s), 169MiB/s-169MiB/s (177MB/s-177MB/s), io=1685MiB (1767MB), run=10001-10001msec 00:27:21.014 ----------------------------------------------------- 00:27:21.014 Suppressions used: 00:27:21.014 count bytes template 00:27:21.014 6 48 /usr/src/fio/parse.c 00:27:21.014 3197 306912 /usr/src/fio/iolog.c 00:27:21.014 1 8 libtcmalloc_minimal.so 00:27:21.014 1 904 libcrypto.so 00:27:21.014 ----------------------------------------------------- 00:27:21.014 00:27:21.014 00:27:21.014 real 0m12.408s 00:27:21.014 user 0m37.212s 00:27:21.014 sys 0m16.006s 00:27:21.014 20:25:15 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:21.014 20:25:15 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:27:21.014 ************************************ 00:27:21.014 END TEST bdev_fio_rw_verify 00:27:21.014 ************************************ 00:27:21.014 20:25:16 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:27:21.014 20:25:16 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:27:21.014 20:25:16 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:27:21.014 20:25:16 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:27:21.014 20:25:16 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=trim 00:27:21.014 20:25:16 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type= 00:27:21.014 20:25:16 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:27:21.014 20:25:16 
blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:27:21.014 20:25:16 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:27:21.014 20:25:16 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z trim ']' 00:27:21.014 20:25:16 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:27:21.014 20:25:16 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:27:21.014 20:25:16 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:27:21.014 20:25:16 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1313 -- # '[' trim == verify ']' 00:27:21.014 20:25:16 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # '[' trim == trim ']' 00:27:21.014 20:25:16 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1329 -- # echo rw=trimwrite 00:27:21.014 20:25:16 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:27:21.015 20:25:16 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "697ac085-9388-4b74-82ee-39d2adf58fe7"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "697ac085-9388-4b74-82ee-39d2adf58fe7",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "b12ba895-8322-416c-94ed-bae309fbc18a"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "b12ba895-8322-416c-94ed-bae309fbc18a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "975ee150-4b2c-4e16-803a-d2ad1cb780f6"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "975ee150-4b2c-4e16-803a-d2ad1cb780f6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": 
false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "6cd65425-c9cd-41b6-a8bb-eca2cfa85188"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "6cd65425-c9cd-41b6-a8bb-eca2cfa85188",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "0cbb6aa3-4f80-43a7-bdb2-8fcaadc21468"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "0cbb6aa3-4f80-43a7-bdb2-8fcaadc21468",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "3cfafcc8-71e8-4906-8ced-4b5d6881c41c"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "3cfafcc8-71e8-4906-8ced-4b5d6881c41c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:27:21.015 20:25:16 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:27:21.015 20:25:16 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:27:21.015 20:25:16 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:27:21.015 /home/vagrant/spdk_repo/spdk 00:27:21.015 20:25:16 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:27:21.015 20:25:16 
blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 00:27:21.015 00:27:21.015 real 0m12.563s 00:27:21.015 user 0m37.294s 00:27:21.015 sys 0m16.071s 00:27:21.015 20:25:16 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:21.015 20:25:16 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:27:21.015 ************************************ 00:27:21.015 END TEST bdev_fio 00:27:21.015 ************************************ 00:27:21.015 20:25:16 blockdev_xnvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:27:21.015 20:25:16 blockdev_xnvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:27:21.015 20:25:16 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:27:21.015 20:25:16 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:21.015 20:25:16 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:27:21.015 ************************************ 00:27:21.015 START TEST bdev_verify 00:27:21.015 ************************************ 00:27:21.015 20:25:16 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:27:21.015 [2024-10-01 20:25:16.175571] Starting SPDK v25.01-pre git sha1 0c2005fb5 / DPDK 24.03.0 initialization... 00:27:21.015 [2024-10-01 20:25:16.175718] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70975 ] 00:27:21.273 [2024-10-01 20:25:16.327212] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:27:21.531 [2024-10-01 20:25:16.512220] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:27:21.531 [2024-10-01 20:25:16.512376] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:27:22.097 Running I/O for 5 seconds... 
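The verify pass now starting is plain bdevperf driven by the bdev.json generated earlier in the run; outside the harness the same run reduces to the flags visible in the trace (workspace paths as in this job, adjust for a local tree):

SPDK=/home/vagrant/spdk_repo/spdk
# -q 128: queue depth, -o 4096: 4 KiB I/Os, -w verify -t 5: five-second
# verify pass, -m 0x3: cores 0 and 1 (the two reactors logged above);
# -C and the trailing '' are copied from the trace as-is.
"$SPDK/build/examples/bdevperf" --json "$SPDK/test/bdev/bdev.json" \
    -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''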
00:27:27.224 20832.00 IOPS, 81.38 MiB/s 21318.50 IOPS, 83.28 MiB/s 22031.33 IOPS, 86.06 MiB/s 22520.00 IOPS, 87.97 MiB/s 22636.80 IOPS, 88.43 MiB/s 00:27:27.224 Latency(us) 00:27:27.224 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:27.224 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:27:27.224 Verification LBA range: start 0x0 length 0xa0000 00:27:27.224 nvme0n1 : 5.06 1644.92 6.43 0.00 0.00 77663.81 19055.85 78643.20 00:27:27.224 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:27:27.224 Verification LBA range: start 0xa0000 length 0xa0000 00:27:27.224 nvme0n1 : 5.07 1641.53 6.41 0.00 0.00 77819.54 14720.39 79449.80 00:27:27.224 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:27:27.224 Verification LBA range: start 0x0 length 0xbd0bd 00:27:27.224 nvme1n1 : 5.07 2991.89 11.69 0.00 0.00 42527.12 3478.45 76626.71 00:27:27.224 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:27:27.224 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:27:27.224 nvme1n1 : 5.06 3024.16 11.81 0.00 0.00 42094.59 3856.54 82272.89 00:27:27.224 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:27:27.224 Verification LBA range: start 0x0 length 0x80000 00:27:27.224 nvme2n1 : 5.05 1621.18 6.33 0.00 0.00 78311.34 19761.62 76626.71 00:27:27.224 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:27:27.224 Verification LBA range: start 0x80000 length 0x80000 00:27:27.224 nvme2n1 : 5.07 1640.88 6.41 0.00 0.00 77383.53 8318.03 73400.32 00:27:27.224 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:27:27.224 Verification LBA range: start 0x0 length 0x80000 00:27:27.224 nvme2n2 : 5.07 1640.04 6.41 0.00 0.00 77219.26 13611.32 76626.71 00:27:27.224 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:27:27.224 Verification LBA range: start 0x80000 length 0x80000 00:27:27.224 nvme2n2 : 5.08 1639.29 6.40 0.00 0.00 77264.94 8469.27 67754.14 00:27:27.224 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:27:27.224 Verification LBA range: start 0x0 length 0x80000 00:27:27.224 nvme2n3 : 5.08 1662.01 6.49 0.00 0.00 76030.21 5747.00 66544.25 00:27:27.224 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:27:27.224 Verification LBA range: start 0x80000 length 0x80000 00:27:27.224 nvme2n3 : 5.08 1638.80 6.40 0.00 0.00 77116.94 9175.04 71787.13 00:27:27.224 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:27:27.224 Verification LBA range: start 0x0 length 0x20000 00:27:27.224 nvme3n1 : 5.08 1638.49 6.40 0.00 0.00 76957.08 8620.50 81869.59 00:27:27.224 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:27:27.224 Verification LBA range: start 0x20000 length 0x20000 00:27:27.224 nvme3n1 : 5.07 1640.32 6.41 0.00 0.00 76874.42 8973.39 94775.14 00:27:27.224 =================================================================================================================== 00:27:27.224 Total : 22423.51 87.59 0.00 0.00 67893.42 3478.45 94775.14 00:27:28.607 00:27:28.607 real 0m7.346s 00:27:28.607 user 0m11.798s 00:27:28.607 sys 0m1.751s 00:27:28.607 20:25:23 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:28.607 20:25:23 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:27:28.607 ************************************ 00:27:28.607 
END TEST bdev_verify 00:27:28.607 ************************************ 00:27:28.607 20:25:23 blockdev_xnvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:27:28.607 20:25:23 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:27:28.607 20:25:23 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:28.607 20:25:23 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:27:28.607 ************************************ 00:27:28.607 START TEST bdev_verify_big_io 00:27:28.607 ************************************ 00:27:28.607 20:25:23 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:27:28.607 [2024-10-01 20:25:23.572478] Starting SPDK v25.01-pre git sha1 0c2005fb5 / DPDK 24.03.0 initialization... 00:27:28.607 [2024-10-01 20:25:23.572578] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71083 ] 00:27:28.607 [2024-10-01 20:25:23.718547] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:27:28.869 [2024-10-01 20:25:23.906345] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:27:28.869 [2024-10-01 20:25:23.906611] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:27:29.439 Running I/O for 5 seconds... 00:27:35.544 888.00 IOPS, 55.50 MiB/s 2321.50 IOPS, 145.09 MiB/s 2986.67 IOPS, 186.67 MiB/s 00:27:35.544 Latency(us) 00:27:35.544 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:35.544 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:27:35.544 Verification LBA range: start 0x0 length 0xa000 00:27:35.544 nvme0n1 : 5.81 132.24 8.26 0.00 0.00 929960.04 154060.01 1206669.00 00:27:35.544 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:27:35.544 Verification LBA range: start 0xa000 length 0xa000 00:27:35.544 nvme0n1 : 6.01 90.49 5.66 0.00 0.00 1252339.58 140347.86 1884210.41 00:27:35.544 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:27:35.544 Verification LBA range: start 0x0 length 0xbd0b 00:27:35.544 nvme1n1 : 6.01 149.02 9.31 0.00 0.00 800399.22 7662.67 1045349.61 00:27:35.544 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:27:35.544 Verification LBA range: start 0xbd0b length 0xbd0b 00:27:35.544 nvme1n1 : 5.92 172.92 10.81 0.00 0.00 650267.77 54848.59 771106.66 00:27:35.544 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:27:35.544 Verification LBA range: start 0x0 length 0x8000 00:27:35.544 nvme2n1 : 6.13 114.75 7.17 0.00 0.00 992286.65 88725.66 1238932.87 00:27:35.544 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:27:35.544 Verification LBA range: start 0x8000 length 0x8000 00:27:35.544 nvme2n1 : 6.01 143.64 8.98 0.00 0.00 748056.10 114536.76 1155046.79 00:27:35.544 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:27:35.544 Verification LBA range: start 0x0 length 0x8000 00:27:35.544 nvme2n2 : 6.02 79.62 4.98 0.00 0.00 
1378293.12 195196.46 2426243.54 00:27:35.544 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:27:35.544 Verification LBA range: start 0x8000 length 0x8000 00:27:35.545 nvme2n2 : 6.12 130.74 8.17 0.00 0.00 808985.10 5394.12 2684354.56 00:27:35.545 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:27:35.545 Verification LBA range: start 0x0 length 0x8000 00:27:35.545 nvme2n3 : 6.15 150.72 9.42 0.00 0.00 718441.24 10384.94 1703532.70 00:27:35.545 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:27:35.545 Verification LBA range: start 0x8000 length 0x8000 00:27:35.545 nvme2n3 : 6.12 86.22 5.39 0.00 0.00 1422397.77 170998.55 1806777.11 00:27:35.545 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:27:35.545 Verification LBA range: start 0x0 length 0x2000 00:27:35.545 nvme3n1 : 6.15 96.68 6.04 0.00 0.00 1081101.72 8418.86 2916654.47 00:27:35.545 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:27:35.545 Verification LBA range: start 0x2000 length 0x2000 00:27:35.545 nvme3n1 : 5.92 140.43 8.78 0.00 0.00 845810.54 178257.92 1000180.18 00:27:35.545 =================================================================================================================== 00:27:35.545 Total : 1487.48 92.97 0.00 0.00 914502.21 5394.12 2916654.47 00:27:37.004 00:27:37.004 real 0m8.621s 00:27:37.004 user 0m15.761s 00:27:37.004 sys 0m0.492s 00:27:37.004 20:25:32 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:37.004 20:25:32 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:27:37.004 ************************************ 00:27:37.004 END TEST bdev_verify_big_io 00:27:37.004 ************************************ 00:27:37.004 20:25:32 blockdev_xnvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:27:37.004 20:25:32 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:27:37.004 20:25:32 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:37.004 20:25:32 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:27:37.004 ************************************ 00:27:37.004 START TEST bdev_write_zeroes 00:27:37.004 ************************************ 00:27:37.004 20:25:32 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:27:37.265 [2024-10-01 20:25:32.236149] Starting SPDK v25.01-pre git sha1 0c2005fb5 / DPDK 24.03.0 initialization... 00:27:37.265 [2024-10-01 20:25:32.236320] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71204 ] 00:27:37.265 [2024-10-01 20:25:32.393270] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:37.526 [2024-10-01 20:25:32.559881] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:27:38.098 Running I/O for 1 seconds... 
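In the write_zeroes results that follow, the MiB/s column is derived rather than separately measured: IOPS times the 4 KiB I/O size. Checking the opening aggregate sample:

echo $((67360 * 4096))                     # 275906560 bytes per second
awk 'BEGIN { print 67360 * 4096 / 2^20 }'  # 263.125, reported as 263.12 MiB/s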
00:27:39.041 67360.00 IOPS, 263.12 MiB/s 00:27:39.041 Latency(us) 00:27:39.041 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:39.041 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:27:39.041 nvme0n1 : 1.02 9669.11 37.77 0.00 0.00 13225.71 6351.95 21778.12 00:27:39.041 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:27:39.041 nvme1n1 : 1.02 18537.53 72.41 0.00 0.00 6890.79 3906.95 13510.50 00:27:39.041 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:27:39.041 nvme2n1 : 1.03 9615.54 37.56 0.00 0.00 13221.06 7309.78 24197.91 00:27:39.041 Job: nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:27:39.041 nvme2n2 : 1.03 9603.50 37.51 0.00 0.00 13229.35 7461.02 24500.38 00:27:39.041 Job: nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:27:39.041 nvme2n3 : 1.03 9591.42 37.47 0.00 0.00 13236.44 7561.85 24903.68 00:27:39.041 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:27:39.041 nvme3n1 : 1.03 9579.23 37.42 0.00 0.00 13246.85 7713.08 25206.15 00:27:39.041 =================================================================================================================== 00:27:39.041 Total : 66596.33 260.14 0.00 0.00 11468.98 3906.95 25206.15 00:27:40.430 00:27:40.430 real 0m3.170s 00:27:40.430 user 0m2.364s 00:27:40.430 sys 0m0.640s 00:27:40.430 20:25:35 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:40.430 20:25:35 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:27:40.430 ************************************ 00:27:40.430 END TEST bdev_write_zeroes 00:27:40.430 ************************************ 00:27:40.430 20:25:35 blockdev_xnvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:27:40.430 20:25:35 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:27:40.430 20:25:35 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:40.430 20:25:35 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:27:40.430 ************************************ 00:27:40.430 START TEST bdev_json_nonenclosed 00:27:40.430 ************************************ 00:27:40.430 20:25:35 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:27:40.430 [2024-10-01 20:25:35.436687] Starting SPDK v25.01-pre git sha1 0c2005fb5 / DPDK 24.03.0 initialization... 00:27:40.430 [2024-10-01 20:25:35.436824] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71264 ] 00:27:40.430 [2024-10-01 20:25:35.584454] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:40.688 [2024-10-01 20:25:35.774963] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:27:40.689 [2024-10-01 20:25:35.775046] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
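The fixture behind this failure drops the outer object on purpose. For contrast, the smallest shape json_config_prepare_ctx accepts wraps a "subsystems" array in a top-level {} (illustrative contents, not the repo fixture; the shape matches the bdev.json dump earlier in this run):

cat > good.json <<'EOF'
{
  "subsystems": [
    { "subsystem": "bdev", "config": [] }
  ]
}
EOF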
00:27:40.689 [2024-10-01 20:25:35.775064] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:27:40.689 [2024-10-01 20:25:35.775073] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:27:40.950 00:27:40.950 real 0m0.698s 00:27:40.950 user 0m0.486s 00:27:40.950 sys 0m0.107s 00:27:40.950 20:25:36 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:40.950 20:25:36 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:27:40.950 ************************************ 00:27:40.950 END TEST bdev_json_nonenclosed 00:27:40.950 ************************************ 00:27:40.950 20:25:36 blockdev_xnvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:27:40.950 20:25:36 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:27:40.950 20:25:36 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:40.950 20:25:36 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:27:40.950 ************************************ 00:27:40.950 START TEST bdev_json_nonarray 00:27:40.950 ************************************ 00:27:40.950 20:25:36 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:27:41.210 [2024-10-01 20:25:36.172759] Starting SPDK v25.01-pre git sha1 0c2005fb5 / DPDK 24.03.0 initialization... 00:27:41.210 [2024-10-01 20:25:36.172882] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71284 ] 00:27:41.210 [2024-10-01 20:25:36.325970] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:41.471 [2024-10-01 20:25:36.521352] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:27:41.471 [2024-10-01 20:25:36.521442] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
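Both JSON negative tests succeed precisely because bdevperf exits non-zero (the spdk_app_stop'd warning above). A hedged sketch of that inverted assertion; the suite's own run_test wrapper differs in detail:

spdk=/home/vagrant/spdk_repo/spdk
if "$spdk/build/examples/bdevperf" --json "$spdk/test/bdev/nonarray.json" \
       -q 128 -o 4096 -w write_zeroes -t 1 ''; then
    echo 'malformed config was accepted, test should fail' >&2
    exit 1
fi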
00:27:41.471 [2024-10-01 20:25:36.521459] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:27:41.471 [2024-10-01 20:25:36.521469] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:27:41.731 00:27:41.731 real 0m0.705s 00:27:41.731 user 0m0.505s 00:27:41.731 sys 0m0.094s 00:27:41.731 20:25:36 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:41.731 20:25:36 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:27:41.731 ************************************ 00:27:41.731 END TEST bdev_json_nonarray 00:27:41.731 ************************************ 00:27:41.731 20:25:36 blockdev_xnvme -- bdev/blockdev.sh@786 -- # [[ xnvme == bdev ]] 00:27:41.731 20:25:36 blockdev_xnvme -- bdev/blockdev.sh@793 -- # [[ xnvme == gpt ]] 00:27:41.731 20:25:36 blockdev_xnvme -- bdev/blockdev.sh@797 -- # [[ xnvme == crypto_sw ]] 00:27:41.731 20:25:36 blockdev_xnvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:27:41.731 20:25:36 blockdev_xnvme -- bdev/blockdev.sh@810 -- # cleanup 00:27:41.731 20:25:36 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:27:41.731 20:25:36 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:27:41.731 20:25:36 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:27:41.731 20:25:36 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:27:41.731 20:25:36 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:27:41.731 20:25:36 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:27:41.731 20:25:36 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:27:42.339 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:28:50.111 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:28:50.111 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:28:50.111 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:28:50.111 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:28:50.111 ************************************ 00:28:50.111 END TEST blockdev_xnvme 00:28:50.111 ************************************ 00:28:50.111 00:28:50.111 real 2m2.082s 00:28:50.111 user 1m41.021s 00:28:50.111 sys 3m15.630s 00:28:50.111 20:26:42 blockdev_xnvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:50.111 20:26:42 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:28:50.111 20:26:42 -- spdk/autotest.sh@247 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:28:50.111 20:26:42 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:28:50.111 20:26:42 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:50.111 20:26:42 -- common/autotest_common.sh@10 -- # set +x 00:28:50.111 ************************************ 00:28:50.111 START TEST ublk 00:28:50.111 ************************************ 00:28:50.111 20:26:42 ublk -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:28:50.111 * Looking for test storage... 
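The wall of scripts/common.sh tracing below is a semantic version compare: lt 1.15 2 splits each version string on '.', '-' or ':' and walks the fields numerically, here deciding which lcov --rc option spellings to use. A condensed restatement (not the verbatim common.sh source):

lt() {
    local IFS=.-: a b v
    read -ra a <<< "$1"
    read -ra b <<< "$2"
    for ((v = 0; v < ${#a[@]} || v < ${#b[@]}; v++)); do
        (( ${a[v]:-0} < ${b[v]:-0} )) && return 0
        (( ${a[v]:-0} > ${b[v]:-0} )) && return 1
    done
    return 1    # equal is not less-than
}
lt 1.15 2 && echo 'lcov older than 2: keep the legacy lcov_*_coverage names'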
00:28:50.111 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:28:50.111 20:26:42 ublk -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:28:50.111 20:26:42 ublk -- common/autotest_common.sh@1681 -- # lcov --version 00:28:50.111 20:26:42 ublk -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:28:50.111 20:26:42 ublk -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:28:50.111 20:26:42 ublk -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:50.111 20:26:42 ublk -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:50.111 20:26:42 ublk -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:50.111 20:26:42 ublk -- scripts/common.sh@336 -- # IFS=.-: 00:28:50.111 20:26:42 ublk -- scripts/common.sh@336 -- # read -ra ver1 00:28:50.111 20:26:42 ublk -- scripts/common.sh@337 -- # IFS=.-: 00:28:50.111 20:26:42 ublk -- scripts/common.sh@337 -- # read -ra ver2 00:28:50.111 20:26:42 ublk -- scripts/common.sh@338 -- # local 'op=<' 00:28:50.111 20:26:42 ublk -- scripts/common.sh@340 -- # ver1_l=2 00:28:50.111 20:26:42 ublk -- scripts/common.sh@341 -- # ver2_l=1 00:28:50.111 20:26:42 ublk -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:50.111 20:26:42 ublk -- scripts/common.sh@344 -- # case "$op" in 00:28:50.111 20:26:42 ublk -- scripts/common.sh@345 -- # : 1 00:28:50.111 20:26:42 ublk -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:50.111 20:26:42 ublk -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:50.111 20:26:42 ublk -- scripts/common.sh@365 -- # decimal 1 00:28:50.111 20:26:42 ublk -- scripts/common.sh@353 -- # local d=1 00:28:50.111 20:26:42 ublk -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:50.111 20:26:42 ublk -- scripts/common.sh@355 -- # echo 1 00:28:50.111 20:26:42 ublk -- scripts/common.sh@365 -- # ver1[v]=1 00:28:50.111 20:26:42 ublk -- scripts/common.sh@366 -- # decimal 2 00:28:50.111 20:26:42 ublk -- scripts/common.sh@353 -- # local d=2 00:28:50.111 20:26:42 ublk -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:50.111 20:26:42 ublk -- scripts/common.sh@355 -- # echo 2 00:28:50.111 20:26:42 ublk -- scripts/common.sh@366 -- # ver2[v]=2 00:28:50.111 20:26:42 ublk -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:50.111 20:26:42 ublk -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:50.111 20:26:42 ublk -- scripts/common.sh@368 -- # return 0 00:28:50.111 20:26:42 ublk -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:50.111 20:26:42 ublk -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:28:50.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:50.111 --rc genhtml_branch_coverage=1 00:28:50.111 --rc genhtml_function_coverage=1 00:28:50.111 --rc genhtml_legend=1 00:28:50.111 --rc geninfo_all_blocks=1 00:28:50.111 --rc geninfo_unexecuted_blocks=1 00:28:50.111 00:28:50.111 ' 00:28:50.111 20:26:42 ublk -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:28:50.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:50.111 --rc genhtml_branch_coverage=1 00:28:50.111 --rc genhtml_function_coverage=1 00:28:50.111 --rc genhtml_legend=1 00:28:50.111 --rc geninfo_all_blocks=1 00:28:50.111 --rc geninfo_unexecuted_blocks=1 00:28:50.111 00:28:50.111 ' 00:28:50.111 20:26:42 ublk -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:28:50.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:50.111 --rc genhtml_branch_coverage=1 00:28:50.111 --rc 
genhtml_function_coverage=1 00:28:50.111 --rc genhtml_legend=1 00:28:50.111 --rc geninfo_all_blocks=1 00:28:50.111 --rc geninfo_unexecuted_blocks=1 00:28:50.111 00:28:50.111 ' 00:28:50.111 20:26:42 ublk -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:28:50.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:50.111 --rc genhtml_branch_coverage=1 00:28:50.111 --rc genhtml_function_coverage=1 00:28:50.111 --rc genhtml_legend=1 00:28:50.111 --rc geninfo_all_blocks=1 00:28:50.111 --rc geninfo_unexecuted_blocks=1 00:28:50.111 00:28:50.111 ' 00:28:50.112 20:26:42 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:28:50.112 20:26:42 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:28:50.112 20:26:42 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:28:50.112 20:26:42 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:28:50.112 20:26:42 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:28:50.112 20:26:42 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:28:50.112 20:26:42 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:28:50.112 20:26:42 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:28:50.112 20:26:42 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:28:50.112 20:26:42 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:28:50.112 20:26:42 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:28:50.112 20:26:42 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:28:50.112 20:26:42 ublk -- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:28:50.112 20:26:42 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:28:50.112 20:26:42 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:28:50.112 20:26:42 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:28:50.112 20:26:42 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:28:50.112 20:26:42 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:28:50.112 20:26:42 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:28:50.112 20:26:42 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:28:50.112 20:26:42 ublk -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:28:50.112 20:26:42 ublk -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:50.112 20:26:42 ublk -- common/autotest_common.sh@10 -- # set +x 00:28:50.112 ************************************ 00:28:50.112 START TEST test_save_ublk_config 00:28:50.112 ************************************ 00:28:50.112 20:26:42 ublk.test_save_ublk_config -- common/autotest_common.sh@1125 -- # test_save_config 00:28:50.112 20:26:42 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:28:50.112 20:26:42 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=71602 00:28:50.112 20:26:42 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:28:50.112 20:26:42 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:28:50.112 20:26:42 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 71602 00:28:50.112 20:26:42 ublk.test_save_ublk_config -- common/autotest_common.sh@831 -- # '[' -z 71602 ']' 00:28:50.112 20:26:42 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:50.112 20:26:42 ublk.test_save_ublk_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:50.112 20:26:42 ublk.test_save_ublk_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
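Reconstructed from the trace, the setup half of test_save_ublk_config reduces to a handful of RPCs against the freshly started target. A sketch, with rpc.py taken from scripts/ in the same checkout and sizes mirroring the malloc0 parameters saved in the dump below:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk &
    # ...wait for the RPC socket /var/tmp/spdk.sock, then:
    ./scripts/rpc.py ublk_create_target                       # target poller on core 0 (cpumask "1" in the dump)
    ./scripts/rpc.py bdev_malloc_create -b malloc0 32 4096    # 8192 blocks x 4096 B = 32 MiB
    ./scripts/rpc.py ublk_start_disk malloc0 0 -q 1 -d 128    # exposes /dev/ublkb0
    ./scripts/rpc.py save_config                              # emits the JSON captured below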
00:28:50.112 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:50.112 20:26:42 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:50.112 20:26:42 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:28:50.112 [2024-10-01 20:26:42.665707] Starting SPDK v25.01-pre git sha1 0c2005fb5 / DPDK 24.03.0 initialization... 00:28:50.112 [2024-10-01 20:26:42.665865] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71602 ] 00:28:50.112 [2024-10-01 20:26:42.808456] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:50.112 [2024-10-01 20:26:43.008917] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:28:50.112 20:26:43 ublk.test_save_ublk_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:50.112 20:26:43 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # return 0 00:28:50.112 20:26:43 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:28:50.112 20:26:43 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:28:50.112 20:26:43 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:50.112 20:26:43 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:28:50.684 [2024-10-01 20:26:45.821751] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:28:50.684 [2024-10-01 20:26:45.822641] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:28:50.684 malloc0 00:28:50.684 [2024-10-01 20:26:45.855551] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:28:50.684 [2024-10-01 20:26:45.855635] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:28:50.684 [2024-10-01 20:26:45.855647] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:28:50.684 [2024-10-01 20:26:45.855654] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:28:53.228 [2024-10-01 20:26:48.148742] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:28:53.228 [2024-10-01 20:26:48.148794] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:28:54.168 [2024-10-01 20:26:49.044755] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:28:54.168 [2024-10-01 20:26:49.044873] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:28:55.673 [2024-10-01 20:26:50.480009] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:28:55.673 0 00:28:55.673 20:26:50 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:55.673 20:26:50 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:28:55.673 20:26:50 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:55.673 20:26:50 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:28:55.673 20:26:50 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:55.673 20:26:50 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:28:55.673 "subsystems": [ 00:28:55.673 { 00:28:55.673 "subsystem": "fsdev", 00:28:55.673 
"config": [ 00:28:55.673 { 00:28:55.673 "method": "fsdev_set_opts", 00:28:55.673 "params": { 00:28:55.673 "fsdev_io_pool_size": 65535, 00:28:55.673 "fsdev_io_cache_size": 256 00:28:55.673 } 00:28:55.673 } 00:28:55.673 ] 00:28:55.673 }, 00:28:55.673 { 00:28:55.673 "subsystem": "keyring", 00:28:55.673 "config": [] 00:28:55.673 }, 00:28:55.673 { 00:28:55.673 "subsystem": "iobuf", 00:28:55.673 "config": [ 00:28:55.673 { 00:28:55.673 "method": "iobuf_set_options", 00:28:55.673 "params": { 00:28:55.673 "small_pool_count": 8192, 00:28:55.673 "large_pool_count": 1024, 00:28:55.673 "small_bufsize": 8192, 00:28:55.673 "large_bufsize": 135168 00:28:55.673 } 00:28:55.673 } 00:28:55.673 ] 00:28:55.673 }, 00:28:55.673 { 00:28:55.673 "subsystem": "sock", 00:28:55.673 "config": [ 00:28:55.673 { 00:28:55.673 "method": "sock_set_default_impl", 00:28:55.673 "params": { 00:28:55.673 "impl_name": "posix" 00:28:55.673 } 00:28:55.673 }, 00:28:55.673 { 00:28:55.673 "method": "sock_impl_set_options", 00:28:55.673 "params": { 00:28:55.673 "impl_name": "ssl", 00:28:55.673 "recv_buf_size": 4096, 00:28:55.673 "send_buf_size": 4096, 00:28:55.673 "enable_recv_pipe": true, 00:28:55.673 "enable_quickack": false, 00:28:55.673 "enable_placement_id": 0, 00:28:55.673 "enable_zerocopy_send_server": true, 00:28:55.673 "enable_zerocopy_send_client": false, 00:28:55.673 "zerocopy_threshold": 0, 00:28:55.673 "tls_version": 0, 00:28:55.673 "enable_ktls": false 00:28:55.673 } 00:28:55.673 }, 00:28:55.673 { 00:28:55.673 "method": "sock_impl_set_options", 00:28:55.673 "params": { 00:28:55.673 "impl_name": "posix", 00:28:55.673 "recv_buf_size": 2097152, 00:28:55.673 "send_buf_size": 2097152, 00:28:55.673 "enable_recv_pipe": true, 00:28:55.673 "enable_quickack": false, 00:28:55.673 "enable_placement_id": 0, 00:28:55.673 "enable_zerocopy_send_server": true, 00:28:55.673 "enable_zerocopy_send_client": false, 00:28:55.673 "zerocopy_threshold": 0, 00:28:55.673 "tls_version": 0, 00:28:55.673 "enable_ktls": false 00:28:55.673 } 00:28:55.673 } 00:28:55.673 ] 00:28:55.673 }, 00:28:55.673 { 00:28:55.673 "subsystem": "vmd", 00:28:55.673 "config": [] 00:28:55.673 }, 00:28:55.673 { 00:28:55.673 "subsystem": "accel", 00:28:55.673 "config": [ 00:28:55.673 { 00:28:55.673 "method": "accel_set_options", 00:28:55.673 "params": { 00:28:55.673 "small_cache_size": 128, 00:28:55.673 "large_cache_size": 16, 00:28:55.673 "task_count": 2048, 00:28:55.673 "sequence_count": 2048, 00:28:55.673 "buf_count": 2048 00:28:55.673 } 00:28:55.673 } 00:28:55.673 ] 00:28:55.674 }, 00:28:55.674 { 00:28:55.674 "subsystem": "bdev", 00:28:55.674 "config": [ 00:28:55.674 { 00:28:55.674 "method": "bdev_set_options", 00:28:55.674 "params": { 00:28:55.674 "bdev_io_pool_size": 65535, 00:28:55.674 "bdev_io_cache_size": 256, 00:28:55.674 "bdev_auto_examine": true, 00:28:55.674 "iobuf_small_cache_size": 128, 00:28:55.674 "iobuf_large_cache_size": 16, 00:28:55.674 "bdev_io_stack_size": 4096 00:28:55.674 } 00:28:55.674 }, 00:28:55.674 { 00:28:55.674 "method": "bdev_raid_set_options", 00:28:55.674 "params": { 00:28:55.674 "process_window_size_kb": 1024, 00:28:55.674 "process_max_bandwidth_mb_sec": 0 00:28:55.674 } 00:28:55.674 }, 00:28:55.674 { 00:28:55.674 "method": "bdev_iscsi_set_options", 00:28:55.674 "params": { 00:28:55.674 "timeout_sec": 30 00:28:55.674 } 00:28:55.674 }, 00:28:55.674 { 00:28:55.674 "method": "bdev_nvme_set_options", 00:28:55.674 "params": { 00:28:55.674 "action_on_timeout": "none", 00:28:55.674 "timeout_us": 0, 00:28:55.674 "timeout_admin_us": 0, 00:28:55.674 
"keep_alive_timeout_ms": 10000, 00:28:55.674 "arbitration_burst": 0, 00:28:55.674 "low_priority_weight": 0, 00:28:55.674 "medium_priority_weight": 0, 00:28:55.674 "high_priority_weight": 0, 00:28:55.674 "nvme_adminq_poll_period_us": 10000, 00:28:55.674 "nvme_ioq_poll_period_us": 0, 00:28:55.674 "io_queue_requests": 0, 00:28:55.674 "delay_cmd_submit": true, 00:28:55.674 "transport_retry_count": 4, 00:28:55.674 "bdev_retry_count": 3, 00:28:55.674 "transport_ack_timeout": 0, 00:28:55.674 "ctrlr_loss_timeout_sec": 0, 00:28:55.674 "reconnect_delay_sec": 0, 00:28:55.674 "fast_io_fail_timeout_sec": 0, 00:28:55.674 "disable_auto_failback": false, 00:28:55.674 "generate_uuids": false, 00:28:55.674 "transport_tos": 0, 00:28:55.674 "nvme_error_stat": false, 00:28:55.674 "rdma_srq_size": 0, 00:28:55.674 "io_path_stat": false, 00:28:55.674 "allow_accel_sequence": false, 00:28:55.674 "rdma_max_cq_size": 0, 00:28:55.674 "rdma_cm_event_timeout_ms": 0, 00:28:55.674 "dhchap_digests": [ 00:28:55.674 "sha256", 00:28:55.674 "sha384", 00:28:55.674 "sha512" 00:28:55.674 ], 00:28:55.674 "dhchap_dhgroups": [ 00:28:55.674 "null", 00:28:55.674 "ffdhe2048", 00:28:55.674 "ffdhe3072", 00:28:55.674 "ffdhe4096", 00:28:55.674 "ffdhe6144", 00:28:55.674 "ffdhe8192" 00:28:55.674 ] 00:28:55.674 } 00:28:55.674 }, 00:28:55.674 { 00:28:55.674 "method": "bdev_nvme_set_hotplug", 00:28:55.674 "params": { 00:28:55.674 "period_us": 100000, 00:28:55.674 "enable": false 00:28:55.674 } 00:28:55.674 }, 00:28:55.674 { 00:28:55.674 "method": "bdev_malloc_create", 00:28:55.674 "params": { 00:28:55.674 "name": "malloc0", 00:28:55.674 "num_blocks": 8192, 00:28:55.674 "block_size": 4096, 00:28:55.674 "physical_block_size": 4096, 00:28:55.674 "uuid": "01db3c54-62cb-4d15-9faf-9d4b05b1a4a5", 00:28:55.674 "optimal_io_boundary": 0, 00:28:55.674 "md_size": 0, 00:28:55.674 "dif_type": 0, 00:28:55.674 "dif_is_head_of_md": false, 00:28:55.674 "dif_pi_format": 0 00:28:55.674 } 00:28:55.674 }, 00:28:55.674 { 00:28:55.674 "method": "bdev_wait_for_examine" 00:28:55.674 } 00:28:55.674 ] 00:28:55.674 }, 00:28:55.674 { 00:28:55.674 "subsystem": "scsi", 00:28:55.674 "config": null 00:28:55.674 }, 00:28:55.674 { 00:28:55.674 "subsystem": "scheduler", 00:28:55.674 "config": [ 00:28:55.674 { 00:28:55.674 "method": "framework_set_scheduler", 00:28:55.674 "params": { 00:28:55.674 "name": "static" 00:28:55.674 } 00:28:55.674 } 00:28:55.674 ] 00:28:55.674 }, 00:28:55.674 { 00:28:55.674 "subsystem": "vhost_scsi", 00:28:55.674 "config": [] 00:28:55.674 }, 00:28:55.674 { 00:28:55.674 "subsystem": "vhost_blk", 00:28:55.674 "config": [] 00:28:55.674 }, 00:28:55.674 { 00:28:55.674 "subsystem": "ublk", 00:28:55.674 "config": [ 00:28:55.674 { 00:28:55.674 "method": "ublk_create_target", 00:28:55.674 "params": { 00:28:55.674 "cpumask": "1" 00:28:55.674 } 00:28:55.674 }, 00:28:55.674 { 00:28:55.674 "method": "ublk_start_disk", 00:28:55.674 "params": { 00:28:55.674 "bdev_name": "malloc0", 00:28:55.674 "ublk_id": 0, 00:28:55.674 "num_queues": 1, 00:28:55.674 "queue_depth": 128 00:28:55.674 } 00:28:55.674 } 00:28:55.674 ] 00:28:55.674 }, 00:28:55.674 { 00:28:55.674 "subsystem": "nbd", 00:28:55.674 "config": [] 00:28:55.674 }, 00:28:55.674 { 00:28:55.674 "subsystem": "nvmf", 00:28:55.674 "config": [ 00:28:55.674 { 00:28:55.674 "method": "nvmf_set_config", 00:28:55.674 "params": { 00:28:55.674 "discovery_filter": "match_any", 00:28:55.674 "admin_cmd_passthru": { 00:28:55.674 "identify_ctrlr": false 00:28:55.674 }, 00:28:55.674 "dhchap_digests": [ 00:28:55.674 "sha256", 00:28:55.674 
"sha384", 00:28:55.674 "sha512" 00:28:55.674 ], 00:28:55.674 "dhchap_dhgroups": [ 00:28:55.674 "null", 00:28:55.674 "ffdhe2048", 00:28:55.674 "ffdhe3072", 00:28:55.674 "ffdhe4096", 00:28:55.674 "ffdhe6144", 00:28:55.674 "ffdhe8192" 00:28:55.674 ] 00:28:55.674 } 00:28:55.674 }, 00:28:55.674 { 00:28:55.674 "method": "nvmf_set_max_subsystems", 00:28:55.674 "params": { 00:28:55.674 "max_subsystems": 1024 00:28:55.674 } 00:28:55.674 }, 00:28:55.674 { 00:28:55.674 "method": "nvmf_set_crdt", 00:28:55.674 "params": { 00:28:55.674 "crdt1": 0, 00:28:55.674 "crdt2": 0, 00:28:55.674 "crdt3": 0 00:28:55.674 } 00:28:55.674 } 00:28:55.674 ] 00:28:55.674 }, 00:28:55.674 { 00:28:55.674 "subsystem": "iscsi", 00:28:55.674 "config": [ 00:28:55.674 { 00:28:55.674 "method": "iscsi_set_options", 00:28:55.674 "params": { 00:28:55.674 "node_base": "iqn.2016-06.io.spdk", 00:28:55.674 "max_sessions": 128, 00:28:55.674 "max_connections_per_session": 2, 00:28:55.674 "max_queue_depth": 64, 00:28:55.674 "default_time2wait": 2, 00:28:55.674 "default_time2retain": 20, 00:28:55.674 "first_burst_length": 8192, 00:28:55.674 "immediate_data": true, 00:28:55.674 "allow_duplicated_isid": false, 00:28:55.674 "error_recovery_level": 0, 00:28:55.674 "nop_timeout": 60, 00:28:55.674 "nop_in_interval": 30, 00:28:55.674 "disable_chap": false, 00:28:55.674 "require_chap": false, 00:28:55.674 "mutual_chap": false, 00:28:55.674 "chap_group": 0, 00:28:55.674 "max_large_datain_per_connection": 64, 00:28:55.674 "max_r2t_per_connection": 4, 00:28:55.674 "pdu_pool_size": 36864, 00:28:55.674 "immediate_data_pool_size": 16384, 00:28:55.674 "data_out_pool_size": 2048 00:28:55.674 } 00:28:55.674 } 00:28:55.674 ] 00:28:55.674 } 00:28:55.674 ] 00:28:55.674 }' 00:28:55.674 20:26:50 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 71602 00:28:55.674 20:26:50 ublk.test_save_ublk_config -- common/autotest_common.sh@950 -- # '[' -z 71602 ']' 00:28:55.674 20:26:50 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # kill -0 71602 00:28:55.674 20:26:50 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # uname 00:28:55.675 20:26:50 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:55.675 20:26:50 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71602 00:28:55.675 20:26:50 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:55.675 20:26:50 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:55.675 killing process with pid 71602 00:28:55.675 20:26:50 ublk.test_save_ublk_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71602' 00:28:55.675 20:26:50 ublk.test_save_ublk_config -- common/autotest_common.sh@969 -- # kill 71602 00:28:55.675 20:26:50 ublk.test_save_ublk_config -- common/autotest_common.sh@974 -- # wait 71602 00:28:56.704 [2024-10-01 20:26:51.711257] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:28:56.704 [2024-10-01 20:26:51.745812] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:28:56.704 [2024-10-01 20:26:51.745954] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:28:56.704 [2024-10-01 20:26:51.752732] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:28:56.704 [2024-10-01 20:26:51.752786] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 
00:28:56.704 [2024-10-01 20:26:51.752794] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:28:56.704 [2024-10-01 20:26:51.752817] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:28:56.704 [2024-10-01 20:26:51.752952] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:28:58.616 20:26:53 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=71732 00:28:58.616 20:26:53 ublk.test_save_ublk_config -- ublk/ublk.sh@121 -- # waitforlisten 71732 00:28:58.616 20:26:53 ublk.test_save_ublk_config -- common/autotest_common.sh@831 -- # '[' -z 71732 ']' 00:28:58.616 20:26:53 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:28:58.616 20:26:53 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:58.616 20:26:53 ublk.test_save_ublk_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:58.616 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:58.616 20:26:53 ublk.test_save_ublk_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:58.616 20:26:53 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:28:58.616 "subsystems": [ 00:28:58.616 { 00:28:58.616 "subsystem": "fsdev", 00:28:58.616 "config": [ 00:28:58.616 { 00:28:58.616 "method": "fsdev_set_opts", 00:28:58.616 "params": { 00:28:58.616 "fsdev_io_pool_size": 65535, 00:28:58.616 "fsdev_io_cache_size": 256 00:28:58.616 } 00:28:58.616 } 00:28:58.616 ] 00:28:58.616 }, 00:28:58.616 { 00:28:58.616 "subsystem": "keyring", 00:28:58.616 "config": [] 00:28:58.616 }, 00:28:58.616 { 00:28:58.616 "subsystem": "iobuf", 00:28:58.616 "config": [ 00:28:58.616 { 00:28:58.616 "method": "iobuf_set_options", 00:28:58.616 "params": { 00:28:58.616 "small_pool_count": 8192, 00:28:58.616 "large_pool_count": 1024, 00:28:58.616 "small_bufsize": 8192, 00:28:58.616 "large_bufsize": 135168 00:28:58.616 } 00:28:58.616 } 00:28:58.616 ] 00:28:58.616 }, 00:28:58.616 { 00:28:58.616 "subsystem": "sock", 00:28:58.616 "config": [ 00:28:58.616 { 00:28:58.616 "method": "sock_set_default_impl", 00:28:58.616 "params": { 00:28:58.616 "impl_name": "posix" 00:28:58.616 } 00:28:58.616 }, 00:28:58.616 { 00:28:58.616 "method": "sock_impl_set_options", 00:28:58.616 "params": { 00:28:58.616 "impl_name": "ssl", 00:28:58.616 "recv_buf_size": 4096, 00:28:58.616 "send_buf_size": 4096, 00:28:58.616 "enable_recv_pipe": true, 00:28:58.616 "enable_quickack": false, 00:28:58.616 "enable_placement_id": 0, 00:28:58.616 "enable_zerocopy_send_server": true, 00:28:58.616 "enable_zerocopy_send_client": false, 00:28:58.616 "zerocopy_threshold": 0, 00:28:58.616 "tls_version": 0, 00:28:58.616 "enable_ktls": false 00:28:58.616 } 00:28:58.616 }, 00:28:58.616 { 00:28:58.616 "method": "sock_impl_set_options", 00:28:58.616 "params": { 00:28:58.616 "impl_name": "posix", 00:28:58.616 "recv_buf_size": 2097152, 00:28:58.616 "send_buf_size": 2097152, 00:28:58.616 "enable_recv_pipe": true, 00:28:58.616 "enable_quickack": false, 00:28:58.616 "enable_placement_id": 0, 00:28:58.616 "enable_zerocopy_send_server": true, 00:28:58.616 "enable_zerocopy_send_client": false, 00:28:58.616 "zerocopy_threshold": 0, 00:28:58.616 "tls_version": 0, 00:28:58.616 "enable_ktls": false 00:28:58.616 } 00:28:58.616 } 00:28:58.616 ] 00:28:58.616 }, 00:28:58.616 { 00:28:58.616 "subsystem": "vmd", 00:28:58.616 "config": [] 00:28:58.616 }, 00:28:58.616 { 
00:28:58.616 "subsystem": "accel", 00:28:58.616 "config": [ 00:28:58.616 { 00:28:58.616 "method": "accel_set_options", 00:28:58.616 "params": { 00:28:58.616 "small_cache_size": 128, 00:28:58.616 "large_cache_size": 16, 00:28:58.616 "task_count": 2048, 00:28:58.616 "sequence_count": 2048, 00:28:58.616 "buf_count": 2048 00:28:58.617 } 00:28:58.617 } 00:28:58.617 ] 00:28:58.617 }, 00:28:58.617 { 00:28:58.617 "subsystem": "bdev", 00:28:58.617 "config": [ 00:28:58.617 { 00:28:58.617 "method": "bdev_set_options", 00:28:58.617 "params": { 00:28:58.617 "bdev_io_pool_size": 65535, 00:28:58.617 "bdev_io_cache_size": 256, 00:28:58.617 "bdev_auto_examine": true, 00:28:58.617 "iobuf_small_cache_size": 128, 00:28:58.617 "iobuf_large_cache_size": 16, 00:28:58.617 "bdev_io_stack_size": 4096 00:28:58.617 } 00:28:58.617 }, 00:28:58.617 { 00:28:58.617 "method": "bdev_raid_set_options", 00:28:58.617 "params": { 00:28:58.617 "process_window_size_kb": 1024, 00:28:58.617 "process_max_bandwidth_mb_sec": 0 00:28:58.617 } 00:28:58.617 }, 00:28:58.617 { 00:28:58.617 "method": "bdev_iscsi_set_options", 00:28:58.617 "params": { 00:28:58.617 "timeout_sec": 30 00:28:58.617 } 00:28:58.617 }, 00:28:58.617 { 00:28:58.617 "method": "bdev_nvme_set_options", 00:28:58.617 "params": { 00:28:58.617 "action_on_timeout": "none", 00:28:58.617 "timeout_us": 0, 00:28:58.617 "timeout_admin_us": 0, 00:28:58.617 "keep_alive_timeout_ms": 10000, 00:28:58.617 "arbitration_burst": 0, 00:28:58.617 "low_priority_weight": 0, 00:28:58.617 "medium_priority_weight": 0, 00:28:58.617 "high_priority_weight": 0, 00:28:58.617 "nvme_adminq_poll_period_us": 10000, 00:28:58.617 "nvme_ioq_poll_period_us": 0, 00:28:58.617 "io_queue_requests": 0, 00:28:58.617 "delay_cmd_submit": true, 00:28:58.617 "transport_retry_count": 4, 00:28:58.617 "bdev_retry_count": 3, 00:28:58.617 "transport_ack_timeout": 0, 00:28:58.617 "ctrlr_loss_timeout_sec": 0, 00:28:58.617 "reconnect_delay_sec": 0, 00:28:58.617 "fast_io_fail_timeout_sec": 0, 00:28:58.617 "disable_auto_failback": false, 00:28:58.617 "generate_uuids": false, 00:28:58.617 "transport_tos": 0, 00:28:58.617 "nvme_error_stat": false, 00:28:58.617 "rdma_srq_size": 0, 00:28:58.617 "io_path_stat": false, 00:28:58.617 "allow_accel_sequence": false, 00:28:58.617 "rdma_max_cq_size": 0, 00:28:58.617 "rdma_cm_event_timeout_ms": 0, 00:28:58.617 "dhchap_digests": [ 00:28:58.617 "sha256", 00:28:58.617 "sha384", 00:28:58.617 "sha512" 00:28:58.617 ], 00:28:58.617 "dhchap_dhgroups": [ 00:28:58.617 "null", 00:28:58.617 "ffdhe2048", 00:28:58.617 "ffdhe3072", 00:28:58.617 "ffdhe4096", 00:28:58.617 "ffdhe6144", 00:28:58.617 "ffdhe8192" 00:28:58.617 ] 00:28:58.617 } 00:28:58.617 }, 00:28:58.617 { 00:28:58.617 "method": "bdev_nvme_set_hotplug", 00:28:58.617 "params": { 00:28:58.617 "period_us": 100000, 00:28:58.617 "enable": false 00:28:58.617 } 00:28:58.617 }, 00:28:58.617 { 00:28:58.617 "method": "bdev_malloc_create", 00:28:58.617 "params": { 00:28:58.617 "name": "malloc0", 00:28:58.617 "num_blocks": 8192, 00:28:58.617 "block_size": 4096, 00:28:58.617 "physical_block_size": 4096, 00:28:58.617 "uuid": "01db3c54-62cb-4d15-9faf-9d4b05b1a4a5", 00:28:58.617 "optimal_io_boundary": 0, 00:28:58.617 "md_size": 0, 00:28:58.617 "dif_type": 0, 00:28:58.617 "dif_is_head_of_md": false, 00:28:58.617 "dif_pi_format": 0 00:28:58.617 } 00:28:58.617 }, 00:28:58.617 { 00:28:58.617 "method": "bdev_wait_for_examine" 00:28:58.617 } 00:28:58.617 ] 00:28:58.617 }, 00:28:58.617 { 00:28:58.617 "subsystem": "scsi", 00:28:58.617 "config": null 00:28:58.617 }, 
00:28:58.617 { 00:28:58.617 "subsystem": "scheduler", 00:28:58.617 "config": [ 00:28:58.617 { 00:28:58.617 "method": "framework_set_scheduler", 00:28:58.617 "params": { 00:28:58.617 "name": "static" 00:28:58.617 } 00:28:58.617 } 00:28:58.617 ] 00:28:58.617 }, 00:28:58.617 { 00:28:58.617 "subsystem": "vhost_scsi", 00:28:58.617 "config": [] 00:28:58.617 }, 00:28:58.617 { 00:28:58.617 "subsystem": "vhost_blk", 00:28:58.617 "config": [] 00:28:58.617 }, 00:28:58.617 { 00:28:58.617 "subsystem": "ublk", 00:28:58.617 "config": [ 00:28:58.617 { 00:28:58.617 "method": "ublk_create_target", 00:28:58.617 "params": { 00:28:58.617 "cpumask": "1" 00:28:58.617 } 00:28:58.617 }, 00:28:58.617 { 00:28:58.617 "method": "ublk_start_disk", 00:28:58.617 "params": { 00:28:58.617 "bdev_name": "malloc0", 00:28:58.617 "ublk_id": 0, 00:28:58.617 "num_queues": 1, 00:28:58.617 "queue_depth": 128 00:28:58.617 } 00:28:58.617 } 00:28:58.617 ] 00:28:58.617 }, 00:28:58.617 { 00:28:58.617 "subsystem": "nbd", 00:28:58.617 "config": [] 00:28:58.617 }, 00:28:58.617 { 00:28:58.617 "subsystem": "nvmf", 00:28:58.617 "config": [ 00:28:58.617 { 00:28:58.617 "method": "nvmf_set_config", 00:28:58.617 "params": { 00:28:58.617 "discovery_filter": "match_any", 00:28:58.617 "admin_cmd_passthru": { 00:28:58.617 "identify_ctrlr": false 00:28:58.617 }, 00:28:58.617 "dhchap_digests": [ 00:28:58.617 "sha256", 00:28:58.617 "sha384", 00:28:58.617 "sha512" 00:28:58.617 ], 00:28:58.617 "dhchap_dhgroups": [ 00:28:58.617 "null", 00:28:58.617 "ffdhe2048", 00:28:58.617 "ffdhe3072", 00:28:58.617 "ffdhe4096", 00:28:58.617 "ffdhe6144", 00:28:58.617 "ffdhe8192" 00:28:58.617 ] 00:28:58.617 } 00:28:58.617 }, 00:28:58.617 { 00:28:58.617 "method": "nvmf_set_max_subsystems", 00:28:58.617 "params": { 00:28:58.617 "max_subsystems": 1024 00:28:58.617 } 00:28:58.617 }, 00:28:58.617 { 00:28:58.617 "method": "nvmf_set_crdt", 00:28:58.617 "params": { 00:28:58.617 "crdt1": 0, 00:28:58.617 "crdt2": 0, 00:28:58.617 "crdt3": 0 00:28:58.617 } 00:28:58.617 } 00:28:58.617 ] 00:28:58.617 }, 00:28:58.617 { 00:28:58.617 "subsystem": "iscsi", 00:28:58.617 "config": [ 00:28:58.617 { 00:28:58.617 "method": "iscsi_set_options", 00:28:58.617 "params": { 00:28:58.617 "node_base": "iqn.2016-06.io.spdk", 00:28:58.617 "max_sessions": 128, 00:28:58.617 "max_connections_per_session": 2, 00:28:58.617 "max_queue_depth": 64, 00:28:58.617 "default_time2wait": 2, 00:28:58.617 "default_time2retain": 20, 00:28:58.617 "first_burst_length": 8192, 00:28:58.617 "immediate_data": true, 00:28:58.617 "allow_duplicated_isid": false, 00:28:58.617 "error_recovery_level": 0, 00:28:58.617 "nop_timeout": 60, 00:28:58.617 "nop_in_interval": 30, 00:28:58.617 "disable_chap": false, 00:28:58.617 "require_chap": false, 00:28:58.617 "mutual_chap": false, 00:28:58.617 "chap_group": 0, 00:28:58.617 "max_large_datain_per_connection": 64, 00:28:58.617 "max_r2t_per_connection": 4, 00:28:58.617 "pdu_pool_size": 36864, 00:28:58.617 "immediate_data_pool_size": 16384, 00:28:58.617 "data_out_pool_size": 2048 00:28:58.617 } 00:28:58.617 } 00:28:58.617 ] 00:28:58.617 } 00:28:58.617 ] 00:28:58.617 }' 00:28:58.617 20:26:53 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:58.617 20:26:53 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:28:58.617 [2024-10-01 20:26:53.488349] Starting SPDK v25.01-pre git sha1 0c2005fb5 / DPDK 24.03.0 initialization... 
00:28:58.617 [2024-10-01 20:26:53.488476] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71732 ] 00:28:58.617 [2024-10-01 20:26:53.634482] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:58.617 [2024-10-01 20:26:53.818012] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:28:59.555 [2024-10-01 20:26:54.655715] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:28:59.555 [2024-10-01 20:26:54.656415] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:28:59.555 [2024-10-01 20:26:54.663843] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:28:59.555 [2024-10-01 20:26:54.663932] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:28:59.555 [2024-10-01 20:26:54.663939] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:28:59.555 [2024-10-01 20:26:54.663944] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:28:59.555 [2024-10-01 20:26:54.672777] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:28:59.555 [2024-10-01 20:26:54.672805] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:28:59.555 [2024-10-01 20:26:54.679735] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:28:59.555 [2024-10-01 20:26:54.679841] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:28:59.555 [2024-10-01 20:26:54.696717] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:28:59.555 20:26:54 ublk.test_save_ublk_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:59.555 20:26:54 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # return 0 00:28:59.555 20:26:54 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:28:59.555 20:26:54 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:28:59.555 20:26:54 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:59.555 20:26:54 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:28:59.555 20:26:54 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:59.814 20:26:54 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:28:59.814 20:26:54 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:28:59.814 20:26:54 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 71732 00:28:59.814 20:26:54 ublk.test_save_ublk_config -- common/autotest_common.sh@950 -- # '[' -z 71732 ']' 00:28:59.814 20:26:54 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # kill -0 71732 00:28:59.814 20:26:54 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # uname 00:28:59.814 20:26:54 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:59.814 20:26:54 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71732 00:28:59.814 20:26:54 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:59.814 killing process with pid 71732 00:28:59.814 
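killprocess, whose expansion continues below, is essentially a liveness probe plus SIGTERM-and-reap. Its effective body, with pid standing in for the harness variable (a sketch):

    kill -0 "$pid"                  # still alive?
    kill "$pid" && wait "$pid"      # SIGTERM the reactor process, then reap it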
20:26:54 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:59.814 20:26:54 ublk.test_save_ublk_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71732' 00:28:59.814 20:26:54 ublk.test_save_ublk_config -- common/autotest_common.sh@969 -- # kill 71732 00:28:59.814 20:26:54 ublk.test_save_ublk_config -- common/autotest_common.sh@974 -- # wait 71732 00:29:00.752 [2024-10-01 20:26:55.817828] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:29:00.752 [2024-10-01 20:26:55.855741] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:29:00.752 [2024-10-01 20:26:55.855869] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:29:00.752 [2024-10-01 20:26:55.864734] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:29:00.752 [2024-10-01 20:26:55.864798] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:29:00.752 [2024-10-01 20:26:55.864805] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:29:00.752 [2024-10-01 20:26:55.864827] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:29:00.752 [2024-10-01 20:26:55.864956] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:29:03.300 20:26:57 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:29:03.301 00:29:03.301 real 0m15.369s 00:29:03.301 user 0m4.872s 00:29:03.301 sys 0m3.523s 00:29:03.301 20:26:57 ublk.test_save_ublk_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:03.301 20:26:57 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:29:03.301 ************************************ 00:29:03.301 END TEST test_save_ublk_config 00:29:03.301 ************************************ 00:29:03.301 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:03.301 20:26:57 ublk -- ublk/ublk.sh@139 -- # spdk_pid=71817 00:29:03.301 20:26:57 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:29:03.301 20:26:57 ublk -- ublk/ublk.sh@141 -- # waitforlisten 71817 00:29:03.301 20:26:57 ublk -- common/autotest_common.sh@831 -- # '[' -z 71817 ']' 00:29:03.301 20:26:57 ublk -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:03.301 20:26:57 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:29:03.301 20:26:57 ublk -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:03.301 20:26:57 ublk -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:03.301 20:26:57 ublk -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:03.301 20:26:57 ublk -- common/autotest_common.sh@10 -- # set +x 00:29:03.301 [2024-10-01 20:26:58.062392] Starting SPDK v25.01-pre git sha1 0c2005fb5 / DPDK 24.03.0 initialization... 
00:29:03.301 [2024-10-01 20:26:58.062494] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71817 ] 00:29:03.301 [2024-10-01 20:26:58.202756] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:03.301 [2024-10-01 20:26:58.442863] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:29:03.301 [2024-10-01 20:26:58.443095] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:29:04.245 20:26:59 ublk -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:04.245 20:26:59 ublk -- common/autotest_common.sh@864 -- # return 0 00:29:04.245 20:26:59 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:29:04.245 20:26:59 ublk -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:29:04.245 20:26:59 ublk -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:04.245 20:26:59 ublk -- common/autotest_common.sh@10 -- # set +x 00:29:04.245 ************************************ 00:29:04.245 START TEST test_create_ublk 00:29:04.245 ************************************ 00:29:04.245 20:26:59 ublk.test_create_ublk -- common/autotest_common.sh@1125 -- # test_create_ublk 00:29:04.245 20:26:59 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:29:04.245 20:26:59 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:04.245 20:26:59 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:29:04.245 [2024-10-01 20:26:59.132712] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:29:04.245 [2024-10-01 20:26:59.134015] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:29:04.245 20:26:59 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:04.245 20:26:59 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:29:04.245 20:26:59 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:29:04.245 20:26:59 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:04.245 20:26:59 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:29:04.245 20:26:59 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:04.245 20:26:59 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:29:04.245 20:26:59 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:29:04.245 20:26:59 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:04.245 20:26:59 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:29:04.245 [2024-10-01 20:26:59.291842] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:29:04.245 [2024-10-01 20:26:59.292156] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:29:04.245 [2024-10-01 20:26:59.292172] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:29:04.245 [2024-10-01 20:26:59.292178] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:29:04.245 [2024-10-01 20:26:59.299752] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:29:04.245 [2024-10-01 20:26:59.299775] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:29:04.245 
[2024-10-01 20:26:59.307726] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:29:04.245 [2024-10-01 20:26:59.308267] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:29:04.245 [2024-10-01 20:26:59.318739] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:29:04.245 20:26:59 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:04.245 20:26:59 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:29:04.245 20:26:59 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:29:04.245 20:26:59 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:29:04.245 20:26:59 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:04.245 20:26:59 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:29:04.245 20:26:59 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:04.245 20:26:59 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:29:04.245 { 00:29:04.245 "ublk_device": "/dev/ublkb0", 00:29:04.245 "id": 0, 00:29:04.245 "queue_depth": 512, 00:29:04.245 "num_queues": 4, 00:29:04.245 "bdev_name": "Malloc0" 00:29:04.245 } 00:29:04.245 ]' 00:29:04.245 20:26:59 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:29:04.245 20:26:59 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:29:04.245 20:26:59 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:29:04.245 20:26:59 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:29:04.245 20:26:59 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:29:04.245 20:26:59 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:29:04.245 20:26:59 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:29:04.507 20:26:59 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:29:04.507 20:26:59 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:29:04.507 20:26:59 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:29:04.507 20:26:59 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:29:04.507 20:26:59 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:29:04.507 20:26:59 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:29:04.507 20:26:59 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:29:04.507 20:26:59 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:29:04.507 20:26:59 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:29:04.507 20:26:59 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:29:04.507 20:26:59 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:29:04.507 20:26:59 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:29:04.507 20:26:59 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:29:04.507 20:26:59 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 
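fio will warn below that the separate read-back phase never starts: --time_based spends the entire 10 s runtime in the write phase. A variant that writes the full 128 MiB region once and then read-verifies it would simply drop the time bound (a sketch):

    fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 \
        --rw=write --direct=1 --do_verify=1 \
        --verify=pattern --verify_pattern=0xcc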
00:29:04.507 20:26:59 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:29:04.507 fio: verification read phase will never start because write phase uses all of runtime 00:29:04.507 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:29:04.507 fio-3.35 00:29:04.507 Starting 1 process 00:29:16.744 00:29:16.744 fio_test: (groupid=0, jobs=1): err= 0: pid=71864: Tue Oct 1 20:27:09 2024 00:29:16.744 write: IOPS=18.8k, BW=73.5MiB/s (77.1MB/s)(735MiB/10001msec); 0 zone resets 00:29:16.744 clat (usec): min=34, max=3956, avg=52.25, stdev=86.06 00:29:16.744 lat (usec): min=34, max=3957, avg=52.76, stdev=86.08 00:29:16.744 clat percentiles (usec): 00:29:16.744 | 1.00th=[ 40], 5.00th=[ 42], 10.00th=[ 44], 20.00th=[ 45], 00:29:16.744 | 30.00th=[ 46], 40.00th=[ 47], 50.00th=[ 48], 60.00th=[ 49], 00:29:16.744 | 70.00th=[ 51], 80.00th=[ 53], 90.00th=[ 58], 95.00th=[ 63], 00:29:16.744 | 99.00th=[ 75], 99.50th=[ 83], 99.90th=[ 1467], 99.95th=[ 2671], 00:29:16.744 | 99.99th=[ 3523] 00:29:16.744 bw ( KiB/s): min=65165, max=80112, per=99.88%, avg=75173.32, stdev=3059.71, samples=19 00:29:16.744 iops : min=16291, max=20028, avg=18793.32, stdev=764.97, samples=19 00:29:16.744 lat (usec) : 50=68.06%, 100=31.62%, 250=0.15%, 500=0.03%, 750=0.01% 00:29:16.744 lat (usec) : 1000=0.01% 00:29:16.744 lat (msec) : 2=0.05%, 4=0.07% 00:29:16.744 cpu : usr=3.55%, sys=15.70%, ctx=188183, majf=0, minf=796 00:29:16.744 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:16.744 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:16.744 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:16.744 issued rwts: total=0,188181,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:16.744 latency : target=0, window=0, percentile=100.00%, depth=1 00:29:16.744 00:29:16.744 Run status group 0 (all jobs): 00:29:16.744 WRITE: bw=73.5MiB/s (77.1MB/s), 73.5MiB/s-73.5MiB/s (77.1MB/s-77.1MB/s), io=735MiB (771MB), run=10001-10001msec 00:29:16.744 00:29:16.744 Disk stats (read/write): 00:29:16.744 ublkb0: ios=0/186277, merge=0/0, ticks=0/8070, in_queue=8070, util=99.08% 00:29:16.744 20:27:09 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:29:16.744 20:27:09 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:16.744 20:27:09 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:29:16.744 [2024-10-01 20:27:09.744701] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:29:16.744 [2024-10-01 20:27:09.786251] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:29:16.744 [2024-10-01 20:27:09.787134] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:29:16.744 [2024-10-01 20:27:09.792745] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:29:16.744 [2024-10-01 20:27:09.792996] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:29:16.744 [2024-10-01 20:27:09.793013] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:29:16.744 20:27:09 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:16.744 20:27:09 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd ublk_stop_disk 0 
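NOT inverts the wrapped command's exit status, so the test passes only if this second ublk_stop_disk fails now that device 0 is gone. Outside the harness the same assertion reads (a sketch):

    if ./scripts/rpc.py ublk_stop_disk 0; then
        echo 'stopping an already-stopped disk unexpectedly succeeded' >&2
        exit 1
    fi   # expected: JSON-RPC error -19, 'No such device', as in the response below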
00:29:16.744 20:27:09 ublk.test_create_ublk -- common/autotest_common.sh@650 -- # local es=0 00:29:16.744 20:27:09 ublk.test_create_ublk -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:29:16.744 20:27:09 ublk.test_create_ublk -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:29:16.744 20:27:09 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:16.744 20:27:09 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:29:16.744 20:27:09 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:16.744 20:27:09 ublk.test_create_ublk -- common/autotest_common.sh@653 -- # rpc_cmd ublk_stop_disk 0 00:29:16.744 20:27:09 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:16.744 20:27:09 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:29:16.744 [2024-10-01 20:27:09.808808] ublk.c:1087:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:29:16.744 request: 00:29:16.744 { 00:29:16.744 "ublk_id": 0, 00:29:16.744 "method": "ublk_stop_disk", 00:29:16.744 "req_id": 1 00:29:16.744 } 00:29:16.744 Got JSON-RPC error response 00:29:16.744 response: 00:29:16.744 { 00:29:16.744 "code": -19, 00:29:16.744 "message": "No such device" 00:29:16.744 } 00:29:16.745 20:27:09 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:29:16.745 20:27:09 ublk.test_create_ublk -- common/autotest_common.sh@653 -- # es=1 00:29:16.745 20:27:09 ublk.test_create_ublk -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:29:16.745 20:27:09 ublk.test_create_ublk -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:29:16.745 20:27:09 ublk.test_create_ublk -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:29:16.745 20:27:09 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:29:16.745 20:27:09 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:16.745 20:27:09 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:29:16.745 [2024-10-01 20:27:09.824815] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:29:16.745 [2024-10-01 20:27:09.826830] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:29:16.745 [2024-10-01 20:27:09.826877] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:29:16.745 20:27:09 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:16.745 20:27:09 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:29:16.745 20:27:09 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:16.745 20:27:09 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:29:16.745 20:27:10 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:16.745 20:27:10 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:29:16.745 20:27:10 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:29:16.745 20:27:10 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:16.745 20:27:10 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:29:16.745 20:27:10 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:16.745 20:27:10 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:29:16.745 20:27:10 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:29:16.745 20:27:10 ublk.test_create_ublk -- 
lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:29:16.745 20:27:10 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:29:16.745 20:27:10 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:16.745 20:27:10 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:29:16.745 20:27:10 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:16.745 20:27:10 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:29:16.745 20:27:10 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:29:16.745 20:27:10 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:29:16.745 00:29:16.745 real 0m11.173s 00:29:16.745 user 0m0.660s 00:29:16.745 sys 0m1.656s 00:29:16.745 20:27:10 ublk.test_create_ublk -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:16.745 ************************************ 00:29:16.745 END TEST test_create_ublk 00:29:16.745 20:27:10 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:29:16.745 ************************************ 00:29:16.745 20:27:10 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:29:16.745 20:27:10 ublk -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:29:16.745 20:27:10 ublk -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:16.745 20:27:10 ublk -- common/autotest_common.sh@10 -- # set +x 00:29:16.745 ************************************ 00:29:16.745 START TEST test_create_multi_ublk 00:29:16.745 ************************************ 00:29:16.745 20:27:10 ublk.test_create_multi_ublk -- common/autotest_common.sh@1125 -- # test_create_multi_ublk 00:29:16.745 20:27:10 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:29:16.745 20:27:10 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:16.745 20:27:10 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:29:16.745 [2024-10-01 20:27:10.348710] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:29:16.745 [2024-10-01 20:27:10.349974] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:29:16.745 20:27:10 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:16.745 20:27:10 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:29:16.745 20:27:10 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:29:16.745 20:27:10 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:29:16.745 20:27:10 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:29:16.745 20:27:10 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:16.745 20:27:10 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:29:16.745 20:27:10 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:16.745 20:27:10 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:29:16.745 20:27:10 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:29:16.745 20:27:10 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:16.745 20:27:10 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:29:16.745 [2024-10-01 20:27:10.588841] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 
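test_create_multi_ublk repeats the create path for four devices (MAX_DEV_ID=3 in ublk.sh). The per-device step being traced here reduces to (a sketch):

    for i in 0 1 2 3; do
        ./scripts/rpc.py bdev_malloc_create -b Malloc$i 128 4096   # 128 MiB backing bdev
        ./scripts/rpc.py ublk_start_disk Malloc$i $i -q 4 -d 512   # appears as /dev/ublkb$i
    done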
00:29:16.745 [2024-10-01 20:27:10.589166] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:29:16.745 [2024-10-01 20:27:10.589177] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:29:16.745 [2024-10-01 20:27:10.589187] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:29:16.745 [2024-10-01 20:27:10.612728] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:29:16.745 [2024-10-01 20:27:10.612760] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:29:16.745 [2024-10-01 20:27:10.624721] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:29:16.745 [2024-10-01 20:27:10.625288] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:29:16.745 [2024-10-01 20:27:10.664727] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:29:16.745 20:27:10 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:16.745 20:27:10 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:29:16.745 20:27:10 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:29:16.745 20:27:10 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:29:16.745 20:27:10 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:16.745 20:27:10 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:29:16.745 20:27:10 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:16.745 20:27:10 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:29:16.745 20:27:10 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:29:16.745 20:27:10 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:16.745 20:27:10 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:29:16.745 [2024-10-01 20:27:10.872833] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:29:16.745 [2024-10-01 20:27:10.873140] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:29:16.745 [2024-10-01 20:27:10.873149] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:29:16.745 [2024-10-01 20:27:10.873162] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:29:16.745 [2024-10-01 20:27:10.880742] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:29:16.745 [2024-10-01 20:27:10.880766] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:29:16.745 [2024-10-01 20:27:10.888726] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:29:16.745 [2024-10-01 20:27:10.889278] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:29:16.745 [2024-10-01 20:27:10.892194] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:29:16.745 20:27:10 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:16.745 20:27:10 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:29:16.745 20:27:10 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:29:16.745 20:27:10 
ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:29:16.745 20:27:10 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:16.745 20:27:10 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:29:16.745 20:27:11 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:16.745 20:27:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:29:16.745 20:27:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:29:16.745 20:27:11 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:16.745 20:27:11 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:29:16.745 [2024-10-01 20:27:11.044842] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:29:16.745 [2024-10-01 20:27:11.045160] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:29:16.745 [2024-10-01 20:27:11.045171] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:29:16.745 [2024-10-01 20:27:11.045178] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:29:16.745 [2024-10-01 20:27:11.052759] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:29:16.745 [2024-10-01 20:27:11.052792] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:29:16.745 [2024-10-01 20:27:11.060743] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:29:16.745 [2024-10-01 20:27:11.061322] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:29:16.745 [2024-10-01 20:27:11.067780] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:29:16.745 20:27:11 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:16.745 20:27:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:29:16.745 20:27:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:29:16.745 20:27:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:29:16.745 20:27:11 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:16.745 20:27:11 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:29:16.745 20:27:11 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:16.745 20:27:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:29:16.746 20:27:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:29:16.746 20:27:11 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:16.746 20:27:11 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:29:16.746 [2024-10-01 20:27:11.232853] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:29:16.746 [2024-10-01 20:27:11.233180] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:29:16.746 [2024-10-01 20:27:11.233194] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:29:16.746 [2024-10-01 20:27:11.233200] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:29:16.746 [2024-10-01 
20:27:11.240742] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:29:16.746 [2024-10-01 20:27:11.240770] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:29:16.746 [2024-10-01 20:27:11.248739] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:29:16.746 [2024-10-01 20:27:11.249319] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:29:16.746 [2024-10-01 20:27:11.253255] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:29:16.746 20:27:11 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:16.746 20:27:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:29:16.746 20:27:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:29:16.746 20:27:11 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:16.746 20:27:11 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:29:16.746 20:27:11 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:16.746 20:27:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:29:16.746 { 00:29:16.746 "ublk_device": "/dev/ublkb0", 00:29:16.746 "id": 0, 00:29:16.746 "queue_depth": 512, 00:29:16.746 "num_queues": 4, 00:29:16.746 "bdev_name": "Malloc0" 00:29:16.746 }, 00:29:16.746 { 00:29:16.746 "ublk_device": "/dev/ublkb1", 00:29:16.746 "id": 1, 00:29:16.746 "queue_depth": 512, 00:29:16.746 "num_queues": 4, 00:29:16.746 "bdev_name": "Malloc1" 00:29:16.746 }, 00:29:16.746 { 00:29:16.746 "ublk_device": "/dev/ublkb2", 00:29:16.746 "id": 2, 00:29:16.746 "queue_depth": 512, 00:29:16.746 "num_queues": 4, 00:29:16.746 "bdev_name": "Malloc2" 00:29:16.746 }, 00:29:16.746 { 00:29:16.746 "ublk_device": "/dev/ublkb3", 00:29:16.746 "id": 3, 00:29:16.746 "queue_depth": 512, 00:29:16.746 "num_queues": 4, 00:29:16.746 "bdev_name": "Malloc3" 00:29:16.746 } 00:29:16.746 ]' 00:29:16.746 20:27:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:29:16.746 20:27:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:29:16.746 20:27:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:29:16.746 20:27:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:29:16.746 20:27:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:29:16.746 20:27:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:29:16.746 20:27:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:29:16.746 20:27:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:29:16.746 20:27:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:29:16.746 20:27:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:29:16.746 20:27:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:29:16.746 20:27:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:29:16.746 20:27:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:29:16.746 20:27:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:29:16.746 20:27:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = \/\d\e\v\/\u\b\l\k\b\1 ]] 
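For reference, the create-and-verify loop being replayed in this output condenses to the sketch below. The rpc helper path and the 0-3 device range (from seq 0 3) are taken from this log; $rpc is shorthand introduced only for the sketch.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
for i in 0 1 2 3; do
  "$rpc" bdev_malloc_create -b "Malloc$i" 128 4096   # 128 MB malloc bdev, 4096-byte blocks
  "$rpc" ublk_start_disk "Malloc$i" "$i" -q 4 -d 512 # expose as /dev/ublkb$i: 4 queues, queue depth 512
done
"$rpc" ublk_get_disks                                # returns the JSON array checked field by field above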
00:29:16.746 20:27:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:29:16.746 20:27:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:29:16.746 20:27:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:29:16.746 20:27:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:29:16.746 20:27:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:29:16.746 20:27:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:29:16.746 20:27:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:29:16.746 20:27:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:29:16.746 20:27:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:29:16.746 20:27:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:29:16.746 20:27:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:29:16.746 20:27:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:29:16.746 20:27:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:29:16.746 20:27:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:29:16.746 20:27:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:29:16.746 20:27:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:29:16.746 20:27:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:29:16.746 20:27:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:29:16.746 20:27:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:29:16.746 20:27:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:29:16.746 20:27:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:29:16.746 20:27:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:29:16.746 20:27:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:29:16.746 20:27:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:29:16.746 20:27:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:29:16.746 20:27:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:29:16.746 20:27:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:29:16.746 20:27:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:29:16.746 20:27:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:29:16.746 20:27:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:29:16.746 20:27:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:29:16.746 20:27:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:29:16.746 20:27:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:29:16.746 20:27:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:29:16.746 20:27:11 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:16.746 20:27:11 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:29:16.746 [2024-10-01 20:27:11.916815] ublk.c: 469:ublk_ctrl_cmd_submit: 
*DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:29:17.006 [2024-10-01 20:27:11.964748] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:29:17.006 [2024-10-01 20:27:11.965517] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:29:17.006 [2024-10-01 20:27:11.972735] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:29:17.006 [2024-10-01 20:27:11.972996] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:29:17.006 [2024-10-01 20:27:11.973005] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:29:17.006 20:27:11 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:17.006 20:27:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:29:17.006 20:27:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:29:17.006 20:27:11 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:17.006 20:27:11 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:29:17.006 [2024-10-01 20:27:11.988791] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:29:17.006 [2024-10-01 20:27:12.020210] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:29:17.006 [2024-10-01 20:27:12.021266] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:29:17.006 [2024-10-01 20:27:12.027733] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:29:17.006 [2024-10-01 20:27:12.027977] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:29:17.006 [2024-10-01 20:27:12.027990] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:29:17.006 20:27:12 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:17.006 20:27:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:29:17.006 20:27:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:29:17.006 20:27:12 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:17.006 20:27:12 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:29:17.006 [2024-10-01 20:27:12.042804] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:29:17.006 [2024-10-01 20:27:12.084739] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:29:17.006 [2024-10-01 20:27:12.085442] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:29:17.006 [2024-10-01 20:27:12.090717] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:29:17.006 [2024-10-01 20:27:12.090986] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:29:17.006 [2024-10-01 20:27:12.091001] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:29:17.006 20:27:12 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:17.006 20:27:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:29:17.006 20:27:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:29:17.006 20:27:12 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:17.006 20:27:12 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 
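The teardown being exercised around this point (stop each disk, destroy the target, delete the bdevs) reduces to the following sketch; the -t 120 timeout on ublk_destroy_target is copied from this log and $rpc is the same shorthand as above.
for i in 0 1 2 3; do
  "$rpc" ublk_stop_disk "$i"        # drives STOP_DEV then DEL_DEV for ublk$i
done
"$rpc" -t 120 ublk_destroy_target   # longer RPC timeout: target shutdown can be slow
for i in 0 1 2 3; do
  "$rpc" bdev_malloc_delete "Malloc$i"
done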
00:29:17.006 [2024-10-01 20:27:12.098816] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:29:17.006 [2024-10-01 20:27:12.131194] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:29:17.006 [2024-10-01 20:27:12.132148] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:29:17.006 [2024-10-01 20:27:12.143760] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:29:17.006 [2024-10-01 20:27:12.144022] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:29:17.006 [2024-10-01 20:27:12.144036] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:29:17.006 20:27:12 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:17.006 20:27:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:29:17.265 [2024-10-01 20:27:12.366792] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:29:17.265 [2024-10-01 20:27:12.368669] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:29:17.265 [2024-10-01 20:27:12.368713] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:29:17.265 20:27:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:29:17.265 20:27:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:29:17.265 20:27:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:29:17.265 20:27:12 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:17.265 20:27:12 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:29:17.835 20:27:12 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:17.835 20:27:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:29:17.835 20:27:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:29:17.835 20:27:12 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:17.835 20:27:12 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:29:18.146 20:27:13 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:18.146 20:27:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:29:18.146 20:27:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:29:18.146 20:27:13 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:18.146 20:27:13 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:29:18.146 20:27:13 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:18.146 20:27:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:29:18.146 20:27:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:29:18.146 20:27:13 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:18.146 20:27:13 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:29:18.407 20:27:13 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:18.407 20:27:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:29:18.407 20:27:13 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # 
rpc_cmd bdev_get_bdevs 00:29:18.407 20:27:13 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:18.407 20:27:13 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:29:18.407 20:27:13 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:18.407 20:27:13 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:29:18.407 20:27:13 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:29:18.407 20:27:13 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:29:18.407 20:27:13 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:29:18.407 20:27:13 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:18.407 20:27:13 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:29:18.407 20:27:13 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:18.407 20:27:13 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:29:18.407 20:27:13 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:29:18.407 ************************************ 00:29:18.407 END TEST test_create_multi_ublk 00:29:18.407 ************************************ 00:29:18.407 20:27:13 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:29:18.407 00:29:18.407 real 0m3.261s 00:29:18.407 user 0m0.833s 00:29:18.407 sys 0m0.149s 00:29:18.407 20:27:13 ublk.test_create_multi_ublk -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:18.407 20:27:13 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:29:18.667 20:27:13 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:29:18.667 20:27:13 ublk -- ublk/ublk.sh@147 -- # cleanup 00:29:18.667 20:27:13 ublk -- ublk/ublk.sh@130 -- # killprocess 71817 00:29:18.667 20:27:13 ublk -- common/autotest_common.sh@950 -- # '[' -z 71817 ']' 00:29:18.667 20:27:13 ublk -- common/autotest_common.sh@954 -- # kill -0 71817 00:29:18.667 20:27:13 ublk -- common/autotest_common.sh@955 -- # uname 00:29:18.667 20:27:13 ublk -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:18.667 20:27:13 ublk -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71817 00:29:18.667 killing process with pid 71817 00:29:18.667 20:27:13 ublk -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:18.667 20:27:13 ublk -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:18.667 20:27:13 ublk -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71817' 00:29:18.667 20:27:13 ublk -- common/autotest_common.sh@969 -- # kill 71817 00:29:18.667 20:27:13 ublk -- common/autotest_common.sh@974 -- # wait 71817 00:29:19.237 [2024-10-01 20:27:14.215989] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:29:19.237 [2024-10-01 20:27:14.216043] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:29:20.212 00:29:20.212 real 0m32.893s 00:29:20.212 user 0m35.636s 00:29:20.212 sys 0m10.408s 00:29:20.212 20:27:15 ublk -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:20.212 ************************************ 00:29:20.212 END TEST ublk 00:29:20.212 ************************************ 00:29:20.212 20:27:15 ublk -- common/autotest_common.sh@10 -- # set +x 00:29:20.212 20:27:15 -- spdk/autotest.sh@248 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:29:20.212 20:27:15 -- common/autotest_common.sh@1101 -- # '[' 2 -le 
1 ']' 00:29:20.212 20:27:15 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:20.212 20:27:15 -- common/autotest_common.sh@10 -- # set +x 00:29:20.212 ************************************ 00:29:20.212 START TEST ublk_recovery 00:29:20.212 ************************************ 00:29:20.212 20:27:15 ublk_recovery -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:29:20.213 * Looking for test storage... 00:29:20.473 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:29:20.473 20:27:15 ublk_recovery -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:29:20.473 20:27:15 ublk_recovery -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:29:20.473 20:27:15 ublk_recovery -- common/autotest_common.sh@1681 -- # lcov --version 00:29:20.473 20:27:15 ublk_recovery -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:29:20.473 20:27:15 ublk_recovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:20.473 20:27:15 ublk_recovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:20.473 20:27:15 ublk_recovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:20.473 20:27:15 ublk_recovery -- scripts/common.sh@336 -- # IFS=.-: 00:29:20.473 20:27:15 ublk_recovery -- scripts/common.sh@336 -- # read -ra ver1 00:29:20.473 20:27:15 ublk_recovery -- scripts/common.sh@337 -- # IFS=.-: 00:29:20.473 20:27:15 ublk_recovery -- scripts/common.sh@337 -- # read -ra ver2 00:29:20.473 20:27:15 ublk_recovery -- scripts/common.sh@338 -- # local 'op=<' 00:29:20.473 20:27:15 ublk_recovery -- scripts/common.sh@340 -- # ver1_l=2 00:29:20.473 20:27:15 ublk_recovery -- scripts/common.sh@341 -- # ver2_l=1 00:29:20.474 20:27:15 ublk_recovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:20.474 20:27:15 ublk_recovery -- scripts/common.sh@344 -- # case "$op" in 00:29:20.474 20:27:15 ublk_recovery -- scripts/common.sh@345 -- # : 1 00:29:20.474 20:27:15 ublk_recovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:20.474 20:27:15 ublk_recovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:20.474 20:27:15 ublk_recovery -- scripts/common.sh@365 -- # decimal 1 00:29:20.474 20:27:15 ublk_recovery -- scripts/common.sh@353 -- # local d=1 00:29:20.474 20:27:15 ublk_recovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:20.474 20:27:15 ublk_recovery -- scripts/common.sh@355 -- # echo 1 00:29:20.474 20:27:15 ublk_recovery -- scripts/common.sh@365 -- # ver1[v]=1 00:29:20.474 20:27:15 ublk_recovery -- scripts/common.sh@366 -- # decimal 2 00:29:20.474 20:27:15 ublk_recovery -- scripts/common.sh@353 -- # local d=2 00:29:20.474 20:27:15 ublk_recovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:20.474 20:27:15 ublk_recovery -- scripts/common.sh@355 -- # echo 2 00:29:20.474 20:27:15 ublk_recovery -- scripts/common.sh@366 -- # ver2[v]=2 00:29:20.474 20:27:15 ublk_recovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:20.474 20:27:15 ublk_recovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:20.474 20:27:15 ublk_recovery -- scripts/common.sh@368 -- # return 0 00:29:20.474 20:27:15 ublk_recovery -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:20.474 20:27:15 ublk_recovery -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:29:20.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:20.474 --rc genhtml_branch_coverage=1 00:29:20.474 --rc genhtml_function_coverage=1 00:29:20.474 --rc genhtml_legend=1 00:29:20.474 --rc geninfo_all_blocks=1 00:29:20.474 --rc geninfo_unexecuted_blocks=1 00:29:20.474 00:29:20.474 ' 00:29:20.474 20:27:15 ublk_recovery -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:29:20.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:20.474 --rc genhtml_branch_coverage=1 00:29:20.474 --rc genhtml_function_coverage=1 00:29:20.474 --rc genhtml_legend=1 00:29:20.474 --rc geninfo_all_blocks=1 00:29:20.474 --rc geninfo_unexecuted_blocks=1 00:29:20.474 00:29:20.474 ' 00:29:20.474 20:27:15 ublk_recovery -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:29:20.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:20.474 --rc genhtml_branch_coverage=1 00:29:20.474 --rc genhtml_function_coverage=1 00:29:20.474 --rc genhtml_legend=1 00:29:20.474 --rc geninfo_all_blocks=1 00:29:20.474 --rc geninfo_unexecuted_blocks=1 00:29:20.474 00:29:20.474 ' 00:29:20.474 20:27:15 ublk_recovery -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:29:20.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:20.474 --rc genhtml_branch_coverage=1 00:29:20.474 --rc genhtml_function_coverage=1 00:29:20.474 --rc genhtml_legend=1 00:29:20.474 --rc geninfo_all_blocks=1 00:29:20.474 --rc geninfo_unexecuted_blocks=1 00:29:20.474 00:29:20.474 ' 00:29:20.474 20:27:15 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:29:20.474 20:27:15 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:29:20.474 20:27:15 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:29:20.474 20:27:15 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:29:20.474 20:27:15 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:29:20.474 20:27:15 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:29:20.474 20:27:15 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:29:20.474 20:27:15 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:29:20.474 20:27:15 ublk_recovery -- lvol/common.sh@14 
-- # LVS_DEFAULT_CAPACITY=130023424 00:29:20.474 20:27:15 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:29:20.474 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:20.474 20:27:15 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=72215 00:29:20.474 20:27:15 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:29:20.474 20:27:15 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 72215 00:29:20.474 20:27:15 ublk_recovery -- common/autotest_common.sh@831 -- # '[' -z 72215 ']' 00:29:20.474 20:27:15 ublk_recovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:20.474 20:27:15 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:29:20.474 20:27:15 ublk_recovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:20.474 20:27:15 ublk_recovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:20.474 20:27:15 ublk_recovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:20.474 20:27:15 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:29:20.474 [2024-10-01 20:27:15.573669] Starting SPDK v25.01-pre git sha1 0c2005fb5 / DPDK 24.03.0 initialization... 00:29:20.474 [2024-10-01 20:27:15.573814] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72215 ] 00:29:20.735 [2024-10-01 20:27:15.721576] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:20.735 [2024-10-01 20:27:15.879791] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:29:20.735 [2024-10-01 20:27:15.880039] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:29:21.676 20:27:16 ublk_recovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:21.676 20:27:16 ublk_recovery -- common/autotest_common.sh@864 -- # return 0 00:29:21.676 20:27:16 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:29:21.676 20:27:16 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:21.676 20:27:16 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:29:21.676 [2024-10-01 20:27:16.544711] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:29:21.676 [2024-10-01 20:27:16.545973] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:29:21.676 20:27:16 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:21.676 20:27:16 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:29:21.676 20:27:16 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:21.676 20:27:16 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:29:21.676 malloc0 00:29:21.676 20:27:16 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:21.676 20:27:16 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:29:21.676 20:27:16 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:21.676 20:27:16 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:29:21.676 [2024-10-01 20:27:16.632849] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 
2 queue_depth 128 00:29:21.676 [2024-10-01 20:27:16.632941] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:29:21.676 [2024-10-01 20:27:16.632951] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:29:21.676 [2024-10-01 20:27:16.632956] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:29:21.676 [2024-10-01 20:27:16.641806] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:29:21.676 [2024-10-01 20:27:16.641831] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:29:21.676 [2024-10-01 20:27:16.648724] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:29:21.676 [2024-10-01 20:27:16.648855] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:29:21.676 [2024-10-01 20:27:16.670756] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:29:21.676 1 00:29:21.676 20:27:16 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:21.676 20:27:16 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:29:22.617 20:27:17 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:29:22.617 20:27:17 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=72250 00:29:22.618 20:27:17 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:29:22.618 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:29:22.618 fio-3.35 00:29:22.618 Starting 1 process 00:29:27.913 20:27:22 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 72215 00:29:27.913 20:27:22 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:29:33.201 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 72215 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:29:33.201 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:33.201 20:27:27 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=72361 00:29:33.201 20:27:27 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:29:33.201 20:27:27 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:29:33.201 20:27:27 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 72361 00:29:33.201 20:27:27 ublk_recovery -- common/autotest_common.sh@831 -- # '[' -z 72361 ']' 00:29:33.201 20:27:27 ublk_recovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:33.201 20:27:27 ublk_recovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:33.201 20:27:27 ublk_recovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:33.201 20:27:27 ublk_recovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:33.201 20:27:27 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:29:33.201 [2024-10-01 20:27:27.759644] Starting SPDK v25.01-pre git sha1 0c2005fb5 / DPDK 24.03.0 initialization... 
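The fault-injection step just logged boils down to the sketch below. The fio flags, taskset mask, and spdk_tgt arguments are verbatim from this log; fio_pid and spdk_pid are shorthand for the pids seen here (72250 and 72215, with 72361 as the restarted target).
taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 \
    --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 &
fio_pid=$!                                  # 72250 in this run
sleep 5
kill -9 "$spdk_pid"                         # hard-kill the ublk target mid-I/O (72215)
sleep 5
"$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk &   # restart the target as a new process (72361)
spdk_pid=$!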
00:29:33.201 [2024-10-01 20:27:27.759793] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72361 ] 00:29:33.201 [2024-10-01 20:27:27.903552] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:33.201 [2024-10-01 20:27:28.080644] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:29:33.201 [2024-10-01 20:27:28.080646] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:29:33.773 20:27:28 ublk_recovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:33.773 20:27:28 ublk_recovery -- common/autotest_common.sh@864 -- # return 0 00:29:33.774 20:27:28 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:29:33.774 20:27:28 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:33.774 20:27:28 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:29:33.774 [2024-10-01 20:27:28.769721] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:29:33.774 [2024-10-01 20:27:28.771034] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:29:33.774 20:27:28 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:33.774 20:27:28 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:29:33.774 20:27:28 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:33.774 20:27:28 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:29:33.774 malloc0 00:29:33.774 20:27:28 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:33.774 20:27:28 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:29:33.774 20:27:28 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:33.774 20:27:28 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:29:33.774 [2024-10-01 20:27:28.857887] ublk.c:2106:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:29:33.774 [2024-10-01 20:27:28.857940] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:29:33.774 [2024-10-01 20:27:28.857949] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:29:33.774 [2024-10-01 20:27:28.863710] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:29:33.774 [2024-10-01 20:27:28.863746] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:29:33.774 1 00:29:33.774 20:27:28 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:33.774 20:27:28 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 72250 00:29:34.715 [2024-10-01 20:27:29.863788] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:29:34.715 [2024-10-01 20:27:29.868722] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:29:34.715 [2024-10-01 20:27:29.868756] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:29:35.666 [2024-10-01 20:27:30.868804] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:29:35.666 [2024-10-01 20:27:30.869719] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:29:35.666 [2024-10-01 20:27:30.869735] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: 
Ublk 1 device state 1 00:29:37.047 [2024-10-01 20:27:31.869770] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:29:37.047 [2024-10-01 20:27:31.873734] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:29:37.047 [2024-10-01 20:27:31.873748] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:29:37.047 [2024-10-01 20:27:31.873759] ublk.c:2035:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:29:37.047 [2024-10-01 20:27:31.873864] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:29:58.960 [2024-10-01 20:27:53.202728] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:29:58.960 [2024-10-01 20:27:53.206200] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:29:58.960 [2024-10-01 20:27:53.211960] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:29:58.960 [2024-10-01 20:27:53.211997] ublk.c: 413:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:30:25.496 00:30:25.496 fio_test: (groupid=0, jobs=1): err= 0: pid=72253: Tue Oct 1 20:28:17 2024 00:30:25.496 read: IOPS=13.5k, BW=52.9MiB/s (55.4MB/s)(3172MiB/60004msec) 00:30:25.496 slat (nsec): min=893, max=340159, avg=5220.72, stdev=2071.28 00:30:25.496 clat (usec): min=724, max=30535k, avg=4538.49, stdev=262453.01 00:30:25.496 lat (usec): min=730, max=30535k, avg=4543.71, stdev=262453.00 00:30:25.496 clat percentiles (usec): 00:30:25.496 | 1.00th=[ 1745], 5.00th=[ 1926], 10.00th=[ 1975], 20.00th=[ 2024], 00:30:25.496 | 30.00th=[ 2057], 40.00th=[ 2089], 50.00th=[ 2114], 60.00th=[ 2147], 00:30:25.496 | 70.00th=[ 2180], 80.00th=[ 2278], 90.00th=[ 2671], 95.00th=[ 3359], 00:30:25.496 | 99.00th=[ 5342], 99.50th=[ 5800], 99.90th=[ 7635], 99.95th=[11469], 00:30:25.496 | 99.99th=[13435] 00:30:25.496 bw ( KiB/s): min=21840, max=121144, per=100.00%, avg=108438.51, stdev=16405.04, samples=59 00:30:25.496 iops : min= 5460, max=30286, avg=27109.61, stdev=4101.26, samples=59 00:30:25.496 write: IOPS=13.5k, BW=52.8MiB/s (55.4MB/s)(3168MiB/60004msec); 0 zone resets 00:30:25.496 slat (nsec): min=946, max=737407, avg=5296.98, stdev=2329.49 00:30:25.496 clat (usec): min=711, max=30535k, avg=4914.53, stdev=279555.22 00:30:25.496 lat (usec): min=717, max=30535k, avg=4919.83, stdev=279555.21 00:30:25.496 clat percentiles (usec): 00:30:25.496 | 1.00th=[ 1778], 5.00th=[ 2008], 10.00th=[ 2057], 20.00th=[ 2114], 00:30:25.496 | 30.00th=[ 2147], 40.00th=[ 2180], 50.00th=[ 2212], 60.00th=[ 2245], 00:30:25.496 | 70.00th=[ 2278], 80.00th=[ 2376], 90.00th=[ 2704], 95.00th=[ 3294], 00:30:25.496 | 99.00th=[ 5342], 99.50th=[ 5866], 99.90th=[ 7635], 99.95th=[ 8979], 00:30:25.496 | 99.99th=[13435] 00:30:25.496 bw ( KiB/s): min=21240, max=121288, per=100.00%, avg=108270.36, stdev=16462.07, samples=59 00:30:25.496 iops : min= 5310, max=30322, avg=27067.58, stdev=4115.51, samples=59 00:30:25.496 lat (usec) : 750=0.01%, 1000=0.01% 00:30:25.496 lat (msec) : 2=9.97%, 4=87.00%, 10=2.97%, 20=0.04%, >=2000=0.01% 00:30:25.496 cpu : usr=3.40%, sys=14.65%, ctx=57342, majf=0, minf=14 00:30:25.496 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:30:25.496 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:25.496 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:25.496 
issued rwts: total=811953,811058,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:25.496 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:25.496 00:30:25.496 Run status group 0 (all jobs): 00:30:25.496 READ: bw=52.9MiB/s (55.4MB/s), 52.9MiB/s-52.9MiB/s (55.4MB/s-55.4MB/s), io=3172MiB (3326MB), run=60004-60004msec 00:30:25.496 WRITE: bw=52.8MiB/s (55.4MB/s), 52.8MiB/s-52.8MiB/s (55.4MB/s-55.4MB/s), io=3168MiB (3322MB), run=60004-60004msec 00:30:25.496 00:30:25.496 Disk stats (read/write): 00:30:25.496 ublkb1: ios=809038/807950, merge=0/0, ticks=3628050/3862563, in_queue=7490613, util=99.92% 00:30:25.496 20:28:17 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:30:25.496 20:28:17 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:25.496 20:28:17 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:30:25.496 [2024-10-01 20:28:17.944657] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:30:25.496 [2024-10-01 20:28:17.982873] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:30:25.496 [2024-10-01 20:28:17.983083] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:30:25.496 [2024-10-01 20:28:17.990748] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:30:25.496 [2024-10-01 20:28:17.990962] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:30:25.496 [2024-10-01 20:28:17.991024] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:30:25.496 20:28:17 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:25.496 20:28:17 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:30:25.496 20:28:17 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:25.496 20:28:17 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:30:25.496 [2024-10-01 20:28:18.005839] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:30:25.496 [2024-10-01 20:28:18.008213] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:30:25.496 [2024-10-01 20:28:18.008267] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:30:25.496 20:28:18 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:25.496 20:28:18 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:30:25.496 20:28:18 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:30:25.496 20:28:18 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 72361 00:30:25.496 20:28:18 ublk_recovery -- common/autotest_common.sh@950 -- # '[' -z 72361 ']' 00:30:25.496 20:28:18 ublk_recovery -- common/autotest_common.sh@954 -- # kill -0 72361 00:30:25.496 20:28:18 ublk_recovery -- common/autotest_common.sh@955 -- # uname 00:30:25.496 20:28:18 ublk_recovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:25.496 20:28:18 ublk_recovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72361 00:30:25.496 killing process with pid 72361 00:30:25.496 20:28:18 ublk_recovery -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:30:25.496 20:28:18 ublk_recovery -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:30:25.496 20:28:18 ublk_recovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72361' 00:30:25.496 20:28:18 ublk_recovery -- common/autotest_common.sh@969 -- # kill 72361 00:30:25.496 20:28:18 ublk_recovery -- common/autotest_common.sh@974 -- # wait 72361 
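Condensed from the commands in this log, the recovery path that let fio ride out the outage is:
"$rpc" ublk_create_target
"$rpc" bdev_malloc_create -b malloc0 64 4096   # recreate the backing bdev with the same geometry
"$rpc" ublk_recover_disk malloc0 1             # re-attach the surviving /dev/ublkb1 (device state 1)
wait "$fio_pid"                                # fio then finishes its full 60 s run
The clat max of roughly 30.5 s in the summary above is consistent with I/O stalling, rather than failing, for the window between the kill and the "recover done successfully" message.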
00:30:25.496 [2024-10-01 20:28:19.251748] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:30:25.496 [2024-10-01 20:28:19.251829] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:30:25.496 ************************************ 00:30:25.496 END TEST ublk_recovery 00:30:25.496 ************************************ 00:30:25.496 00:30:25.496 real 1m5.289s 00:30:25.496 user 1m48.471s 00:30:25.496 sys 0m22.077s 00:30:25.497 20:28:20 ublk_recovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:25.497 20:28:20 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:30:25.497 20:28:20 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:30:25.497 20:28:20 -- spdk/autotest.sh@256 -- # timing_exit lib 00:30:25.497 20:28:20 -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:25.497 20:28:20 -- common/autotest_common.sh@10 -- # set +x 00:30:25.756 20:28:20 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:30:25.756 20:28:20 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:30:25.756 20:28:20 -- spdk/autotest.sh@272 -- # '[' 0 -eq 1 ']' 00:30:25.756 20:28:20 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:30:25.756 20:28:20 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:30:25.756 20:28:20 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:30:25.756 20:28:20 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:30:25.756 20:28:20 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:30:25.756 20:28:20 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:30:25.756 20:28:20 -- spdk/autotest.sh@338 -- # '[' 1 -eq 1 ']' 00:30:25.756 20:28:20 -- spdk/autotest.sh@339 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:30:25.756 20:28:20 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:30:25.756 20:28:20 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:25.756 20:28:20 -- common/autotest_common.sh@10 -- # set +x 00:30:25.756 ************************************ 00:30:25.756 START TEST ftl 00:30:25.756 ************************************ 00:30:25.756 20:28:20 ftl -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:30:25.756 * Looking for test storage... 
00:30:25.756 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:30:25.756 20:28:20 ftl -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:30:25.756 20:28:20 ftl -- common/autotest_common.sh@1681 -- # lcov --version 00:30:25.756 20:28:20 ftl -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:30:25.756 20:28:20 ftl -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:30:25.756 20:28:20 ftl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:25.756 20:28:20 ftl -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:25.756 20:28:20 ftl -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:25.756 20:28:20 ftl -- scripts/common.sh@336 -- # IFS=.-: 00:30:25.756 20:28:20 ftl -- scripts/common.sh@336 -- # read -ra ver1 00:30:25.756 20:28:20 ftl -- scripts/common.sh@337 -- # IFS=.-: 00:30:25.756 20:28:20 ftl -- scripts/common.sh@337 -- # read -ra ver2 00:30:25.756 20:28:20 ftl -- scripts/common.sh@338 -- # local 'op=<' 00:30:25.756 20:28:20 ftl -- scripts/common.sh@340 -- # ver1_l=2 00:30:25.756 20:28:20 ftl -- scripts/common.sh@341 -- # ver2_l=1 00:30:25.756 20:28:20 ftl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:25.756 20:28:20 ftl -- scripts/common.sh@344 -- # case "$op" in 00:30:25.756 20:28:20 ftl -- scripts/common.sh@345 -- # : 1 00:30:25.756 20:28:20 ftl -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:25.756 20:28:20 ftl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:25.756 20:28:20 ftl -- scripts/common.sh@365 -- # decimal 1 00:30:25.756 20:28:20 ftl -- scripts/common.sh@353 -- # local d=1 00:30:25.756 20:28:20 ftl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:25.756 20:28:20 ftl -- scripts/common.sh@355 -- # echo 1 00:30:25.756 20:28:20 ftl -- scripts/common.sh@365 -- # ver1[v]=1 00:30:25.756 20:28:20 ftl -- scripts/common.sh@366 -- # decimal 2 00:30:25.756 20:28:20 ftl -- scripts/common.sh@353 -- # local d=2 00:30:25.756 20:28:20 ftl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:25.756 20:28:20 ftl -- scripts/common.sh@355 -- # echo 2 00:30:25.756 20:28:20 ftl -- scripts/common.sh@366 -- # ver2[v]=2 00:30:25.756 20:28:20 ftl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:25.756 20:28:20 ftl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:25.756 20:28:20 ftl -- scripts/common.sh@368 -- # return 0 00:30:25.756 20:28:20 ftl -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:25.756 20:28:20 ftl -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:30:25.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:25.756 --rc genhtml_branch_coverage=1 00:30:25.756 --rc genhtml_function_coverage=1 00:30:25.756 --rc genhtml_legend=1 00:30:25.756 --rc geninfo_all_blocks=1 00:30:25.756 --rc geninfo_unexecuted_blocks=1 00:30:25.756 00:30:25.756 ' 00:30:25.756 20:28:20 ftl -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:30:25.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:25.756 --rc genhtml_branch_coverage=1 00:30:25.756 --rc genhtml_function_coverage=1 00:30:25.756 --rc genhtml_legend=1 00:30:25.756 --rc geninfo_all_blocks=1 00:30:25.756 --rc geninfo_unexecuted_blocks=1 00:30:25.756 00:30:25.756 ' 00:30:25.756 20:28:20 ftl -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:30:25.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:25.756 --rc genhtml_branch_coverage=1 00:30:25.756 --rc genhtml_function_coverage=1 00:30:25.756 --rc 
genhtml_legend=1 00:30:25.756 --rc geninfo_all_blocks=1 00:30:25.756 --rc geninfo_unexecuted_blocks=1 00:30:25.756 00:30:25.756 ' 00:30:25.757 20:28:20 ftl -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:30:25.757 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:25.757 --rc genhtml_branch_coverage=1 00:30:25.757 --rc genhtml_function_coverage=1 00:30:25.757 --rc genhtml_legend=1 00:30:25.757 --rc geninfo_all_blocks=1 00:30:25.757 --rc geninfo_unexecuted_blocks=1 00:30:25.757 00:30:25.757 ' 00:30:25.757 20:28:20 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:30:25.757 20:28:20 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:30:25.757 20:28:20 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:30:25.757 20:28:20 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:30:25.757 20:28:20 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:30:25.757 20:28:20 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:30:25.757 20:28:20 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:30:25.757 20:28:20 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:30:25.757 20:28:20 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:30:25.757 20:28:20 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:25.757 20:28:20 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:25.757 20:28:20 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:30:25.757 20:28:20 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:30:25.757 20:28:20 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:30:25.757 20:28:20 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:30:25.757 20:28:20 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:30:25.757 20:28:20 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:30:25.757 20:28:20 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:25.757 20:28:20 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:25.757 20:28:20 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:30:25.757 20:28:20 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:30:25.757 20:28:20 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:30:25.757 20:28:20 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:30:25.757 20:28:20 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:30:25.757 20:28:20 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:30:25.757 20:28:20 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:30:25.757 20:28:20 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 00:30:25.757 20:28:20 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:25.757 20:28:20 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:25.757 20:28:20 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:30:25.757 20:28:20 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:30:25.757 20:28:20 ftl -- ftl/ftl.sh@34 -- # 
PCI_ALLOWED= 00:30:25.757 20:28:20 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:30:25.757 20:28:20 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:30:25.757 20:28:20 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:30:26.016 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:30:26.016 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:30:26.016 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:30:26.016 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:30:26.016 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:30:26.275 20:28:21 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=73179 00:30:26.275 20:28:21 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:30:26.275 20:28:21 ftl -- ftl/ftl.sh@38 -- # waitforlisten 73179 00:30:26.275 20:28:21 ftl -- common/autotest_common.sh@831 -- # '[' -z 73179 ']' 00:30:26.275 20:28:21 ftl -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:26.275 20:28:21 ftl -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:26.275 20:28:21 ftl -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:26.275 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:26.275 20:28:21 ftl -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:26.275 20:28:21 ftl -- common/autotest_common.sh@10 -- # set +x 00:30:26.275 [2024-10-01 20:28:21.336980] Starting SPDK v25.01-pre git sha1 0c2005fb5 / DPDK 24.03.0 initialization... 00:30:26.275 [2024-10-01 20:28:21.337376] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73179 ] 00:30:26.534 [2024-10-01 20:28:21.506414] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:26.534 [2024-10-01 20:28:21.669489] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:30:27.107 20:28:22 ftl -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:27.107 20:28:22 ftl -- common/autotest_common.sh@864 -- # return 0 00:30:27.107 20:28:22 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:30:27.107 20:28:22 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:30:28.112 20:28:23 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:30:28.112 20:28:23 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:30:28.679 20:28:23 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:30:28.679 20:28:23 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:30:28.679 20:28:23 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:30:28.679 20:28:23 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:30:28.679 20:28:23 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:30:28.679 20:28:23 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:30:28.679 20:28:23 ftl -- ftl/ftl.sh@50 -- # break 00:30:28.679 20:28:23 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:30:28.679 20:28:23 ftl -- 
ftl/ftl.sh@59 -- # base_size=1310720 00:30:28.679 20:28:23 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:30:28.679 20:28:23 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:30:28.937 20:28:23 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:30:28.937 20:28:23 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:30:28.937 20:28:23 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:30:28.937 20:28:23 ftl -- ftl/ftl.sh@63 -- # break 00:30:28.937 20:28:23 ftl -- ftl/ftl.sh@66 -- # killprocess 73179 00:30:28.937 20:28:23 ftl -- common/autotest_common.sh@950 -- # '[' -z 73179 ']' 00:30:28.937 20:28:23 ftl -- common/autotest_common.sh@954 -- # kill -0 73179 00:30:28.937 20:28:23 ftl -- common/autotest_common.sh@955 -- # uname 00:30:28.937 20:28:23 ftl -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:28.937 20:28:23 ftl -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73179 00:30:28.937 killing process with pid 73179 00:30:28.937 20:28:23 ftl -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:30:28.937 20:28:23 ftl -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:30:28.937 20:28:23 ftl -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73179' 00:30:28.937 20:28:23 ftl -- common/autotest_common.sh@969 -- # kill 73179 00:30:28.937 20:28:23 ftl -- common/autotest_common.sh@974 -- # wait 73179 00:30:30.836 20:28:25 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:30:30.836 20:28:25 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:30:30.836 20:28:25 ftl -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:30:30.836 20:28:25 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:30.836 20:28:25 ftl -- common/autotest_common.sh@10 -- # set +x 00:30:30.836 ************************************ 00:30:30.836 START TEST ftl_fio_basic 00:30:30.836 ************************************ 00:30:30.836 20:28:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:30:30.836 * Looking for test storage... 
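The disk-selection step just run can be read as the sketch below; both jq filters are copied from this log, and hardcoding 0000:00:10.0 in the second filter mirrors what the first one resolved to on this host (in the script it is substituted dynamically).
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
cache_disks=$("$rpc" bdev_get_bdevs | jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address')   # NVMe with 64-byte metadata becomes the NV cache
base_disks=$("$rpc" bdev_get_bdevs | jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address')   # any other large non-zoned namespace can be the base device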
00:30:30.836 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:30:30.836 20:28:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:30:30.836 20:28:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1681 -- # lcov --version 00:30:30.836 20:28:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:30:30.836 20:28:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:30:30.836 20:28:25 ftl.ftl_fio_basic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:30.836 20:28:25 ftl.ftl_fio_basic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:30.836 20:28:25 ftl.ftl_fio_basic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:30.836 20:28:25 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # IFS=.-: 00:30:30.836 20:28:25 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # read -ra ver1 00:30:30.836 20:28:25 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # IFS=.-: 00:30:30.836 20:28:25 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # read -ra ver2 00:30:30.836 20:28:25 ftl.ftl_fio_basic -- scripts/common.sh@338 -- # local 'op=<' 00:30:30.836 20:28:25 ftl.ftl_fio_basic -- scripts/common.sh@340 -- # ver1_l=2 00:30:30.836 20:28:25 ftl.ftl_fio_basic -- scripts/common.sh@341 -- # ver2_l=1 00:30:30.836 20:28:25 ftl.ftl_fio_basic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:30.836 20:28:25 ftl.ftl_fio_basic -- scripts/common.sh@344 -- # case "$op" in 00:30:30.836 20:28:25 ftl.ftl_fio_basic -- scripts/common.sh@345 -- # : 1 00:30:30.836 20:28:25 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:30.836 20:28:25 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:30.836 20:28:25 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # decimal 1 00:30:30.836 20:28:25 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=1 00:30:30.836 20:28:25 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:30.836 20:28:25 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 1 00:30:30.836 20:28:25 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # ver1[v]=1 00:30:30.836 20:28:25 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # decimal 2 00:30:30.836 20:28:25 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=2 00:30:30.836 20:28:25 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:30.836 20:28:25 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 2 00:30:30.836 20:28:25 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # ver2[v]=2 00:30:30.836 20:28:25 ftl.ftl_fio_basic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:30.836 20:28:25 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:30.836 20:28:25 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # return 0 00:30:30.836 20:28:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:30.836 20:28:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:30:30.836 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:30.836 --rc genhtml_branch_coverage=1 00:30:30.836 --rc genhtml_function_coverage=1 00:30:30.836 --rc genhtml_legend=1 00:30:30.836 --rc geninfo_all_blocks=1 00:30:30.836 --rc geninfo_unexecuted_blocks=1 00:30:30.836 00:30:30.836 ' 00:30:30.836 20:28:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:30:30.836 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:30.836 --rc 
genhtml_branch_coverage=1 00:30:30.836 --rc genhtml_function_coverage=1 00:30:30.836 --rc genhtml_legend=1 00:30:30.836 --rc geninfo_all_blocks=1 00:30:30.836 --rc geninfo_unexecuted_blocks=1 00:30:30.836 00:30:30.836 ' 00:30:30.836 20:28:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:30:30.836 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:30.836 --rc genhtml_branch_coverage=1 00:30:30.836 --rc genhtml_function_coverage=1 00:30:30.836 --rc genhtml_legend=1 00:30:30.836 --rc geninfo_all_blocks=1 00:30:30.836 --rc geninfo_unexecuted_blocks=1 00:30:30.836 00:30:30.836 ' 00:30:30.836 20:28:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:30:30.836 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:30.836 --rc genhtml_branch_coverage=1 00:30:30.836 --rc genhtml_function_coverage=1 00:30:30.836 --rc genhtml_legend=1 00:30:30.836 --rc geninfo_all_blocks=1 00:30:30.836 --rc geninfo_unexecuted_blocks=1 00:30:30.836 00:30:30.836 ' 00:30:30.836 20:28:25 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:30:30.836 20:28:25 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:30:30.836 20:28:25 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:30:30.836 20:28:25 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:30:30.836 20:28:25 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:30:30.836 20:28:25 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:30:30.836 20:28:25 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:30:30.836 20:28:25 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:30:30.836 20:28:25 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:30:30.836 20:28:25 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:30.836 20:28:25 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:30.836 20:28:25 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:30:30.836 20:28:25 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:30:30.836 20:28:25 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:30:30.836 20:28:25 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:30:30.836 20:28:25 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:30:30.836 20:28:25 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:30:30.836 20:28:25 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:30.836 20:28:25 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:30.836 20:28:25 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:30:30.836 20:28:25 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:30:30.837 20:28:25 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:30:30.837 20:28:25 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:30:30.837 
20:28:25 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:30:30.837 20:28:25 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:30:30.837 20:28:25 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:30:30.837 20:28:25 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:30:30.837 20:28:25 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:30.837 20:28:25 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:30.837 20:28:25 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:30:30.837 20:28:25 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:30:30.837 20:28:25 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:30:30.837 20:28:25 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:30:30.837 20:28:25 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:30:30.837 20:28:25 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:30:30.837 20:28:25 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:30:30.837 20:28:25 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 randw-verify-depth128' 00:30:30.837 20:28:25 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:30:30.837 20:28:25 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:30:30.837 20:28:25 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:30:30.837 20:28:25 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:30:30.837 20:28:25 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:30:30.837 20:28:25 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:30:30.837 20:28:25 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:30:30.837 20:28:25 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:30:30.837 20:28:25 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:30:30.837 20:28:25 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=73312 00:30:30.837 20:28:25 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:30:30.837 20:28:25 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 73312 00:30:30.837 20:28:25 ftl.ftl_fio_basic -- common/autotest_common.sh@831 -- # '[' -z 73312 ']' 00:30:30.837 20:28:25 ftl.ftl_fio_basic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:30.837 20:28:25 ftl.ftl_fio_basic -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:30.837 20:28:25 ftl.ftl_fio_basic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:30.837 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
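The 'ftl/fio.sh' preamble traced above drives the whole run from a bash associative array: the third positional argument ('basic' here) selects a space-separated fio workload list, and FTL_BDEV_NAME/FTL_JSON_CONF are exported for the fio job files; 'spdk_tgt -m 7' then starts the target on core mask 0b111, which is why three reactors (cores 0-2) come up in the EAL output that follows. A condensed sketch of that dispatch, with the workload lists copied from the trace:

    #!/usr/bin/env bash
    # Map the test mode (third script argument) to its fio workload list.
    declare -A suite
    suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128'
    suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap'
    suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght'
    mode=${3:-basic}
    tests=${suite[$mode]}
    [ -n "$tests" ] || { echo "unknown suite: $mode" >&2; exit 1; }
    export FTL_BDEV_NAME=ftl0
    echo "workloads: $tests"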
00:30:30.837 20:28:25 ftl.ftl_fio_basic -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:30.837 20:28:25 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:30:30.837 [2024-10-01 20:28:25.847654] Starting SPDK v25.01-pre git sha1 0c2005fb5 / DPDK 24.03.0 initialization... 00:30:30.837 [2024-10-01 20:28:25.847993] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73312 ] 00:30:30.837 [2024-10-01 20:28:26.001800] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:31.096 [2024-10-01 20:28:26.167515] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:30:31.096 [2024-10-01 20:28:26.167583] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:30:31.096 [2024-10-01 20:28:26.167597] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:30:31.661 20:28:26 ftl.ftl_fio_basic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:31.661 20:28:26 ftl.ftl_fio_basic -- common/autotest_common.sh@864 -- # return 0 00:30:31.661 20:28:26 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:30:31.661 20:28:26 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:30:31.661 20:28:26 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:30:31.661 20:28:26 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:30:31.661 20:28:26 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:30:31.661 20:28:26 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:30:32.226 20:28:27 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:30:32.226 20:28:27 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:30:32.226 20:28:27 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:30:32.226 20:28:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:30:32.226 20:28:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:30:32.226 20:28:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:30:32.226 20:28:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:30:32.226 20:28:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:30:32.226 20:28:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:30:32.226 { 00:30:32.226 "name": "nvme0n1", 00:30:32.226 "aliases": [ 00:30:32.226 "8183fc68-7488-4ad8-902c-567592866386" 00:30:32.226 ], 00:30:32.226 "product_name": "NVMe disk", 00:30:32.226 "block_size": 4096, 00:30:32.226 "num_blocks": 1310720, 00:30:32.226 "uuid": "8183fc68-7488-4ad8-902c-567592866386", 00:30:32.226 "numa_id": -1, 00:30:32.226 "assigned_rate_limits": { 00:30:32.226 "rw_ios_per_sec": 0, 00:30:32.226 "rw_mbytes_per_sec": 0, 00:30:32.226 "r_mbytes_per_sec": 0, 00:30:32.226 "w_mbytes_per_sec": 0 00:30:32.226 }, 00:30:32.226 "claimed": false, 00:30:32.226 "zoned": false, 00:30:32.226 "supported_io_types": { 00:30:32.226 "read": true, 00:30:32.226 "write": true, 00:30:32.226 "unmap": true, 00:30:32.226 "flush": true, 00:30:32.226 "reset": true, 00:30:32.226 "nvme_admin": true, 00:30:32.226 "nvme_io": true, 00:30:32.226 "nvme_io_md": 
false, 00:30:32.226 "write_zeroes": true, 00:30:32.226 "zcopy": false, 00:30:32.226 "get_zone_info": false, 00:30:32.226 "zone_management": false, 00:30:32.226 "zone_append": false, 00:30:32.226 "compare": true, 00:30:32.226 "compare_and_write": false, 00:30:32.226 "abort": true, 00:30:32.226 "seek_hole": false, 00:30:32.226 "seek_data": false, 00:30:32.226 "copy": true, 00:30:32.226 "nvme_iov_md": false 00:30:32.226 }, 00:30:32.226 "driver_specific": { 00:30:32.226 "nvme": [ 00:30:32.226 { 00:30:32.226 "pci_address": "0000:00:11.0", 00:30:32.226 "trid": { 00:30:32.226 "trtype": "PCIe", 00:30:32.226 "traddr": "0000:00:11.0" 00:30:32.226 }, 00:30:32.226 "ctrlr_data": { 00:30:32.226 "cntlid": 0, 00:30:32.226 "vendor_id": "0x1b36", 00:30:32.226 "model_number": "QEMU NVMe Ctrl", 00:30:32.226 "serial_number": "12341", 00:30:32.226 "firmware_revision": "8.0.0", 00:30:32.226 "subnqn": "nqn.2019-08.org.qemu:12341", 00:30:32.226 "oacs": { 00:30:32.226 "security": 0, 00:30:32.226 "format": 1, 00:30:32.226 "firmware": 0, 00:30:32.226 "ns_manage": 1 00:30:32.226 }, 00:30:32.226 "multi_ctrlr": false, 00:30:32.226 "ana_reporting": false 00:30:32.226 }, 00:30:32.226 "vs": { 00:30:32.226 "nvme_version": "1.4" 00:30:32.226 }, 00:30:32.226 "ns_data": { 00:30:32.226 "id": 1, 00:30:32.226 "can_share": false 00:30:32.226 } 00:30:32.226 } 00:30:32.226 ], 00:30:32.226 "mp_policy": "active_passive" 00:30:32.226 } 00:30:32.226 } 00:30:32.226 ]' 00:30:32.226 20:28:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:30:32.226 20:28:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:30:32.226 20:28:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:30:32.226 20:28:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=1310720 00:30:32.226 20:28:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:30:32.226 20:28:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 5120 00:30:32.226 20:28:27 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:30:32.226 20:28:27 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:30:32.227 20:28:27 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:30:32.227 20:28:27 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:32.227 20:28:27 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:30:32.484 20:28:27 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:30:32.484 20:28:27 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:30:32.741 20:28:27 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=c4daf069-23bc-4604-871d-3454e2158b2a 00:30:32.741 20:28:27 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u c4daf069-23bc-4604-871d-3454e2158b2a 00:30:32.741 20:28:27 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=ced35c05-619d-4b64-b858-58087e9bc5d5 00:30:32.998 20:28:27 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 ced35c05-619d-4b64-b858-58087e9bc5d5 00:30:32.998 20:28:27 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:30:32.998 20:28:27 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:30:32.998 20:28:27 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=ced35c05-619d-4b64-b858-58087e9bc5d5 00:30:32.998 20:28:27 
ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:30:32.998 20:28:27 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size ced35c05-619d-4b64-b858-58087e9bc5d5 00:30:32.998 20:28:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=ced35c05-619d-4b64-b858-58087e9bc5d5 00:30:32.998 20:28:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:30:32.998 20:28:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:30:32.998 20:28:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:30:32.998 20:28:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ced35c05-619d-4b64-b858-58087e9bc5d5 00:30:32.998 20:28:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:30:32.998 { 00:30:32.998 "name": "ced35c05-619d-4b64-b858-58087e9bc5d5", 00:30:32.998 "aliases": [ 00:30:32.998 "lvs/nvme0n1p0" 00:30:32.998 ], 00:30:32.998 "product_name": "Logical Volume", 00:30:32.998 "block_size": 4096, 00:30:32.998 "num_blocks": 26476544, 00:30:32.998 "uuid": "ced35c05-619d-4b64-b858-58087e9bc5d5", 00:30:32.998 "assigned_rate_limits": { 00:30:32.998 "rw_ios_per_sec": 0, 00:30:32.998 "rw_mbytes_per_sec": 0, 00:30:32.998 "r_mbytes_per_sec": 0, 00:30:32.998 "w_mbytes_per_sec": 0 00:30:32.998 }, 00:30:32.998 "claimed": false, 00:30:32.998 "zoned": false, 00:30:32.998 "supported_io_types": { 00:30:32.998 "read": true, 00:30:32.998 "write": true, 00:30:32.998 "unmap": true, 00:30:32.998 "flush": false, 00:30:32.998 "reset": true, 00:30:32.998 "nvme_admin": false, 00:30:32.998 "nvme_io": false, 00:30:32.998 "nvme_io_md": false, 00:30:32.998 "write_zeroes": true, 00:30:32.998 "zcopy": false, 00:30:32.998 "get_zone_info": false, 00:30:32.998 "zone_management": false, 00:30:32.998 "zone_append": false, 00:30:32.998 "compare": false, 00:30:32.998 "compare_and_write": false, 00:30:32.998 "abort": false, 00:30:32.998 "seek_hole": true, 00:30:32.998 "seek_data": true, 00:30:32.998 "copy": false, 00:30:32.998 "nvme_iov_md": false 00:30:32.998 }, 00:30:32.998 "driver_specific": { 00:30:32.998 "lvol": { 00:30:32.998 "lvol_store_uuid": "c4daf069-23bc-4604-871d-3454e2158b2a", 00:30:32.998 "base_bdev": "nvme0n1", 00:30:32.998 "thin_provision": true, 00:30:32.998 "num_allocated_clusters": 0, 00:30:32.998 "snapshot": false, 00:30:32.998 "clone": false, 00:30:32.998 "esnap_clone": false 00:30:32.998 } 00:30:32.998 } 00:30:32.998 } 00:30:32.998 ]' 00:30:32.998 20:28:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:30:32.998 20:28:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:30:32.998 20:28:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:30:32.998 20:28:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=26476544 00:30:32.998 20:28:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:30:32.998 20:28:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 103424 00:30:32.998 20:28:28 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:30:32.998 20:28:28 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:30:32.999 20:28:28 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:30:33.256 20:28:28 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:30:33.256 20:28:28 ftl.ftl_fio_basic -- 
ftl/common.sh@47 -- # [[ -z '' ]] 00:30:33.256 20:28:28 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size ced35c05-619d-4b64-b858-58087e9bc5d5 00:30:33.256 20:28:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=ced35c05-619d-4b64-b858-58087e9bc5d5 00:30:33.256 20:28:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:30:33.256 20:28:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:30:33.256 20:28:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:30:33.256 20:28:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ced35c05-619d-4b64-b858-58087e9bc5d5 00:30:33.513 20:28:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:30:33.513 { 00:30:33.513 "name": "ced35c05-619d-4b64-b858-58087e9bc5d5", 00:30:33.513 "aliases": [ 00:30:33.513 "lvs/nvme0n1p0" 00:30:33.513 ], 00:30:33.513 "product_name": "Logical Volume", 00:30:33.513 "block_size": 4096, 00:30:33.513 "num_blocks": 26476544, 00:30:33.513 "uuid": "ced35c05-619d-4b64-b858-58087e9bc5d5", 00:30:33.513 "assigned_rate_limits": { 00:30:33.513 "rw_ios_per_sec": 0, 00:30:33.513 "rw_mbytes_per_sec": 0, 00:30:33.513 "r_mbytes_per_sec": 0, 00:30:33.513 "w_mbytes_per_sec": 0 00:30:33.513 }, 00:30:33.513 "claimed": false, 00:30:33.513 "zoned": false, 00:30:33.513 "supported_io_types": { 00:30:33.513 "read": true, 00:30:33.513 "write": true, 00:30:33.513 "unmap": true, 00:30:33.513 "flush": false, 00:30:33.513 "reset": true, 00:30:33.513 "nvme_admin": false, 00:30:33.513 "nvme_io": false, 00:30:33.513 "nvme_io_md": false, 00:30:33.513 "write_zeroes": true, 00:30:33.513 "zcopy": false, 00:30:33.513 "get_zone_info": false, 00:30:33.513 "zone_management": false, 00:30:33.513 "zone_append": false, 00:30:33.513 "compare": false, 00:30:33.513 "compare_and_write": false, 00:30:33.513 "abort": false, 00:30:33.513 "seek_hole": true, 00:30:33.513 "seek_data": true, 00:30:33.513 "copy": false, 00:30:33.513 "nvme_iov_md": false 00:30:33.513 }, 00:30:33.513 "driver_specific": { 00:30:33.513 "lvol": { 00:30:33.513 "lvol_store_uuid": "c4daf069-23bc-4604-871d-3454e2158b2a", 00:30:33.513 "base_bdev": "nvme0n1", 00:30:33.513 "thin_provision": true, 00:30:33.513 "num_allocated_clusters": 0, 00:30:33.513 "snapshot": false, 00:30:33.513 "clone": false, 00:30:33.513 "esnap_clone": false 00:30:33.513 } 00:30:33.513 } 00:30:33.513 } 00:30:33.513 ]' 00:30:33.513 20:28:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:30:33.513 20:28:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:30:33.513 20:28:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:30:33.513 20:28:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=26476544 00:30:33.513 20:28:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:30:33.513 20:28:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 103424 00:30:33.513 20:28:28 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:30:33.513 20:28:28 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:30:33.770 20:28:28 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:30:33.771 20:28:28 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60 00:30:33.771 20:28:28 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:30:33.771 
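The "'[' -eq 1 ']'" trace just above captures a classic bash pitfall: the variable on the left of -eq expanded to nothing, so test(1) sees a unary expression and prints the "line 52: [: -eq: unary operator expected" error that follows. The test merely returns non-zero and the script falls through to the 'ftl/fio.sh@56' step traced below, so the run is unaffected. A hedged sketch of the failure and the usual fix — the real variable name is not visible in the trace, so 'l2p_in_band' below is a placeholder:

    # Reproduction: an unset/empty variable left of -eq trips test(1).
    unset l2p_in_band                 # placeholder name, not from fio.sh
    [ $l2p_in_band -eq 1 ]            # -> "[: -eq: unary operator expected"
    # Fix: quote the expansion and give it a default value.
    [ "${l2p_in_band:-0}" -eq 1 ]     # -> cleanly false when unset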
/home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:30:33.771 20:28:28 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size ced35c05-619d-4b64-b858-58087e9bc5d5 00:30:33.771 20:28:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=ced35c05-619d-4b64-b858-58087e9bc5d5 00:30:33.771 20:28:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:30:33.771 20:28:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:30:33.771 20:28:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:30:33.771 20:28:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ced35c05-619d-4b64-b858-58087e9bc5d5 00:30:33.771 20:28:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:30:33.771 { 00:30:33.771 "name": "ced35c05-619d-4b64-b858-58087e9bc5d5", 00:30:33.771 "aliases": [ 00:30:33.771 "lvs/nvme0n1p0" 00:30:33.771 ], 00:30:33.771 "product_name": "Logical Volume", 00:30:33.771 "block_size": 4096, 00:30:33.771 "num_blocks": 26476544, 00:30:33.771 "uuid": "ced35c05-619d-4b64-b858-58087e9bc5d5", 00:30:33.771 "assigned_rate_limits": { 00:30:33.771 "rw_ios_per_sec": 0, 00:30:33.771 "rw_mbytes_per_sec": 0, 00:30:33.771 "r_mbytes_per_sec": 0, 00:30:33.771 "w_mbytes_per_sec": 0 00:30:33.771 }, 00:30:33.771 "claimed": false, 00:30:33.771 "zoned": false, 00:30:33.771 "supported_io_types": { 00:30:33.771 "read": true, 00:30:33.771 "write": true, 00:30:33.771 "unmap": true, 00:30:33.771 "flush": false, 00:30:33.771 "reset": true, 00:30:33.771 "nvme_admin": false, 00:30:33.771 "nvme_io": false, 00:30:33.771 "nvme_io_md": false, 00:30:33.771 "write_zeroes": true, 00:30:33.771 "zcopy": false, 00:30:33.771 "get_zone_info": false, 00:30:33.771 "zone_management": false, 00:30:33.771 "zone_append": false, 00:30:33.771 "compare": false, 00:30:33.771 "compare_and_write": false, 00:30:33.771 "abort": false, 00:30:33.771 "seek_hole": true, 00:30:33.771 "seek_data": true, 00:30:33.771 "copy": false, 00:30:33.771 "nvme_iov_md": false 00:30:33.771 }, 00:30:33.771 "driver_specific": { 00:30:33.771 "lvol": { 00:30:33.771 "lvol_store_uuid": "c4daf069-23bc-4604-871d-3454e2158b2a", 00:30:33.771 "base_bdev": "nvme0n1", 00:30:33.771 "thin_provision": true, 00:30:33.771 "num_allocated_clusters": 0, 00:30:33.771 "snapshot": false, 00:30:33.771 "clone": false, 00:30:33.771 "esnap_clone": false 00:30:33.771 } 00:30:33.771 } 00:30:33.771 } 00:30:33.771 ]' 00:30:33.771 20:28:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:30:34.029 20:28:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:30:34.029 20:28:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:30:34.029 20:28:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=26476544 00:30:34.029 20:28:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:30:34.029 20:28:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 103424 00:30:34.029 20:28:29 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:30:34.029 20:28:29 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:30:34.029 20:28:29 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d ced35c05-619d-4b64-b858-58087e9bc5d5 -c nvc0n1p0 --l2p_dram_limit 60 00:30:34.029 [2024-10-01 20:28:29.172450] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:34.029 [2024-10-01 20:28:29.172498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:30:34.029 [2024-10-01 20:28:29.172511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:30:34.029 [2024-10-01 20:28:29.172518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:34.029 [2024-10-01 20:28:29.172572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:34.029 [2024-10-01 20:28:29.172581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:34.029 [2024-10-01 20:28:29.172589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:30:34.029 [2024-10-01 20:28:29.172603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:34.029 [2024-10-01 20:28:29.172632] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:30:34.029 [2024-10-01 20:28:29.173281] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:30:34.029 [2024-10-01 20:28:29.173309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:34.029 [2024-10-01 20:28:29.173316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:34.029 [2024-10-01 20:28:29.173324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.687 ms 00:30:34.029 [2024-10-01 20:28:29.173332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:34.029 [2024-10-01 20:28:29.173423] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 016fcec1-9118-4aef-89e7-3aa7561aea62 00:30:34.029 [2024-10-01 20:28:29.174526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:34.029 [2024-10-01 20:28:29.174552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:30:34.029 [2024-10-01 20:28:29.174561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:30:34.029 [2024-10-01 20:28:29.174568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:34.029 [2024-10-01 20:28:29.180331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:34.029 [2024-10-01 20:28:29.180369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:34.029 [2024-10-01 20:28:29.180378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.717 ms 00:30:34.029 [2024-10-01 20:28:29.180387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:34.029 [2024-10-01 20:28:29.180479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:34.029 [2024-10-01 20:28:29.180488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:34.029 [2024-10-01 20:28:29.180495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:30:34.029 [2024-10-01 20:28:29.180507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:34.029 [2024-10-01 20:28:29.180549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:34.029 [2024-10-01 20:28:29.180558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:30:34.029 [2024-10-01 20:28:29.180565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:30:34.029 [2024-10-01 20:28:29.180572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:30:34.029 [2024-10-01 20:28:29.180614] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:30:34.029 [2024-10-01 20:28:29.184046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:34.029 [2024-10-01 20:28:29.184074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:34.029 [2024-10-01 20:28:29.184086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.453 ms 00:30:34.029 [2024-10-01 20:28:29.184092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:34.029 [2024-10-01 20:28:29.184129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:34.029 [2024-10-01 20:28:29.184135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:30:34.029 [2024-10-01 20:28:29.184143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:30:34.029 [2024-10-01 20:28:29.184151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:34.029 [2024-10-01 20:28:29.184188] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:30:34.029 [2024-10-01 20:28:29.184306] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:30:34.029 [2024-10-01 20:28:29.184321] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:30:34.029 [2024-10-01 20:28:29.184331] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:30:34.029 [2024-10-01 20:28:29.184342] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:30:34.029 [2024-10-01 20:28:29.184349] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:30:34.029 [2024-10-01 20:28:29.184357] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:30:34.029 [2024-10-01 20:28:29.184364] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:30:34.029 [2024-10-01 20:28:29.184371] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:30:34.029 [2024-10-01 20:28:29.184377] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:30:34.029 [2024-10-01 20:28:29.184384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:34.029 [2024-10-01 20:28:29.184391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:30:34.029 [2024-10-01 20:28:29.184398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.198 ms 00:30:34.029 [2024-10-01 20:28:29.184404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:34.029 [2024-10-01 20:28:29.184473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:34.029 [2024-10-01 20:28:29.184482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:30:34.029 [2024-10-01 20:28:29.184490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:30:34.029 [2024-10-01 20:28:29.184495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:34.029 [2024-10-01 20:28:29.184608] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:30:34.029 [2024-10-01 20:28:29.184616] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:30:34.029 
[2024-10-01 20:28:29.184624] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:34.029 [2024-10-01 20:28:29.184631] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:34.029 [2024-10-01 20:28:29.184640] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:30:34.029 [2024-10-01 20:28:29.184646] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:30:34.029 [2024-10-01 20:28:29.184653] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:30:34.029 [2024-10-01 20:28:29.184659] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:30:34.029 [2024-10-01 20:28:29.184665] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:30:34.029 [2024-10-01 20:28:29.184671] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:34.029 [2024-10-01 20:28:29.184677] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:30:34.029 [2024-10-01 20:28:29.184683] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:30:34.029 [2024-10-01 20:28:29.184698] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:34.029 [2024-10-01 20:28:29.184704] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:30:34.029 [2024-10-01 20:28:29.184711] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:30:34.029 [2024-10-01 20:28:29.184716] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:34.029 [2024-10-01 20:28:29.184724] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:30:34.029 [2024-10-01 20:28:29.184730] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:30:34.030 [2024-10-01 20:28:29.184738] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:34.030 [2024-10-01 20:28:29.184744] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:30:34.030 [2024-10-01 20:28:29.184751] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:30:34.030 [2024-10-01 20:28:29.184757] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:34.030 [2024-10-01 20:28:29.184763] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:30:34.030 [2024-10-01 20:28:29.184769] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:30:34.030 [2024-10-01 20:28:29.184775] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:34.030 [2024-10-01 20:28:29.184781] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:30:34.030 [2024-10-01 20:28:29.184787] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:30:34.030 [2024-10-01 20:28:29.184792] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:34.030 [2024-10-01 20:28:29.184799] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:30:34.030 [2024-10-01 20:28:29.184804] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:30:34.030 [2024-10-01 20:28:29.184811] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:34.030 [2024-10-01 20:28:29.184816] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:30:34.030 [2024-10-01 20:28:29.184824] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:30:34.030 [2024-10-01 20:28:29.184830] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.25 MiB 00:30:34.030 [2024-10-01 20:28:29.184836] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:30:34.030 [2024-10-01 20:28:29.184842] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:30:34.030 [2024-10-01 20:28:29.184852] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:34.030 [2024-10-01 20:28:29.184857] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:30:34.030 [2024-10-01 20:28:29.184864] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:30:34.030 [2024-10-01 20:28:29.184877] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:34.030 [2024-10-01 20:28:29.184884] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:30:34.030 [2024-10-01 20:28:29.184889] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:30:34.030 [2024-10-01 20:28:29.184895] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:34.030 [2024-10-01 20:28:29.184900] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:30:34.030 [2024-10-01 20:28:29.184909] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:30:34.030 [2024-10-01 20:28:29.184917] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:34.030 [2024-10-01 20:28:29.184924] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:34.030 [2024-10-01 20:28:29.184930] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:30:34.030 [2024-10-01 20:28:29.184938] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:30:34.030 [2024-10-01 20:28:29.184943] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:30:34.030 [2024-10-01 20:28:29.184950] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:30:34.030 [2024-10-01 20:28:29.184955] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:30:34.030 [2024-10-01 20:28:29.184962] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:30:34.030 [2024-10-01 20:28:29.184970] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:30:34.030 [2024-10-01 20:28:29.184979] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:34.030 [2024-10-01 20:28:29.184986] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:30:34.030 [2024-10-01 20:28:29.184993] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:30:34.030 [2024-10-01 20:28:29.184999] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:30:34.030 [2024-10-01 20:28:29.185006] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:30:34.030 [2024-10-01 20:28:29.185012] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:30:34.030 [2024-10-01 20:28:29.185019] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:30:34.030 [2024-10-01 
20:28:29.185024] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:30:34.030 [2024-10-01 20:28:29.185031] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:30:34.030 [2024-10-01 20:28:29.185037] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:30:34.030 [2024-10-01 20:28:29.185045] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:30:34.030 [2024-10-01 20:28:29.185050] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:30:34.030 [2024-10-01 20:28:29.185059] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:30:34.030 [2024-10-01 20:28:29.185064] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:30:34.030 [2024-10-01 20:28:29.185074] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:30:34.030 [2024-10-01 20:28:29.185080] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:30:34.030 [2024-10-01 20:28:29.185088] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:34.030 [2024-10-01 20:28:29.185094] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:30:34.030 [2024-10-01 20:28:29.185101] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:30:34.030 [2024-10-01 20:28:29.185107] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:30:34.030 [2024-10-01 20:28:29.185114] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:30:34.030 [2024-10-01 20:28:29.185120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:34.030 [2024-10-01 20:28:29.185128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:30:34.030 [2024-10-01 20:28:29.185134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.577 ms 00:30:34.030 [2024-10-01 20:28:29.185141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:34.030 [2024-10-01 20:28:29.185181] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
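The layout dump above can be cross-checked against the create call's parameters: 20971520 L2P entries at 4 bytes each is exactly the 80.00 MiB 'l2p' region, and those 20971520 4-KiB logical blocks are the num_blocks the finished ftl0 bdev reports further down. The '--l2p_dram_limit 60' passed to bdev_ftl_create caps how much of that table stays resident, which is presumably what the later "l2p maximum resident size is: 59 (of 60) MiB" notice reflects. The arithmetic, as a quick shell check:

    # Values taken from the FTL layout dump above.
    entries=20971520      # "L2P entries" - one per 4 KiB logical block
    addr_size=4           # "L2P address size: 4" (bytes per entry)
    echo $(( entries * addr_size / 1024 / 1024 ))    # -> 80 (MiB, the l2p region)
    echo $(( entries * 4096 / 1024 / 1024 / 1024 ))  # -> 80 (GiB exposed by ftl0)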
00:30:34.030 [2024-10-01 20:28:29.185191] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:30:36.556 [2024-10-01 20:28:31.522822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:36.556 [2024-10-01 20:28:31.522899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:30:36.556 [2024-10-01 20:28:31.522913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2337.629 ms 00:30:36.556 [2024-10-01 20:28:31.522923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:36.556 [2024-10-01 20:28:31.547114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:36.556 [2024-10-01 20:28:31.547165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:36.556 [2024-10-01 20:28:31.547177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.971 ms 00:30:36.556 [2024-10-01 20:28:31.547188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:36.556 [2024-10-01 20:28:31.547318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:36.556 [2024-10-01 20:28:31.547329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:30:36.556 [2024-10-01 20:28:31.547338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:30:36.556 [2024-10-01 20:28:31.547348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:36.556 [2024-10-01 20:28:31.573548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:36.556 [2024-10-01 20:28:31.573926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:36.556 [2024-10-01 20:28:31.573979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.164 ms 00:30:36.556 [2024-10-01 20:28:31.574025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:36.556 [2024-10-01 20:28:31.574075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:36.556 [2024-10-01 20:28:31.574087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:36.556 [2024-10-01 20:28:31.574095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:30:36.556 [2024-10-01 20:28:31.574103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:36.556 [2024-10-01 20:28:31.574502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:36.556 [2024-10-01 20:28:31.574519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:36.556 [2024-10-01 20:28:31.574527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.307 ms 00:30:36.556 [2024-10-01 20:28:31.574535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:36.556 [2024-10-01 20:28:31.574668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:36.556 [2024-10-01 20:28:31.574676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:36.556 [2024-10-01 20:28:31.574684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.106 ms 00:30:36.556 [2024-10-01 20:28:31.574704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:36.556 [2024-10-01 20:28:31.592223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:36.556 [2024-10-01 20:28:31.592291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:36.556 [2024-10-01 
20:28:31.592310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.493 ms 00:30:36.556 [2024-10-01 20:28:31.592322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:36.556 [2024-10-01 20:28:31.601972] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:30:36.556 [2024-10-01 20:28:31.616282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:36.556 [2024-10-01 20:28:31.616359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:30:36.556 [2024-10-01 20:28:31.616382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.814 ms 00:30:36.556 [2024-10-01 20:28:31.616394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:36.556 [2024-10-01 20:28:31.673498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:36.556 [2024-10-01 20:28:31.673746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:30:36.556 [2024-10-01 20:28:31.673777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 57.037 ms 00:30:36.556 [2024-10-01 20:28:31.673790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:36.556 [2024-10-01 20:28:31.674017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:36.556 [2024-10-01 20:28:31.674036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:30:36.556 [2024-10-01 20:28:31.674054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.160 ms 00:30:36.556 [2024-10-01 20:28:31.674065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:36.556 [2024-10-01 20:28:31.705746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:36.556 [2024-10-01 20:28:31.705821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:30:36.556 [2024-10-01 20:28:31.705842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.581 ms 00:30:36.556 [2024-10-01 20:28:31.705853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:36.556 [2024-10-01 20:28:31.737673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:36.556 [2024-10-01 20:28:31.737779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:30:36.556 [2024-10-01 20:28:31.737802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.721 ms 00:30:36.556 [2024-10-01 20:28:31.737814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:36.556 [2024-10-01 20:28:31.738618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:36.556 [2024-10-01 20:28:31.738659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:30:36.556 [2024-10-01 20:28:31.738677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.718 ms 00:30:36.556 [2024-10-01 20:28:31.738708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:36.815 [2024-10-01 20:28:31.824587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:36.815 [2024-10-01 20:28:31.824682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:30:36.815 [2024-10-01 20:28:31.824732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 85.760 ms 00:30:36.815 [2024-10-01 20:28:31.824744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:36.815 [2024-10-01 
20:28:31.862654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:36.815 [2024-10-01 20:28:31.862754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:30:36.815 [2024-10-01 20:28:31.862779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.717 ms 00:30:36.815 [2024-10-01 20:28:31.862792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:36.815 [2024-10-01 20:28:31.884876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:36.815 [2024-10-01 20:28:31.885096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:30:36.815 [2024-10-01 20:28:31.885118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.958 ms 00:30:36.815 [2024-10-01 20:28:31.885125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:36.815 [2024-10-01 20:28:31.905749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:36.815 [2024-10-01 20:28:31.905802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:30:36.815 [2024-10-01 20:28:31.905815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.571 ms 00:30:36.815 [2024-10-01 20:28:31.905822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:36.815 [2024-10-01 20:28:31.905888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:36.815 [2024-10-01 20:28:31.905896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:30:36.815 [2024-10-01 20:28:31.905908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:30:36.815 [2024-10-01 20:28:31.905914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:36.815 [2024-10-01 20:28:31.906003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:36.815 [2024-10-01 20:28:31.906011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:30:36.815 [2024-10-01 20:28:31.906021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:30:36.815 [2024-10-01 20:28:31.906027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:36.815 [2024-10-01 20:28:31.906866] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2734.019 ms, result 0 00:30:36.815 { 00:30:36.815 "name": "ftl0", 00:30:36.815 "uuid": "016fcec1-9118-4aef-89e7-3aa7561aea62" 00:30:36.815 } 00:30:36.815 20:28:31 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:30:36.815 20:28:31 ftl.ftl_fio_basic -- common/autotest_common.sh@899 -- # local bdev_name=ftl0 00:30:36.815 20:28:31 ftl.ftl_fio_basic -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:30:36.815 20:28:31 ftl.ftl_fio_basic -- common/autotest_common.sh@901 -- # local i 00:30:36.815 20:28:31 ftl.ftl_fio_basic -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:30:36.815 20:28:31 ftl.ftl_fio_basic -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:30:36.815 20:28:31 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:30:37.073 20:28:32 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:30:37.073 [ 00:30:37.073 { 00:30:37.073 "name": "ftl0", 00:30:37.073 "aliases": [ 00:30:37.073 "016fcec1-9118-4aef-89e7-3aa7561aea62" 00:30:37.073 ], 00:30:37.073 "product_name": "FTL 
disk", 00:30:37.073 "block_size": 4096, 00:30:37.073 "num_blocks": 20971520, 00:30:37.073 "uuid": "016fcec1-9118-4aef-89e7-3aa7561aea62", 00:30:37.073 "assigned_rate_limits": { 00:30:37.073 "rw_ios_per_sec": 0, 00:30:37.073 "rw_mbytes_per_sec": 0, 00:30:37.073 "r_mbytes_per_sec": 0, 00:30:37.073 "w_mbytes_per_sec": 0 00:30:37.073 }, 00:30:37.073 "claimed": false, 00:30:37.073 "zoned": false, 00:30:37.073 "supported_io_types": { 00:30:37.073 "read": true, 00:30:37.073 "write": true, 00:30:37.073 "unmap": true, 00:30:37.073 "flush": true, 00:30:37.073 "reset": false, 00:30:37.073 "nvme_admin": false, 00:30:37.073 "nvme_io": false, 00:30:37.073 "nvme_io_md": false, 00:30:37.073 "write_zeroes": true, 00:30:37.073 "zcopy": false, 00:30:37.073 "get_zone_info": false, 00:30:37.073 "zone_management": false, 00:30:37.073 "zone_append": false, 00:30:37.073 "compare": false, 00:30:37.073 "compare_and_write": false, 00:30:37.073 "abort": false, 00:30:37.073 "seek_hole": false, 00:30:37.073 "seek_data": false, 00:30:37.073 "copy": false, 00:30:37.073 "nvme_iov_md": false 00:30:37.073 }, 00:30:37.073 "driver_specific": { 00:30:37.073 "ftl": { 00:30:37.073 "base_bdev": "ced35c05-619d-4b64-b858-58087e9bc5d5", 00:30:37.073 "cache": "nvc0n1p0" 00:30:37.073 } 00:30:37.073 } 00:30:37.073 } 00:30:37.073 ] 00:30:37.073 20:28:32 ftl.ftl_fio_basic -- common/autotest_common.sh@907 -- # return 0 00:30:37.073 20:28:32 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:30:37.073 20:28:32 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:30:37.330 20:28:32 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:30:37.330 20:28:32 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:30:37.589 [2024-10-01 20:28:32.575194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:37.589 [2024-10-01 20:28:32.575244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:30:37.589 [2024-10-01 20:28:32.575255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:30:37.589 [2024-10-01 20:28:32.575265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:37.589 [2024-10-01 20:28:32.575294] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:30:37.589 [2024-10-01 20:28:32.577615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:37.589 [2024-10-01 20:28:32.577656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:30:37.589 [2024-10-01 20:28:32.577668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.302 ms 00:30:37.589 [2024-10-01 20:28:32.577677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:37.589 [2024-10-01 20:28:32.578035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:37.589 [2024-10-01 20:28:32.578047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:30:37.589 [2024-10-01 20:28:32.578056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.317 ms 00:30:37.589 [2024-10-01 20:28:32.578063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:37.589 [2024-10-01 20:28:32.580649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:37.589 [2024-10-01 20:28:32.580674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:30:37.589 
[2024-10-01 20:28:32.580683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.565 ms 00:30:37.589 [2024-10-01 20:28:32.580696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:37.589 [2024-10-01 20:28:32.585550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:37.589 [2024-10-01 20:28:32.585593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:30:37.589 [2024-10-01 20:28:32.585609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.823 ms 00:30:37.589 [2024-10-01 20:28:32.585616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:37.589 [2024-10-01 20:28:32.606229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:37.589 [2024-10-01 20:28:32.606284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:30:37.589 [2024-10-01 20:28:32.606297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.485 ms 00:30:37.589 [2024-10-01 20:28:32.606303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:37.589 [2024-10-01 20:28:32.619103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:37.589 [2024-10-01 20:28:32.619327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:30:37.589 [2024-10-01 20:28:32.619350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.727 ms 00:30:37.589 [2024-10-01 20:28:32.619358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:37.589 [2024-10-01 20:28:32.619539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:37.589 [2024-10-01 20:28:32.619549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:30:37.590 [2024-10-01 20:28:32.619560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.133 ms 00:30:37.590 [2024-10-01 20:28:32.619566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:37.590 [2024-10-01 20:28:32.639616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:37.590 [2024-10-01 20:28:32.639671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:30:37.590 [2024-10-01 20:28:32.639683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.024 ms 00:30:37.590 [2024-10-01 20:28:32.639703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:37.590 [2024-10-01 20:28:32.660169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:37.590 [2024-10-01 20:28:32.660219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:30:37.590 [2024-10-01 20:28:32.660231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.393 ms 00:30:37.590 [2024-10-01 20:28:32.660238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:37.590 [2024-10-01 20:28:32.678979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:37.590 [2024-10-01 20:28:32.679026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:30:37.590 [2024-10-01 20:28:32.679037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.670 ms 00:30:37.590 [2024-10-01 20:28:32.679044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:37.590 [2024-10-01 20:28:32.697562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:37.590 [2024-10-01 20:28:32.697604] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:30:37.590 [2024-10-01 20:28:32.697615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.385 ms 00:30:37.590 [2024-10-01 20:28:32.697622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:37.590 [2024-10-01 20:28:32.697676] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:30:37.590 [2024-10-01 20:28:32.697703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:30:37.590 [2024-10-01 20:28:32.697714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:30:37.590 [2024-10-01 20:28:32.697721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:30:37.590 [2024-10-01 20:28:32.697729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:30:37.590 [2024-10-01 20:28:32.697735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:30:37.590 [2024-10-01 20:28:32.697743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:30:37.590 [2024-10-01 20:28:32.697750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:30:37.590 [2024-10-01 20:28:32.697760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:30:37.590 [2024-10-01 20:28:32.697766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:30:37.590 [2024-10-01 20:28:32.697775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:30:37.590 [2024-10-01 20:28:32.697781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:30:37.590 [2024-10-01 20:28:32.697788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:30:37.590 [2024-10-01 20:28:32.697794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:30:37.590 [2024-10-01 20:28:32.697802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:30:37.590 [2024-10-01 20:28:32.697808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:30:37.590 [2024-10-01 20:28:32.697815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:30:37.590 [2024-10-01 20:28:32.697821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:30:37.590 [2024-10-01 20:28:32.697829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:30:37.590 [2024-10-01 20:28:32.697835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:30:37.590 [2024-10-01 20:28:32.697844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:30:37.590 [2024-10-01 20:28:32.697851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:30:37.590 [2024-10-01 20:28:32.697858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:30:37.590 
[2024-10-01 20:28:32.697864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:30:37.590 [2024-10-01 20:28:32.697873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:30:37.590 [2024-10-01 20:28:32.697879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:30:37.590 [2024-10-01 20:28:32.697886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:30:37.590 [2024-10-01 20:28:32.697892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:30:37.590 [2024-10-01 20:28:32.697899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:30:37.590 [2024-10-01 20:28:32.697910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:30:37.590 [2024-10-01 20:28:32.697918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:30:37.590 [2024-10-01 20:28:32.697925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:30:37.590 [2024-10-01 20:28:32.697932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:30:37.590 [2024-10-01 20:28:32.697938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:30:37.590 [2024-10-01 20:28:32.697945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:30:37.590 [2024-10-01 20:28:32.697951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:30:37.590 [2024-10-01 20:28:32.697959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:30:37.590 [2024-10-01 20:28:32.697966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:30:37.590 [2024-10-01 20:28:32.697974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:30:37.590 [2024-10-01 20:28:32.697980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:30:37.590 [2024-10-01 20:28:32.697988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:30:37.590 [2024-10-01 20:28:32.697994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:30:37.590 [2024-10-01 20:28:32.698003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:30:37.590 [2024-10-01 20:28:32.698009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:30:37.590 [2024-10-01 20:28:32.698016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:30:37.590 [2024-10-01 20:28:32.698022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:30:37.590 [2024-10-01 20:28:32.698030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:30:37.590 [2024-10-01 20:28:32.698036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 
state: free 00:30:37.590 [2024-10-01 20:28:32.698043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:30:37.590 [2024-10-01 20:28:32.698049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:30:37.590 [2024-10-01 20:28:32.698057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:30:37.590 [2024-10-01 20:28:32.698063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:30:37.590 [2024-10-01 20:28:32.698070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:30:37.590 [2024-10-01 20:28:32.698076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:30:37.590 [2024-10-01 20:28:32.698084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:30:37.590 [2024-10-01 20:28:32.698089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:30:37.590 [2024-10-01 20:28:32.698098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:30:37.590 [2024-10-01 20:28:32.698104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:30:37.590 [2024-10-01 20:28:32.698112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:30:37.590 [2024-10-01 20:28:32.698118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:30:37.590 [2024-10-01 20:28:32.698125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:30:37.590 [2024-10-01 20:28:32.698133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:30:37.590 [2024-10-01 20:28:32.698141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:30:37.590 [2024-10-01 20:28:32.698147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:30:37.590 [2024-10-01 20:28:32.698154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:30:37.590 [2024-10-01 20:28:32.698160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:30:37.590 [2024-10-01 20:28:32.698167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:30:37.590 [2024-10-01 20:28:32.698173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:30:37.590 [2024-10-01 20:28:32.698181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:30:37.590 [2024-10-01 20:28:32.698187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:30:37.590 [2024-10-01 20:28:32.698195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:30:37.591 [2024-10-01 20:28:32.698201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:30:37.591 [2024-10-01 20:28:32.698211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 
0 / 261120 wr_cnt: 0 state: free 00:30:37.591 [2024-10-01 20:28:32.698217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:30:37.591 [2024-10-01 20:28:32.698224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:30:37.591 [2024-10-01 20:28:32.698230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:30:37.591 [2024-10-01 20:28:32.698237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:30:37.591 [2024-10-01 20:28:32.698243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:30:37.591 [2024-10-01 20:28:32.698250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:30:37.591 [2024-10-01 20:28:32.698256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:30:37.591 [2024-10-01 20:28:32.698263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:30:37.591 [2024-10-01 20:28:32.698269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:30:37.591 [2024-10-01 20:28:32.698276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:30:37.591 [2024-10-01 20:28:32.698282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:30:37.591 [2024-10-01 20:28:32.698289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:30:37.591 [2024-10-01 20:28:32.698295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:30:37.591 [2024-10-01 20:28:32.698302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:30:37.591 [2024-10-01 20:28:32.698308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:30:37.591 [2024-10-01 20:28:32.698329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:30:37.591 [2024-10-01 20:28:32.698335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:30:37.591 [2024-10-01 20:28:32.698343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:30:37.591 [2024-10-01 20:28:32.698349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:30:37.591 [2024-10-01 20:28:32.698356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:30:37.591 [2024-10-01 20:28:32.698364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:30:37.591 [2024-10-01 20:28:32.698372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:30:37.591 [2024-10-01 20:28:32.698378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:30:37.591 [2024-10-01 20:28:32.698386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:30:37.591 [2024-10-01 20:28:32.698392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:30:37.591 [2024-10-01 20:28:32.698401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:30:37.591 [2024-10-01 20:28:32.698407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:30:37.591 [2024-10-01 20:28:32.698414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:30:37.591 [2024-10-01 20:28:32.698428] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:30:37.591 [2024-10-01 20:28:32.698436] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 016fcec1-9118-4aef-89e7-3aa7561aea62 00:30:37.591 [2024-10-01 20:28:32.698442] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:30:37.591 [2024-10-01 20:28:32.698451] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:30:37.591 [2024-10-01 20:28:32.698457] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:30:37.591 [2024-10-01 20:28:32.698465] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:30:37.591 [2024-10-01 20:28:32.698471] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:30:37.591 [2024-10-01 20:28:32.698478] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:30:37.591 [2024-10-01 20:28:32.698484] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:30:37.591 [2024-10-01 20:28:32.698490] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:30:37.591 [2024-10-01 20:28:32.698496] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:30:37.591 [2024-10-01 20:28:32.698503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:37.591 [2024-10-01 20:28:32.698511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:30:37.591 [2024-10-01 20:28:32.698519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.828 ms 00:30:37.591 [2024-10-01 20:28:32.698525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:37.591 [2024-10-01 20:28:32.708801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:37.591 [2024-10-01 20:28:32.708838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:30:37.591 [2024-10-01 20:28:32.708851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.221 ms 00:30:37.591 [2024-10-01 20:28:32.708858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:37.591 [2024-10-01 20:28:32.709154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:37.591 [2024-10-01 20:28:32.709164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:30:37.591 [2024-10-01 20:28:32.709173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.264 ms 00:30:37.591 [2024-10-01 20:28:32.709178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:37.591 [2024-10-01 20:28:32.744803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:37.591 [2024-10-01 20:28:32.744852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:37.591 [2024-10-01 20:28:32.744864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:37.591 [2024-10-01 20:28:32.744871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
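The trace above is the persist phase of an FTL shutdown driven by bdev_ftl_unload: the L2P, NV cache, valid map, P2L, band info and trim metadata are flushed, the superblock is written with the clean state set, and band/statistics state is dumped; the Rollback entries that continue below then release resources in reverse order of initialization. A minimal shell sketch of the same save-and-unload flow, assuming a running SPDK target exposing an FTL bdev named ftl0 (the ftl.json output path is illustrative, not the harness's):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Capture the bdev subsystem config so the FTL device can be reloaded
    # later; fio.sh wraps the dump in a subsystems array, as echoed above.
    {
      echo '{"subsystems": ['
      "$RPC" save_subsystem_config -n bdev
      echo ']}'
    } > ftl.json   # illustrative path

    # Unloading triggers the "Persist ..." steps traced above before the
    # device detaches from its base and cache bdevs.
    "$RPC" bdev_ftl_unload -b ftl0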
00:30:37.591 [2024-10-01 20:28:32.744934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:37.591 [2024-10-01 20:28:32.744940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:37.591 [2024-10-01 20:28:32.744948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:37.591 [2024-10-01 20:28:32.744955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:37.591 [2024-10-01 20:28:32.745043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:37.591 [2024-10-01 20:28:32.745051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:37.591 [2024-10-01 20:28:32.745060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:37.591 [2024-10-01 20:28:32.745067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:37.591 [2024-10-01 20:28:32.745090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:37.591 [2024-10-01 20:28:32.745098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:37.591 [2024-10-01 20:28:32.745105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:37.591 [2024-10-01 20:28:32.745111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:37.848 [2024-10-01 20:28:32.812480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:37.848 [2024-10-01 20:28:32.812526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:37.848 [2024-10-01 20:28:32.812538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:37.848 [2024-10-01 20:28:32.812545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:37.848 [2024-10-01 20:28:32.864532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:37.848 [2024-10-01 20:28:32.864578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:37.848 [2024-10-01 20:28:32.864588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:37.848 [2024-10-01 20:28:32.864603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:37.848 [2024-10-01 20:28:32.864702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:37.848 [2024-10-01 20:28:32.864711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:37.848 [2024-10-01 20:28:32.864719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:37.848 [2024-10-01 20:28:32.864725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:37.848 [2024-10-01 20:28:32.864773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:37.848 [2024-10-01 20:28:32.864781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:37.848 [2024-10-01 20:28:32.864790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:37.848 [2024-10-01 20:28:32.864796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:37.848 [2024-10-01 20:28:32.864887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:37.848 [2024-10-01 20:28:32.864895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:37.848 [2024-10-01 20:28:32.864903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:37.848 [2024-10-01 
20:28:32.864909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:37.848 [2024-10-01 20:28:32.864946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:37.849 [2024-10-01 20:28:32.864953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:30:37.849 [2024-10-01 20:28:32.864961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:37.849 [2024-10-01 20:28:32.864968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:37.849 [2024-10-01 20:28:32.865007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:37.849 [2024-10-01 20:28:32.865013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:37.849 [2024-10-01 20:28:32.865020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:37.849 [2024-10-01 20:28:32.865026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:37.849 [2024-10-01 20:28:32.865069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:37.849 [2024-10-01 20:28:32.865076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:37.849 [2024-10-01 20:28:32.865086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:37.849 [2024-10-01 20:28:32.865091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:37.849 [2024-10-01 20:28:32.865215] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 290.009 ms, result 0 00:30:37.849 true 00:30:37.849 20:28:32 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 73312 00:30:37.849 20:28:32 ftl.ftl_fio_basic -- common/autotest_common.sh@950 -- # '[' -z 73312 ']' 00:30:37.849 20:28:32 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # kill -0 73312 00:30:37.849 20:28:32 ftl.ftl_fio_basic -- common/autotest_common.sh@955 -- # uname 00:30:37.849 20:28:32 ftl.ftl_fio_basic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:37.849 20:28:32 ftl.ftl_fio_basic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73312 00:30:37.849 20:28:32 ftl.ftl_fio_basic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:30:37.849 killing process with pid 73312 00:30:37.849 20:28:32 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:30:37.849 20:28:32 ftl.ftl_fio_basic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73312' 00:30:37.849 20:28:32 ftl.ftl_fio_basic -- common/autotest_common.sh@969 -- # kill 73312 00:30:37.849 20:28:32 ftl.ftl_fio_basic -- common/autotest_common.sh@974 -- # wait 73312 00:30:45.949 20:28:41 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:30:45.949 20:28:41 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:30:45.949 20:28:41 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:30:45.949 20:28:41 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:45.949 20:28:41 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:30:45.949 20:28:41 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:30:45.949 20:28:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:30:45.949 20:28:41 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:30:45.949 20:28:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:45.949 20:28:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local sanitizers 00:30:45.949 20:28:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:30:45.949 20:28:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # shift 00:30:45.949 20:28:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local asan_lib= 00:30:45.949 20:28:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:45.949 20:28:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # grep libasan 00:30:45.949 20:28:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:45.949 20:28:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:30:46.208 20:28:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:30:46.208 20:28:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:30:46.208 20:28:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # break 00:30:46.208 20:28:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:30:46.208 20:28:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:30:46.208 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:30:46.208 fio-3.35 00:30:46.208 Starting 1 thread 00:30:50.548 00:30:50.548 test: (groupid=0, jobs=1): err= 0: pid=73504: Tue Oct 1 20:28:45 2024 00:30:50.548 read: IOPS=1378, BW=91.6MiB/s (96.0MB/s)(255MiB/2780msec) 00:30:50.548 slat (nsec): min=3066, max=59821, avg=4775.10, stdev=2767.66 00:30:50.548 clat (usec): min=221, max=1275, avg=327.89, stdev=49.97 00:30:50.548 lat (usec): min=237, max=1279, avg=332.67, stdev=50.77 00:30:50.548 clat percentiles (usec): 00:30:50.548 | 1.00th=[ 243], 5.00th=[ 289], 10.00th=[ 302], 20.00th=[ 306], 00:30:50.548 | 30.00th=[ 306], 40.00th=[ 310], 50.00th=[ 314], 60.00th=[ 326], 00:30:50.548 | 70.00th=[ 330], 80.00th=[ 338], 90.00th=[ 371], 95.00th=[ 416], 00:30:50.548 | 99.00th=[ 529], 99.50th=[ 570], 99.90th=[ 693], 99.95th=[ 832], 00:30:50.548 | 99.99th=[ 1270] 00:30:50.548 write: IOPS=1388, BW=92.2MiB/s (96.7MB/s)(256MiB/2777msec); 0 zone resets 00:30:50.548 slat (nsec): min=13805, max=59103, avg=18012.07, stdev=4265.46 00:30:50.548 clat (usec): min=273, max=876, avg=360.35, stdev=62.60 00:30:50.548 lat (usec): min=292, max=893, avg=378.36, stdev=63.01 00:30:50.548 clat percentiles (usec): 00:30:50.548 | 1.00th=[ 310], 5.00th=[ 318], 10.00th=[ 322], 20.00th=[ 326], 00:30:50.548 | 30.00th=[ 326], 40.00th=[ 334], 50.00th=[ 343], 60.00th=[ 351], 00:30:50.548 | 70.00th=[ 359], 80.00th=[ 383], 90.00th=[ 416], 95.00th=[ 474], 00:30:50.548 | 99.00th=[ 660], 99.50th=[ 709], 99.90th=[ 840], 99.95th=[ 857], 00:30:50.548 | 99.99th=[ 873] 00:30:50.548 bw ( KiB/s): min=91392, max=96288, per=99.64%, avg=94084.80, stdev=1800.14, samples=5 00:30:50.548 iops : min= 1344, max= 1416, avg=1383.60, stdev=26.47, samples=5 00:30:50.548 lat (usec) : 250=1.07%, 500=96.35%, 750=2.41%, 1000=0.17% 
00:30:50.548 lat (msec) : 2=0.01% 00:30:50.548 cpu : usr=99.17%, sys=0.11%, ctx=3, majf=0, minf=1169 00:30:50.549 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:50.549 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:50.549 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:50.549 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:50.549 latency : target=0, window=0, percentile=100.00%, depth=1 00:30:50.549 00:30:50.549 Run status group 0 (all jobs): 00:30:50.549 READ: bw=91.6MiB/s (96.0MB/s), 91.6MiB/s-91.6MiB/s (96.0MB/s-96.0MB/s), io=255MiB (267MB), run=2780-2780msec 00:30:50.549 WRITE: bw=92.2MiB/s (96.7MB/s), 92.2MiB/s-92.2MiB/s (96.7MB/s-96.7MB/s), io=256MiB (269MB), run=2777-2777msec 00:30:52.448 ----------------------------------------------------- 00:30:52.448 Suppressions used: 00:30:52.448 count bytes template 00:30:52.448 1 5 /usr/src/fio/parse.c 00:30:52.448 1 8 libtcmalloc_minimal.so 00:30:52.448 1 904 libcrypto.so 00:30:52.448 ----------------------------------------------------- 00:30:52.448 00:30:52.448 20:28:47 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:30:52.448 20:28:47 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:52.448 20:28:47 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:30:52.448 20:28:47 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:30:52.448 20:28:47 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:30:52.448 20:28:47 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:52.448 20:28:47 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:30:52.448 20:28:47 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:30:52.448 20:28:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:30:52.448 20:28:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:30:52.448 20:28:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:52.448 20:28:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local sanitizers 00:30:52.448 20:28:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:30:52.448 20:28:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # shift 00:30:52.448 20:28:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local asan_lib= 00:30:52.448 20:28:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:52.448 20:28:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:30:52.448 20:28:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # grep libasan 00:30:52.448 20:28:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:52.448 20:28:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:30:52.448 20:28:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:30:52.448 20:28:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # break 00:30:52.448 20:28:47 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:30:52.448 20:28:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:30:52.448 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:30:52.448 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:30:52.448 fio-3.35 00:30:52.448 Starting 2 threads 00:31:18.973 00:31:18.973 first_half: (groupid=0, jobs=1): err= 0: pid=73596: Tue Oct 1 20:29:12 2024 00:31:18.973 read: IOPS=2788, BW=10.9MiB/s (11.4MB/s)(255MiB/23381msec) 00:31:18.973 slat (nsec): min=3058, max=33768, avg=4174.88, stdev=1050.00 00:31:18.973 clat (usec): min=677, max=280365, avg=33791.88, stdev=16463.23 00:31:18.973 lat (usec): min=681, max=280370, avg=33796.05, stdev=16463.24 00:31:18.973 clat percentiles (msec): 00:31:18.973 | 1.00th=[ 5], 5.00th=[ 28], 10.00th=[ 29], 20.00th=[ 31], 00:31:18.973 | 30.00th=[ 31], 40.00th=[ 31], 50.00th=[ 32], 60.00th=[ 32], 00:31:18.973 | 70.00th=[ 33], 80.00th=[ 35], 90.00th=[ 37], 95.00th=[ 41], 00:31:18.974 | 99.00th=[ 123], 99.50th=[ 148], 99.90th=[ 215], 99.95th=[ 239], 00:31:18.974 | 99.99th=[ 275] 00:31:18.974 write: IOPS=3870, BW=15.1MiB/s (15.9MB/s)(256MiB/16932msec); 0 zone resets 00:31:18.974 slat (usec): min=3, max=928, avg= 6.53, stdev= 6.71 00:31:18.974 clat (usec): min=353, max=93327, avg=12031.40, stdev=21563.08 00:31:18.974 lat (usec): min=358, max=93333, avg=12037.93, stdev=21563.25 00:31:18.974 clat percentiles (usec): 00:31:18.974 | 1.00th=[ 725], 5.00th=[ 906], 10.00th=[ 1029], 20.00th=[ 1205], 00:31:18.974 | 30.00th=[ 1401], 40.00th=[ 1958], 50.00th=[ 3687], 60.00th=[ 5342], 00:31:18.974 | 70.00th=[ 6587], 80.00th=[11338], 90.00th=[61080], 95.00th=[70779], 00:31:18.974 | 99.00th=[80217], 99.50th=[82314], 99.90th=[87557], 99.95th=[90702], 00:31:18.974 | 99.99th=[92799] 00:31:18.974 bw ( KiB/s): min= 960, max=48992, per=85.40%, avg=23827.27, stdev=15798.02, samples=22 00:31:18.974 iops : min= 240, max=12248, avg=5956.82, stdev=3949.51, samples=22 00:31:18.974 lat (usec) : 500=0.02%, 750=0.63%, 1000=3.72% 00:31:18.974 lat (msec) : 2=16.05%, 4=5.74%, 10=13.65%, 20=5.57%, 50=47.21% 00:31:18.974 lat (msec) : 100=6.53%, 250=0.86%, 500=0.02% 00:31:18.974 cpu : usr=99.14%, sys=0.15%, ctx=51, majf=0, minf=5549 00:31:18.974 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:31:18.974 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:18.974 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:18.974 issued rwts: total=65196,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:18.974 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:18.974 second_half: (groupid=0, jobs=1): err= 0: pid=73597: Tue Oct 1 20:29:12 2024 00:31:18.974 read: IOPS=2773, BW=10.8MiB/s (11.4MB/s)(255MiB/23509msec) 00:31:18.974 slat (nsec): min=3060, max=58750, avg=4353.66, stdev=1197.90 00:31:18.974 clat (usec): min=586, max=283768, avg=33049.55, stdev=15482.39 00:31:18.974 lat (usec): min=591, max=283773, avg=33053.91, stdev=15482.49 00:31:18.974 clat percentiles (msec): 00:31:18.974 | 1.00th=[ 7], 5.00th=[ 25], 10.00th=[ 28], 20.00th=[ 31], 00:31:18.974 | 30.00th=[ 31], 40.00th=[ 31], 50.00th=[ 32], 60.00th=[ 32], 00:31:18.974 | 70.00th=[ 33], 80.00th=[ 35], 90.00th=[ 37], 
95.00th=[ 41], 00:31:18.974 | 99.00th=[ 120], 99.50th=[ 150], 99.90th=[ 180], 99.95th=[ 194], 00:31:18.974 | 99.99th=[ 279] 00:31:18.974 write: IOPS=3487, BW=13.6MiB/s (14.3MB/s)(256MiB/18791msec); 0 zone resets 00:31:18.974 slat (usec): min=3, max=764, avg= 7.14, stdev= 5.41 00:31:18.974 clat (usec): min=332, max=93876, avg=13011.65, stdev=21795.86 00:31:18.974 lat (usec): min=347, max=93883, avg=13018.79, stdev=21796.07 00:31:18.974 clat percentiles (usec): 00:31:18.974 | 1.00th=[ 676], 5.00th=[ 832], 10.00th=[ 988], 20.00th=[ 1205], 00:31:18.974 | 30.00th=[ 1483], 40.00th=[ 3064], 50.00th=[ 4359], 60.00th=[ 5538], 00:31:18.974 | 70.00th=[ 8848], 80.00th=[12911], 90.00th=[61604], 95.00th=[71828], 00:31:18.974 | 99.00th=[80217], 99.50th=[83362], 99.90th=[89654], 99.95th=[90702], 00:31:18.974 | 99.99th=[92799] 00:31:18.974 bw ( KiB/s): min= 352, max=41136, per=72.27%, avg=20164.58, stdev=11974.09, samples=26 00:31:18.974 iops : min= 88, max=10284, avg=5041.12, stdev=2993.49, samples=26 00:31:18.974 lat (usec) : 500=0.02%, 750=1.30%, 1000=4.01% 00:31:18.974 lat (msec) : 2=12.78%, 4=5.31%, 10=14.99%, 20=6.66%, 50=47.58% 00:31:18.974 lat (msec) : 100=6.66%, 250=0.68%, 500=0.01% 00:31:18.974 cpu : usr=99.35%, sys=0.13%, ctx=33, majf=0, minf=5552 00:31:18.974 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:31:18.974 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:18.974 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:18.974 issued rwts: total=65212,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:18.974 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:18.974 00:31:18.974 Run status group 0 (all jobs): 00:31:18.974 READ: bw=21.7MiB/s (22.7MB/s), 10.8MiB/s-10.9MiB/s (11.4MB/s-11.4MB/s), io=509MiB (534MB), run=23381-23509msec 00:31:18.974 WRITE: bw=27.2MiB/s (28.6MB/s), 13.6MiB/s-15.1MiB/s (14.3MB/s-15.9MB/s), io=512MiB (537MB), run=16932-18791msec 00:31:19.231 ----------------------------------------------------- 00:31:19.231 Suppressions used: 00:31:19.231 count bytes template 00:31:19.231 2 10 /usr/src/fio/parse.c 00:31:19.231 1 96 /usr/src/fio/iolog.c 00:31:19.231 1 8 libtcmalloc_minimal.so 00:31:19.231 1 904 libcrypto.so 00:31:19.231 ----------------------------------------------------- 00:31:19.231 00:31:19.231 20:29:14 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:31:19.231 20:29:14 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:19.231 20:29:14 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:31:19.231 20:29:14 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:31:19.231 20:29:14 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:31:19.231 20:29:14 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:19.231 20:29:14 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:31:19.231 20:29:14 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:31:19.231 20:29:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:31:19.231 20:29:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:31:19.231 20:29:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 
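Before the depth-128 pass starts below, note what the fio_plugin wrapper being traced here does: the SPDK fio engine in this run is built with ASan, and the sanitizer runtime has to be the first object loaded into fio's address space, so the wrapper resolves the libasan that the plugin links against and preloads it ahead of the plugin itself. A sketch of that resolution, with the job file name as a placeholder:

    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev

    # ldd prints lines like "libasan.so.8 => /usr/lib64/libasan.so.8 (...)",
    # so the third field is the resolved runtime path.
    asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')

    # Preload the sanitizer runtime first, then the bdev engine, then run fio.
    LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio job.fio   # placeholder job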
00:31:19.231 20:29:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local sanitizers 00:31:19.231 20:29:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:31:19.231 20:29:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # shift 00:31:19.231 20:29:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local asan_lib= 00:31:19.231 20:29:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:19.231 20:29:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:31:19.231 20:29:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # grep libasan 00:31:19.231 20:29:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:19.231 20:29:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:31:19.231 20:29:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:31:19.231 20:29:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # break 00:31:19.231 20:29:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:31:19.231 20:29:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:31:19.488 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:31:19.488 fio-3.35 00:31:19.488 Starting 1 thread 00:31:34.384 00:31:34.384 test: (groupid=0, jobs=1): err= 0: pid=73915: Tue Oct 1 20:29:28 2024 00:31:34.384 read: IOPS=7966, BW=31.1MiB/s (32.6MB/s)(255MiB/8185msec) 00:31:34.384 slat (nsec): min=3049, max=25501, avg=3662.15, stdev=778.15 00:31:34.384 clat (usec): min=518, max=33474, avg=16060.84, stdev=1857.07 00:31:34.384 lat (usec): min=530, max=33477, avg=16064.50, stdev=1857.09 00:31:34.384 clat percentiles (usec): 00:31:34.384 | 1.00th=[13698], 5.00th=[13960], 10.00th=[14484], 20.00th=[15139], 00:31:34.384 | 30.00th=[15270], 40.00th=[15533], 50.00th=[15795], 60.00th=[15926], 00:31:34.384 | 70.00th=[16057], 80.00th=[16319], 90.00th=[17695], 95.00th=[20317], 00:31:34.384 | 99.00th=[23200], 99.50th=[24249], 99.90th=[26346], 99.95th=[28443], 00:31:34.384 | 99.99th=[32113] 00:31:34.384 write: IOPS=15.7k, BW=61.2MiB/s (64.2MB/s)(256MiB/4183msec); 0 zone resets 00:31:34.384 slat (usec): min=4, max=255, avg= 7.59, stdev= 3.62 00:31:34.384 clat (usec): min=468, max=62114, avg=8124.14, stdev=10250.71 00:31:34.384 lat (usec): min=474, max=62120, avg=8131.73, stdev=10250.57 00:31:34.384 clat percentiles (usec): 00:31:34.384 | 1.00th=[ 635], 5.00th=[ 758], 10.00th=[ 865], 20.00th=[ 1004], 00:31:34.384 | 30.00th=[ 1156], 40.00th=[ 1582], 50.00th=[ 5014], 60.00th=[ 6194], 00:31:34.384 | 70.00th=[ 7504], 80.00th=[ 8848], 90.00th=[28967], 95.00th=[31589], 00:31:34.384 | 99.00th=[37487], 99.50th=[39060], 99.90th=[46400], 99.95th=[50594], 00:31:34.384 | 99.99th=[60031] 00:31:34.384 bw ( KiB/s): min=15760, max=93104, per=92.96%, avg=58254.22, stdev=20099.49, samples=9 00:31:34.384 iops : min= 3940, max=23276, avg=14563.56, stdev=5024.87, samples=9 00:31:34.384 lat (usec) : 500=0.01%, 750=2.35%, 1000=7.43% 00:31:34.384 lat (msec) : 2=10.82%, 4=0.96%, 10=19.97%, 20=47.83%, 50=10.60% 00:31:34.384 lat (msec) : 100=0.03% 00:31:34.384 cpu : usr=99.04%, sys=0.23%, 
ctx=23, majf=0, minf=5565 00:31:34.384 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:31:34.384 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:34.384 complete : 0=0.0%, 4=99.8%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:34.384 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:34.384 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:34.384 00:31:34.384 Run status group 0 (all jobs): 00:31:34.384 READ: bw=31.1MiB/s (32.6MB/s), 31.1MiB/s-31.1MiB/s (32.6MB/s-32.6MB/s), io=255MiB (267MB), run=8185-8185msec 00:31:34.384 WRITE: bw=61.2MiB/s (64.2MB/s), 61.2MiB/s-61.2MiB/s (64.2MB/s-64.2MB/s), io=256MiB (268MB), run=4183-4183msec 00:31:35.414 ----------------------------------------------------- 00:31:35.414 Suppressions used: 00:31:35.414 count bytes template 00:31:35.414 1 5 /usr/src/fio/parse.c 00:31:35.414 2 192 /usr/src/fio/iolog.c 00:31:35.414 1 8 libtcmalloc_minimal.so 00:31:35.415 1 904 libcrypto.so 00:31:35.415 ----------------------------------------------------- 00:31:35.415 00:31:35.415 20:29:30 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:31:35.415 20:29:30 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:35.415 20:29:30 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:31:35.415 20:29:30 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:31:35.415 20:29:30 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:31:35.415 Remove shared memory files 00:31:35.415 20:29:30 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:31:35.415 20:29:30 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:31:35.415 20:29:30 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:31:35.415 20:29:30 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid57444 /dev/shm/spdk_tgt_trace.pid72215 00:31:35.415 20:29:30 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:31:35.415 20:29:30 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:31:35.415 00:31:35.415 real 1m4.725s 00:31:35.415 user 2m7.617s 00:31:35.415 sys 0m17.247s 00:31:35.415 20:29:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:35.415 20:29:30 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:31:35.415 ************************************ 00:31:35.415 END TEST ftl_fio_basic 00:31:35.415 ************************************ 00:31:35.415 20:29:30 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:31:35.415 20:29:30 ftl -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:31:35.415 20:29:30 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:35.415 20:29:30 ftl -- common/autotest_common.sh@10 -- # set +x 00:31:35.415 ************************************ 00:31:35.415 START TEST ftl_bdevperf 00:31:35.415 ************************************ 00:31:35.415 20:29:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:31:35.415 * Looking for test storage... 
00:31:35.415 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:31:35.415 20:29:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:31:35.415 20:29:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1681 -- # lcov --version 00:31:35.415 20:29:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:31:35.415 20:29:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:31:35.415 20:29:30 ftl.ftl_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:35.415 20:29:30 ftl.ftl_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:35.415 20:29:30 ftl.ftl_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:35.415 20:29:30 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:31:35.415 20:29:30 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:31:35.415 20:29:30 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:31:35.415 20:29:30 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:31:35.415 20:29:30 ftl.ftl_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:31:35.415 20:29:30 ftl.ftl_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:31:35.415 20:29:30 ftl.ftl_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:31:35.415 20:29:30 ftl.ftl_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:35.415 20:29:30 ftl.ftl_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:31:35.415 20:29:30 ftl.ftl_bdevperf -- scripts/common.sh@345 -- # : 1 00:31:35.415 20:29:30 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:35.415 20:29:30 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:35.415 20:29:30 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:31:35.415 20:29:30 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=1 00:31:35.415 20:29:30 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:35.415 20:29:30 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 1 00:31:35.415 20:29:30 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:31:35.415 20:29:30 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:31:35.415 20:29:30 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=2 00:31:35.415 20:29:30 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:35.415 20:29:30 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 2 00:31:35.415 20:29:30 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:31:35.415 20:29:30 ftl.ftl_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:35.415 20:29:30 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:35.415 20:29:30 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # return 0 00:31:35.415 20:29:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:35.415 20:29:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:31:35.415 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:35.415 --rc genhtml_branch_coverage=1 00:31:35.415 --rc genhtml_function_coverage=1 00:31:35.415 --rc genhtml_legend=1 00:31:35.415 --rc geninfo_all_blocks=1 00:31:35.415 --rc geninfo_unexecuted_blocks=1 00:31:35.415 00:31:35.415 ' 00:31:35.415 20:29:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:31:35.415 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:35.415 --rc genhtml_branch_coverage=1 00:31:35.415 
--rc genhtml_function_coverage=1 00:31:35.415 --rc genhtml_legend=1 00:31:35.415 --rc geninfo_all_blocks=1 00:31:35.415 --rc geninfo_unexecuted_blocks=1 00:31:35.415 00:31:35.415 ' 00:31:35.415 20:29:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:31:35.415 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:35.415 --rc genhtml_branch_coverage=1 00:31:35.415 --rc genhtml_function_coverage=1 00:31:35.415 --rc genhtml_legend=1 00:31:35.415 --rc geninfo_all_blocks=1 00:31:35.415 --rc geninfo_unexecuted_blocks=1 00:31:35.415 00:31:35.415 ' 00:31:35.415 20:29:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:31:35.415 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:35.415 --rc genhtml_branch_coverage=1 00:31:35.415 --rc genhtml_function_coverage=1 00:31:35.415 --rc genhtml_legend=1 00:31:35.415 --rc geninfo_all_blocks=1 00:31:35.415 --rc geninfo_unexecuted_blocks=1 00:31:35.415 00:31:35.415 ' 00:31:35.415 20:29:30 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:31:35.415 20:29:30 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:31:35.415 20:29:30 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:31:35.415 20:29:30 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:31:35.415 20:29:30 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:31:35.415 20:29:30 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:31:35.415 20:29:30 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:35.415 20:29:30 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:31:35.415 20:29:30 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:31:35.415 20:29:30 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:35.415 20:29:30 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:35.415 20:29:30 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:31:35.415 20:29:30 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:31:35.415 20:29:30 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:31:35.415 20:29:30 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:31:35.415 20:29:30 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:31:35.415 20:29:30 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:31:35.415 20:29:30 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:35.415 20:29:30 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:35.415 20:29:30 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:31:35.415 20:29:30 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:31:35.415 20:29:30 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:31:35.415 20:29:30 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:31:35.415 20:29:30 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:31:35.415 20:29:30 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:31:35.415 20:29:30 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:31:35.415 20:29:30 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:31:35.415 20:29:30 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:35.415 20:29:30 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:35.415 20:29:30 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:31:35.415 20:29:30 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:31:35.415 20:29:30 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:31:35.415 20:29:30 ftl.ftl_bdevperf -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:35.415 20:29:30 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:31:35.415 20:29:30 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # bdevperf_pid=74142 00:31:35.415 20:29:30 ftl.ftl_bdevperf -- ftl/bdevperf.sh@20 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:31:35.415 20:29:30 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # waitforlisten 74142 00:31:35.415 20:29:30 ftl.ftl_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 74142 ']' 00:31:35.415 20:29:30 ftl.ftl_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:35.415 20:29:30 ftl.ftl_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:35.415 20:29:30 ftl.ftl_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:35.415 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:35.415 20:29:30 ftl.ftl_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:35.416 20:29:30 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:35.416 20:29:30 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:31:35.416 [2024-10-01 20:29:30.626242] Starting SPDK v25.01-pre git sha1 0c2005fb5 / DPDK 24.03.0 initialization... 
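The bdevperf bring-up traced here launches the app with -z, which holds the actual test until triggered over RPC, and -T ftl0 to scope the run to the FTL bdev; the harness then uses waitforlisten to block until the target's RPC socket answers before any bdevs are configured. A rough stand-in for that wait, assuming the default /var/tmp/spdk.sock RPC socket (the polling loop below is an illustrative substitute for the harness's waitforlisten helper):

    BDEVPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    "$BDEVPERF" -z -T ftl0 &
    bdevperf_pid=$!

    # Poll until the RPC server responds; rpc_get_methods is a cheap query
    # that succeeds as soon as the app's RPC listener is up.
    until "$RPC" rpc_get_methods >/dev/null 2>&1; do
      sleep 0.2
    done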
00:31:35.673 [2024-10-01 20:29:30.627141] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74142 ] 00:31:35.673 [2024-10-01 20:29:30.775271] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:35.931 [2024-10-01 20:29:30.975354] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:31:36.497 20:29:31 ftl.ftl_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:36.497 20:29:31 ftl.ftl_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:31:36.497 20:29:31 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:31:36.497 20:29:31 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:31:36.497 20:29:31 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:31:36.497 20:29:31 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:31:36.497 20:29:31 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:31:36.497 20:29:31 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:31:36.755 20:29:31 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:31:36.755 20:29:31 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:31:36.755 20:29:31 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:31:36.755 20:29:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:31:36.755 20:29:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:31:36.755 20:29:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:31:36.755 20:29:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:31:36.755 20:29:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:31:37.013 20:29:32 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:31:37.013 { 00:31:37.013 "name": "nvme0n1", 00:31:37.013 "aliases": [ 00:31:37.013 "550d2362-258d-4ba0-a964-55d506b7603c" 00:31:37.013 ], 00:31:37.013 "product_name": "NVMe disk", 00:31:37.013 "block_size": 4096, 00:31:37.013 "num_blocks": 1310720, 00:31:37.013 "uuid": "550d2362-258d-4ba0-a964-55d506b7603c", 00:31:37.013 "numa_id": -1, 00:31:37.013 "assigned_rate_limits": { 00:31:37.013 "rw_ios_per_sec": 0, 00:31:37.013 "rw_mbytes_per_sec": 0, 00:31:37.013 "r_mbytes_per_sec": 0, 00:31:37.013 "w_mbytes_per_sec": 0 00:31:37.013 }, 00:31:37.013 "claimed": true, 00:31:37.013 "claim_type": "read_many_write_one", 00:31:37.013 "zoned": false, 00:31:37.013 "supported_io_types": { 00:31:37.013 "read": true, 00:31:37.013 "write": true, 00:31:37.013 "unmap": true, 00:31:37.013 "flush": true, 00:31:37.013 "reset": true, 00:31:37.013 "nvme_admin": true, 00:31:37.013 "nvme_io": true, 00:31:37.013 "nvme_io_md": false, 00:31:37.013 "write_zeroes": true, 00:31:37.013 "zcopy": false, 00:31:37.013 "get_zone_info": false, 00:31:37.013 "zone_management": false, 00:31:37.013 "zone_append": false, 00:31:37.013 "compare": true, 00:31:37.013 "compare_and_write": false, 00:31:37.013 "abort": true, 00:31:37.013 "seek_hole": false, 00:31:37.013 "seek_data": false, 00:31:37.013 "copy": true, 00:31:37.013 "nvme_iov_md": false 00:31:37.013 }, 00:31:37.013 "driver_specific": { 00:31:37.013 
"nvme": [ 00:31:37.013 { 00:31:37.013 "pci_address": "0000:00:11.0", 00:31:37.013 "trid": { 00:31:37.013 "trtype": "PCIe", 00:31:37.013 "traddr": "0000:00:11.0" 00:31:37.013 }, 00:31:37.013 "ctrlr_data": { 00:31:37.013 "cntlid": 0, 00:31:37.013 "vendor_id": "0x1b36", 00:31:37.013 "model_number": "QEMU NVMe Ctrl", 00:31:37.013 "serial_number": "12341", 00:31:37.013 "firmware_revision": "8.0.0", 00:31:37.013 "subnqn": "nqn.2019-08.org.qemu:12341", 00:31:37.013 "oacs": { 00:31:37.013 "security": 0, 00:31:37.013 "format": 1, 00:31:37.013 "firmware": 0, 00:31:37.013 "ns_manage": 1 00:31:37.013 }, 00:31:37.014 "multi_ctrlr": false, 00:31:37.014 "ana_reporting": false 00:31:37.014 }, 00:31:37.014 "vs": { 00:31:37.014 "nvme_version": "1.4" 00:31:37.014 }, 00:31:37.014 "ns_data": { 00:31:37.014 "id": 1, 00:31:37.014 "can_share": false 00:31:37.014 } 00:31:37.014 } 00:31:37.014 ], 00:31:37.014 "mp_policy": "active_passive" 00:31:37.014 } 00:31:37.014 } 00:31:37.014 ]' 00:31:37.014 20:29:32 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:31:37.014 20:29:32 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:31:37.014 20:29:32 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:31:37.014 20:29:32 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=1310720 00:31:37.014 20:29:32 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:31:37.014 20:29:32 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 5120 00:31:37.014 20:29:32 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:31:37.014 20:29:32 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:31:37.014 20:29:32 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:31:37.014 20:29:32 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:31:37.014 20:29:32 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:31:37.272 20:29:32 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=c4daf069-23bc-4604-871d-3454e2158b2a 00:31:37.272 20:29:32 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:31:37.272 20:29:32 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u c4daf069-23bc-4604-871d-3454e2158b2a 00:31:37.530 20:29:32 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:31:37.789 20:29:32 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=c70c68c5-66f9-400e-9998-c1dd0b734f2e 00:31:37.789 20:29:32 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u c70c68c5-66f9-400e-9998-c1dd0b734f2e 00:31:38.047 20:29:33 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # split_bdev=302d8ad6-6ce5-4d09-b715-7d43bddae68e 00:31:38.047 20:29:33 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_nv_cache_bdev nvc0 0000:00:10.0 302d8ad6-6ce5-4d09-b715-7d43bddae68e 00:31:38.047 20:29:33 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:31:38.047 20:29:33 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:31:38.047 20:29:33 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=302d8ad6-6ce5-4d09-b715-7d43bddae68e 00:31:38.047 20:29:33 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:31:38.047 20:29:33 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size 302d8ad6-6ce5-4d09-b715-7d43bddae68e 00:31:38.047 20:29:33 
ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=302d8ad6-6ce5-4d09-b715-7d43bddae68e 00:31:38.047 20:29:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:31:38.047 20:29:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:31:38.047 20:29:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:31:38.047 20:29:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 302d8ad6-6ce5-4d09-b715-7d43bddae68e 00:31:38.047 20:29:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:31:38.047 { 00:31:38.047 "name": "302d8ad6-6ce5-4d09-b715-7d43bddae68e", 00:31:38.047 "aliases": [ 00:31:38.047 "lvs/nvme0n1p0" 00:31:38.047 ], 00:31:38.047 "product_name": "Logical Volume", 00:31:38.047 "block_size": 4096, 00:31:38.047 "num_blocks": 26476544, 00:31:38.047 "uuid": "302d8ad6-6ce5-4d09-b715-7d43bddae68e", 00:31:38.047 "assigned_rate_limits": { 00:31:38.048 "rw_ios_per_sec": 0, 00:31:38.048 "rw_mbytes_per_sec": 0, 00:31:38.048 "r_mbytes_per_sec": 0, 00:31:38.048 "w_mbytes_per_sec": 0 00:31:38.048 }, 00:31:38.048 "claimed": false, 00:31:38.048 "zoned": false, 00:31:38.048 "supported_io_types": { 00:31:38.048 "read": true, 00:31:38.048 "write": true, 00:31:38.048 "unmap": true, 00:31:38.048 "flush": false, 00:31:38.048 "reset": true, 00:31:38.048 "nvme_admin": false, 00:31:38.048 "nvme_io": false, 00:31:38.048 "nvme_io_md": false, 00:31:38.048 "write_zeroes": true, 00:31:38.048 "zcopy": false, 00:31:38.048 "get_zone_info": false, 00:31:38.048 "zone_management": false, 00:31:38.048 "zone_append": false, 00:31:38.048 "compare": false, 00:31:38.048 "compare_and_write": false, 00:31:38.048 "abort": false, 00:31:38.048 "seek_hole": true, 00:31:38.048 "seek_data": true, 00:31:38.048 "copy": false, 00:31:38.048 "nvme_iov_md": false 00:31:38.048 }, 00:31:38.048 "driver_specific": { 00:31:38.048 "lvol": { 00:31:38.048 "lvol_store_uuid": "c70c68c5-66f9-400e-9998-c1dd0b734f2e", 00:31:38.048 "base_bdev": "nvme0n1", 00:31:38.048 "thin_provision": true, 00:31:38.048 "num_allocated_clusters": 0, 00:31:38.048 "snapshot": false, 00:31:38.048 "clone": false, 00:31:38.048 "esnap_clone": false 00:31:38.048 } 00:31:38.048 } 00:31:38.048 } 00:31:38.048 ]' 00:31:38.048 20:29:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:31:38.304 20:29:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:31:38.304 20:29:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:31:38.304 20:29:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=26476544 00:31:38.304 20:29:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:31:38.304 20:29:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 103424 00:31:38.304 20:29:33 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:31:38.304 20:29:33 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:31:38.304 20:29:33 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:31:38.560 20:29:33 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:31:38.560 20:29:33 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:31:38.560 20:29:33 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size 302d8ad6-6ce5-4d09-b715-7d43bddae68e 00:31:38.560 20:29:33 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1378 -- # local bdev_name=302d8ad6-6ce5-4d09-b715-7d43bddae68e 00:31:38.560 20:29:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:31:38.560 20:29:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:31:38.560 20:29:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:31:38.560 20:29:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 302d8ad6-6ce5-4d09-b715-7d43bddae68e 00:31:38.828 20:29:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:31:38.828 { 00:31:38.828 "name": "302d8ad6-6ce5-4d09-b715-7d43bddae68e", 00:31:38.828 "aliases": [ 00:31:38.828 "lvs/nvme0n1p0" 00:31:38.828 ], 00:31:38.829 "product_name": "Logical Volume", 00:31:38.829 "block_size": 4096, 00:31:38.829 "num_blocks": 26476544, 00:31:38.829 "uuid": "302d8ad6-6ce5-4d09-b715-7d43bddae68e", 00:31:38.829 "assigned_rate_limits": { 00:31:38.829 "rw_ios_per_sec": 0, 00:31:38.829 "rw_mbytes_per_sec": 0, 00:31:38.829 "r_mbytes_per_sec": 0, 00:31:38.829 "w_mbytes_per_sec": 0 00:31:38.829 }, 00:31:38.829 "claimed": false, 00:31:38.829 "zoned": false, 00:31:38.829 "supported_io_types": { 00:31:38.829 "read": true, 00:31:38.829 "write": true, 00:31:38.829 "unmap": true, 00:31:38.829 "flush": false, 00:31:38.829 "reset": true, 00:31:38.829 "nvme_admin": false, 00:31:38.829 "nvme_io": false, 00:31:38.829 "nvme_io_md": false, 00:31:38.829 "write_zeroes": true, 00:31:38.829 "zcopy": false, 00:31:38.829 "get_zone_info": false, 00:31:38.829 "zone_management": false, 00:31:38.829 "zone_append": false, 00:31:38.829 "compare": false, 00:31:38.829 "compare_and_write": false, 00:31:38.829 "abort": false, 00:31:38.829 "seek_hole": true, 00:31:38.829 "seek_data": true, 00:31:38.829 "copy": false, 00:31:38.829 "nvme_iov_md": false 00:31:38.829 }, 00:31:38.829 "driver_specific": { 00:31:38.829 "lvol": { 00:31:38.829 "lvol_store_uuid": "c70c68c5-66f9-400e-9998-c1dd0b734f2e", 00:31:38.829 "base_bdev": "nvme0n1", 00:31:38.829 "thin_provision": true, 00:31:38.829 "num_allocated_clusters": 0, 00:31:38.829 "snapshot": false, 00:31:38.829 "clone": false, 00:31:38.829 "esnap_clone": false 00:31:38.829 } 00:31:38.829 } 00:31:38.829 } 00:31:38.829 ]' 00:31:38.829 20:29:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:31:38.829 20:29:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:31:38.829 20:29:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:31:38.829 20:29:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=26476544 00:31:38.829 20:29:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:31:38.829 20:29:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 103424 00:31:38.829 20:29:33 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:31:38.829 20:29:33 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:31:38.829 20:29:34 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # nv_cache=nvc0n1p0 00:31:38.829 20:29:34 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # get_bdev_size 302d8ad6-6ce5-4d09-b715-7d43bddae68e 00:31:38.829 20:29:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=302d8ad6-6ce5-4d09-b715-7d43bddae68e 00:31:38.829 20:29:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:31:38.829 20:29:34 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1380 -- # local bs 00:31:38.829 20:29:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:31:38.829 20:29:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 302d8ad6-6ce5-4d09-b715-7d43bddae68e 00:31:39.103 20:29:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:31:39.103 { 00:31:39.103 "name": "302d8ad6-6ce5-4d09-b715-7d43bddae68e", 00:31:39.103 "aliases": [ 00:31:39.103 "lvs/nvme0n1p0" 00:31:39.103 ], 00:31:39.103 "product_name": "Logical Volume", 00:31:39.103 "block_size": 4096, 00:31:39.103 "num_blocks": 26476544, 00:31:39.104 "uuid": "302d8ad6-6ce5-4d09-b715-7d43bddae68e", 00:31:39.104 "assigned_rate_limits": { 00:31:39.104 "rw_ios_per_sec": 0, 00:31:39.104 "rw_mbytes_per_sec": 0, 00:31:39.104 "r_mbytes_per_sec": 0, 00:31:39.104 "w_mbytes_per_sec": 0 00:31:39.104 }, 00:31:39.104 "claimed": false, 00:31:39.104 "zoned": false, 00:31:39.104 "supported_io_types": { 00:31:39.104 "read": true, 00:31:39.104 "write": true, 00:31:39.104 "unmap": true, 00:31:39.104 "flush": false, 00:31:39.104 "reset": true, 00:31:39.104 "nvme_admin": false, 00:31:39.104 "nvme_io": false, 00:31:39.104 "nvme_io_md": false, 00:31:39.104 "write_zeroes": true, 00:31:39.104 "zcopy": false, 00:31:39.104 "get_zone_info": false, 00:31:39.104 "zone_management": false, 00:31:39.104 "zone_append": false, 00:31:39.104 "compare": false, 00:31:39.104 "compare_and_write": false, 00:31:39.104 "abort": false, 00:31:39.104 "seek_hole": true, 00:31:39.104 "seek_data": true, 00:31:39.104 "copy": false, 00:31:39.104 "nvme_iov_md": false 00:31:39.104 }, 00:31:39.104 "driver_specific": { 00:31:39.104 "lvol": { 00:31:39.104 "lvol_store_uuid": "c70c68c5-66f9-400e-9998-c1dd0b734f2e", 00:31:39.104 "base_bdev": "nvme0n1", 00:31:39.104 "thin_provision": true, 00:31:39.104 "num_allocated_clusters": 0, 00:31:39.104 "snapshot": false, 00:31:39.104 "clone": false, 00:31:39.104 "esnap_clone": false 00:31:39.104 } 00:31:39.104 } 00:31:39.104 } 00:31:39.104 ]' 00:31:39.104 20:29:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:31:39.104 20:29:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:31:39.104 20:29:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:31:39.104 20:29:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=26476544 00:31:39.104 20:29:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:31:39.104 20:29:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 103424 00:31:39.104 20:29:34 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # l2p_dram_size_mb=20 00:31:39.104 20:29:34 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 302d8ad6-6ce5-4d09-b715-7d43bddae68e -c nvc0n1p0 --l2p_dram_limit 20 00:31:39.363 [2024-10-01 20:29:34.558311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:39.363 [2024-10-01 20:29:34.558368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:31:39.363 [2024-10-01 20:29:34.558383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:31:39.363 [2024-10-01 20:29:34.558393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:39.363 [2024-10-01 20:29:34.558448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:39.363 [2024-10-01 20:29:34.558459] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:39.363 [2024-10-01 20:29:34.558468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:31:39.363 [2024-10-01 20:29:34.558477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:39.363 [2024-10-01 20:29:34.558494] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:31:39.363 [2024-10-01 20:29:34.559254] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:31:39.363 [2024-10-01 20:29:34.559282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:39.363 [2024-10-01 20:29:34.559291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:39.363 [2024-10-01 20:29:34.559300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.792 ms 00:31:39.363 [2024-10-01 20:29:34.559311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:39.363 [2024-10-01 20:29:34.559395] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 78bbfccf-0203-4265-9808-36597fef82b9 00:31:39.363 [2024-10-01 20:29:34.560733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:39.363 [2024-10-01 20:29:34.560765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:31:39.363 [2024-10-01 20:29:34.560780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:31:39.363 [2024-10-01 20:29:34.560789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:39.363 [2024-10-01 20:29:34.567461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:39.363 [2024-10-01 20:29:34.567498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:39.363 [2024-10-01 20:29:34.567511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.634 ms 00:31:39.363 [2024-10-01 20:29:34.567518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:39.363 [2024-10-01 20:29:34.567614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:39.363 [2024-10-01 20:29:34.567622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:39.363 [2024-10-01 20:29:34.567636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:31:39.363 [2024-10-01 20:29:34.567643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:39.363 [2024-10-01 20:29:34.567714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:39.363 [2024-10-01 20:29:34.567724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:31:39.363 [2024-10-01 20:29:34.567736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:31:39.363 [2024-10-01 20:29:34.567743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:39.363 [2024-10-01 20:29:34.567766] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:31:39.363 [2024-10-01 20:29:34.571683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:39.363 [2024-10-01 20:29:34.571737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:39.363 [2024-10-01 20:29:34.571748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.926 ms 00:31:39.363 [2024-10-01 20:29:34.571758] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:39.363 [2024-10-01 20:29:34.571796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:39.363 [2024-10-01 20:29:34.571806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:31:39.363 [2024-10-01 20:29:34.571814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:31:39.363 [2024-10-01 20:29:34.571823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:39.363 [2024-10-01 20:29:34.571870] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:31:39.363 [2024-10-01 20:29:34.572011] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:31:39.363 [2024-10-01 20:29:34.572023] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:31:39.363 [2024-10-01 20:29:34.572036] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:31:39.363 [2024-10-01 20:29:34.572046] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:31:39.363 [2024-10-01 20:29:34.572057] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:31:39.363 [2024-10-01 20:29:34.572065] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:31:39.363 [2024-10-01 20:29:34.572076] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:31:39.363 [2024-10-01 20:29:34.572083] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:31:39.363 [2024-10-01 20:29:34.572092] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:31:39.363 [2024-10-01 20:29:34.572099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:39.363 [2024-10-01 20:29:34.572109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:31:39.363 [2024-10-01 20:29:34.572117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.230 ms 00:31:39.363 [2024-10-01 20:29:34.572126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:39.363 [2024-10-01 20:29:34.572206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:39.363 [2024-10-01 20:29:34.572218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:31:39.363 [2024-10-01 20:29:34.572225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:31:39.363 [2024-10-01 20:29:34.572238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:39.363 [2024-10-01 20:29:34.572327] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:31:39.363 [2024-10-01 20:29:34.572337] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:31:39.363 [2024-10-01 20:29:34.572345] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:39.363 [2024-10-01 20:29:34.572354] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:39.363 [2024-10-01 20:29:34.572362] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:31:39.363 [2024-10-01 20:29:34.572370] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:31:39.363 [2024-10-01 20:29:34.572377] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:31:39.363 
[2024-10-01 20:29:34.572386] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:31:39.363 [2024-10-01 20:29:34.572392] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:31:39.363 [2024-10-01 20:29:34.572401] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:39.363 [2024-10-01 20:29:34.572408] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:31:39.363 [2024-10-01 20:29:34.572425] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:31:39.363 [2024-10-01 20:29:34.572431] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:39.363 [2024-10-01 20:29:34.572440] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:31:39.363 [2024-10-01 20:29:34.572447] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:31:39.363 [2024-10-01 20:29:34.572457] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:39.363 [2024-10-01 20:29:34.572463] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:31:39.363 [2024-10-01 20:29:34.572472] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:31:39.363 [2024-10-01 20:29:34.572478] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:39.363 [2024-10-01 20:29:34.572488] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:31:39.363 [2024-10-01 20:29:34.572495] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:31:39.363 [2024-10-01 20:29:34.572510] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:39.363 [2024-10-01 20:29:34.572517] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:31:39.363 [2024-10-01 20:29:34.572526] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:31:39.364 [2024-10-01 20:29:34.572533] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:39.364 [2024-10-01 20:29:34.572541] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:31:39.364 [2024-10-01 20:29:34.572548] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:31:39.364 [2024-10-01 20:29:34.572556] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:39.364 [2024-10-01 20:29:34.572562] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:31:39.364 [2024-10-01 20:29:34.572571] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:31:39.364 [2024-10-01 20:29:34.572577] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:39.364 [2024-10-01 20:29:34.572588] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:31:39.364 [2024-10-01 20:29:34.572594] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:31:39.364 [2024-10-01 20:29:34.572602] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:39.364 [2024-10-01 20:29:34.572608] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:31:39.364 [2024-10-01 20:29:34.572617] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:31:39.364 [2024-10-01 20:29:34.572623] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:39.364 [2024-10-01 20:29:34.572632] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:31:39.364 [2024-10-01 20:29:34.572647] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.62 MiB 00:31:39.364 [2024-10-01 20:29:34.572656] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:39.364 [2024-10-01 20:29:34.572662] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:31:39.364 [2024-10-01 20:29:34.572671] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:31:39.364 [2024-10-01 20:29:34.572677] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:39.364 [2024-10-01 20:29:34.572685] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:31:39.364 [2024-10-01 20:29:34.572713] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:31:39.364 [2024-10-01 20:29:34.572723] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:39.364 [2024-10-01 20:29:34.572730] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:39.364 [2024-10-01 20:29:34.572744] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:31:39.364 [2024-10-01 20:29:34.572751] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:31:39.364 [2024-10-01 20:29:34.572760] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:31:39.364 [2024-10-01 20:29:34.572768] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:31:39.364 [2024-10-01 20:29:34.572776] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:31:39.364 [2024-10-01 20:29:34.572785] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:31:39.364 [2024-10-01 20:29:34.572798] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:31:39.364 [2024-10-01 20:29:34.572810] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:39.364 [2024-10-01 20:29:34.572820] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:31:39.364 [2024-10-01 20:29:34.572827] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:31:39.364 [2024-10-01 20:29:34.572836] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:31:39.364 [2024-10-01 20:29:34.572843] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:31:39.364 [2024-10-01 20:29:34.572851] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:31:39.364 [2024-10-01 20:29:34.572858] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:31:39.364 [2024-10-01 20:29:34.572868] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:31:39.364 [2024-10-01 20:29:34.572875] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:31:39.364 [2024-10-01 20:29:34.572885] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:31:39.364 [2024-10-01 20:29:34.572892] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:31:39.364 [2024-10-01 20:29:34.572901] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:31:39.364 [2024-10-01 20:29:34.572908] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:31:39.364 [2024-10-01 20:29:34.572916] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:31:39.364 [2024-10-01 20:29:34.572923] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:31:39.364 [2024-10-01 20:29:34.572933] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:31:39.364 [2024-10-01 20:29:34.572941] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:39.364 [2024-10-01 20:29:34.572951] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:31:39.364 [2024-10-01 20:29:34.572958] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:31:39.364 [2024-10-01 20:29:34.572967] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:31:39.364 [2024-10-01 20:29:34.572974] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:31:39.364 [2024-10-01 20:29:34.572983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:39.364 [2024-10-01 20:29:34.572989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:31:39.364 [2024-10-01 20:29:34.572999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.720 ms 00:31:39.364 [2024-10-01 20:29:34.573006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:39.364 [2024-10-01 20:29:34.573041] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
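The layout dump above is internally consistent: the FTL device exposes 20971520 4-KiB blocks (80 GiB of user data), each block needs a 4-byte L2P entry, so the full map is 80 MiB, exactly the "Region l2p" size; the superblock entry "type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000" decodes to the same region (128-KiB offset, 80-MiB size), though that type-to-region mapping is my inference, not something the dump prints. The --l2p_dram_limit 20 passed to bdev_ftl_create caps the resident portion of that map, which the startup log below confirms ("l2p maximum resident size is: 19 (of 20) MiB"). A quick arithmetic check:

    # Cross-checking the layout dump above (sizes are in 4-KiB FTL blocks).
    echo $(( 20971520 * 4 / 1024 / 1024 ))     # 80  -> full L2P map in MiB (entries x 4 B)
    echo $(( 20971520 * 4096 / 1024**3 ))      # 80  -> addressable user data in GiB
    echo $(( 0x5000 * 4096 / 1024 / 1024 ))    # 80  -> blk_sz:0x5000, same 80-MiB l2p region
    echo $(( 0x20 * 4096 / 1024 ))             # 128 -> blk_offs:0x20 = 128 KiB, the "0.12 MiB" offset

The scrub announced above then wipes the 5 NV-cache chunks; per the records below it takes 1926.712 ms, most of the 2284.117 ms total startup time.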
00:31:39.364 [2024-10-01 20:29:34.573049] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:31:41.889 [2024-10-01 20:29:36.499774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:41.889 [2024-10-01 20:29:36.499842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:31:41.889 [2024-10-01 20:29:36.499861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1926.712 ms 00:31:41.889 [2024-10-01 20:29:36.499870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:41.889 [2024-10-01 20:29:36.526640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:41.889 [2024-10-01 20:29:36.526713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:41.889 [2024-10-01 20:29:36.526731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.540 ms 00:31:41.889 [2024-10-01 20:29:36.526741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:41.889 [2024-10-01 20:29:36.526881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:41.889 [2024-10-01 20:29:36.526892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:31:41.889 [2024-10-01 20:29:36.526908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:31:41.889 [2024-10-01 20:29:36.526917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:41.889 [2024-10-01 20:29:36.559747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:41.889 [2024-10-01 20:29:36.559794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:41.889 [2024-10-01 20:29:36.559812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.795 ms 00:31:41.889 [2024-10-01 20:29:36.559820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:41.889 [2024-10-01 20:29:36.559860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:41.889 [2024-10-01 20:29:36.559868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:41.889 [2024-10-01 20:29:36.559878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:31:41.889 [2024-10-01 20:29:36.559886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:41.889 [2024-10-01 20:29:36.560336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:41.889 [2024-10-01 20:29:36.560357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:41.889 [2024-10-01 20:29:36.560368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.377 ms 00:31:41.889 [2024-10-01 20:29:36.560376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:41.889 [2024-10-01 20:29:36.560510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:41.889 [2024-10-01 20:29:36.560526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:41.889 [2024-10-01 20:29:36.560538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.115 ms 00:31:41.889 [2024-10-01 20:29:36.560545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:41.889 [2024-10-01 20:29:36.573773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:41.889 [2024-10-01 20:29:36.573820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:41.889 [2024-10-01 
20:29:36.573835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.208 ms 00:31:41.889 [2024-10-01 20:29:36.573843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:41.889 [2024-10-01 20:29:36.585637] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:31:41.889 [2024-10-01 20:29:36.591668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:41.889 [2024-10-01 20:29:36.591733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:31:41.889 [2024-10-01 20:29:36.591746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.739 ms 00:31:41.889 [2024-10-01 20:29:36.591756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:41.889 [2024-10-01 20:29:36.643797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:41.889 [2024-10-01 20:29:36.643858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:31:41.889 [2024-10-01 20:29:36.643872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 52.006 ms 00:31:41.889 [2024-10-01 20:29:36.643883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:41.889 [2024-10-01 20:29:36.644076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:41.889 [2024-10-01 20:29:36.644091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:31:41.889 [2024-10-01 20:29:36.644100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.143 ms 00:31:41.889 [2024-10-01 20:29:36.644109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:41.889 [2024-10-01 20:29:36.668241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:41.889 [2024-10-01 20:29:36.668300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:31:41.889 [2024-10-01 20:29:36.668314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.074 ms 00:31:41.889 [2024-10-01 20:29:36.668326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:41.889 [2024-10-01 20:29:36.691872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:41.889 [2024-10-01 20:29:36.692111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:31:41.889 [2024-10-01 20:29:36.692131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.497 ms 00:31:41.889 [2024-10-01 20:29:36.692140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:41.889 [2024-10-01 20:29:36.692768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:41.889 [2024-10-01 20:29:36.692789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:31:41.889 [2024-10-01 20:29:36.692801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.585 ms 00:31:41.889 [2024-10-01 20:29:36.692810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:41.889 [2024-10-01 20:29:36.763488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:41.889 [2024-10-01 20:29:36.763730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:31:41.889 [2024-10-01 20:29:36.763750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 70.631 ms 00:31:41.889 [2024-10-01 20:29:36.763760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:41.889 [2024-10-01 
20:29:36.789680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:41.889 [2024-10-01 20:29:36.789749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:31:41.889 [2024-10-01 20:29:36.789762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.817 ms 00:31:41.889 [2024-10-01 20:29:36.789772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:41.889 [2024-10-01 20:29:36.816032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:41.889 [2024-10-01 20:29:36.816093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:31:41.889 [2024-10-01 20:29:36.816106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.204 ms 00:31:41.889 [2024-10-01 20:29:36.816115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:41.890 [2024-10-01 20:29:36.841443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:41.890 [2024-10-01 20:29:36.841506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:31:41.890 [2024-10-01 20:29:36.841520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.273 ms 00:31:41.890 [2024-10-01 20:29:36.841530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:41.890 [2024-10-01 20:29:36.841587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:41.890 [2024-10-01 20:29:36.841602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:31:41.890 [2024-10-01 20:29:36.841611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:31:41.890 [2024-10-01 20:29:36.841621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:41.890 [2024-10-01 20:29:36.841721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:41.890 [2024-10-01 20:29:36.841736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:31:41.890 [2024-10-01 20:29:36.841745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:31:41.890 [2024-10-01 20:29:36.841754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:41.890 [2024-10-01 20:29:36.842901] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2284.117 ms, result 0 00:31:41.890 { 00:31:41.890 "name": "ftl0", 00:31:41.890 "uuid": "78bbfccf-0203-4265-9808-36597fef82b9" 00:31:41.890 } 00:31:41.890 20:29:36 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:31:41.890 20:29:36 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # jq -r .name 00:31:41.890 20:29:36 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # grep -qw ftl0 00:31:41.890 20:29:37 ftl.ftl_bdevperf -- ftl/bdevperf.sh@30 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:31:42.146 [2024-10-01 20:29:37.166921] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:31:42.146 I/O size of 69632 is greater than zero copy threshold (65536). 00:31:42.146 Zero copy mechanism will not be used. 00:31:42.146 Running I/O for 4 seconds... 
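With startup done, the harness sanity-checks the new bdev (bdev_ftl_get_stats -b ftl0, piped through jq -r .name and grep -qw ftl0) and then drives the workload over RPC against the already-running bdevperf process. The 69632-byte I/O size (68 KiB) exceeds the 65536-byte zero-copy threshold, hence the notice that zero copy will not be used. A sketch of the RPC-driven run, with the invocation copied from the trace:

    # Sketch of driving the already-running bdevperf over RPC, as traced above.
    bdevperf_py=/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py
    "$bdevperf_py" perform_tests -q 1 -w randwrite -t 4 -o 69632   # depth 1, 68-KiB random writes, 4 s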
00:31:46.398 3167.00 IOPS, 210.31 MiB/s 3310.50 IOPS, 219.84 MiB/s 3222.00 IOPS, 213.96 MiB/s 3137.00 IOPS, 208.32 MiB/s 00:31:46.398 Latency(us) 00:31:46.398 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:46.398 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632) 00:31:46.398 ftl0 : 4.00 3135.74 208.23 0.00 0.00 334.87 148.09 2054.30 00:31:46.398 =================================================================================================================== 00:31:46.398 Total : 3135.74 208.23 0.00 0.00 334.87 148.09 2054.30 00:31:46.398 [2024-10-01 20:29:41.178292] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:31:46.398 { 00:31:46.398 "results": [ 00:31:46.398 { 00:31:46.398 "job": "ftl0", 00:31:46.398 "core_mask": "0x1", 00:31:46.398 "workload": "randwrite", 00:31:46.398 "status": "finished", 00:31:46.398 "queue_depth": 1, 00:31:46.398 "io_size": 69632, 00:31:46.398 "runtime": 4.001931, 00:31:46.398 "iops": 3135.7362233381837, 00:31:46.398 "mibps": 208.23248358105127, 00:31:46.398 "io_failed": 0, 00:31:46.398 "io_timeout": 0, 00:31:46.398 "avg_latency_us": 334.87358477843776, 00:31:46.398 "min_latency_us": 148.08615384615385, 00:31:46.398 "max_latency_us": 2054.3015384615383 00:31:46.398 } 00:31:46.398 ], 00:31:46.398 "core_count": 1 00:31:46.398 } 00:31:46.398 20:29:41 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096 00:31:46.398 [2024-10-01 20:29:41.280319] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:31:46.398 Running I/O for 4 seconds... 00:31:50.134 10197.00 IOPS, 39.83 MiB/s 9798.50 IOPS, 38.28 MiB/s 9729.67 IOPS, 38.01 MiB/s 9847.00 IOPS, 38.46 MiB/s 00:31:50.134 Latency(us) 00:31:50.134 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:50.134 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096) 00:31:50.134 ftl0 : 4.02 9837.70 38.43 0.00 0.00 12983.85 191.41 34482.02 00:31:50.134 =================================================================================================================== 00:31:50.134 Total : 9837.70 38.43 0.00 0.00 12983.85 0.00 34482.02 00:31:50.134 [2024-10-01 20:29:45.304890] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:31:50.134 { 00:31:50.134 "results": [ 00:31:50.134 { 00:31:50.134 "job": "ftl0", 00:31:50.134 "core_mask": "0x1", 00:31:50.134 "workload": "randwrite", 00:31:50.134 "status": "finished", 00:31:50.134 "queue_depth": 128, 00:31:50.134 "io_size": 4096, 00:31:50.134 "runtime": 4.01608, 00:31:50.134 "iops": 9837.70243620645, 00:31:50.134 "mibps": 38.42852514143144, 00:31:50.134 "io_failed": 0, 00:31:50.134 "io_timeout": 0, 00:31:50.134 "avg_latency_us": 12983.853254078429, 00:31:50.134 "min_latency_us": 191.40923076923076, 00:31:50.134 "max_latency_us": 34482.018461538464 00:31:50.134 } 00:31:50.134 ], 00:31:50.134 "core_count": 1 00:31:50.134 } 00:31:50.134 20:29:45 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096 00:31:50.392 [2024-10-01 20:29:45.411514] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:31:50.392 Running I/O for 4 seconds... 
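The MiB/s column in each run is simply IOPS x I/O size: 3135.74 x 69632 B is about 208.23 MiB/s for the 68-KiB run, and 9837.70 x 4096 B is about 38.43 MiB/s for the 4-KiB run, matching the mibps fields in the JSON results above. Reproducing the conversion:

    # Reproducing the IOPS -> MiB/s conversion from the JSON results above.
    awk 'BEGIN {
        printf "%.2f\n", 3135.74 * 69632 / 1048576   # 208.23 (q=1,   68-KiB randwrite)
        printf "%.2f\n", 9837.70 * 4096  / 1048576   # 38.43  (q=128, 4-KiB randwrite)
    }'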
00:31:54.563 7714.00 IOPS, 30.13 MiB/s 7930.00 IOPS, 30.98 MiB/s 8180.00 IOPS, 31.95 MiB/s 8238.00 IOPS, 32.18 MiB/s 00:31:54.563 Latency(us) 00:31:54.563 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:54.563 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:31:54.563 Verification LBA range: start 0x0 length 0x1400000 00:31:54.563 ftl0 : 4.01 8251.18 32.23 0.00 0.00 15465.01 225.28 28634.19 00:31:54.563 =================================================================================================================== 00:31:54.563 Total : 8251.18 32.23 0.00 0.00 15465.01 0.00 28634.19 00:31:54.563 [2024-10-01 20:29:49.435724] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:31:54.563 { 00:31:54.563 "results": [ 00:31:54.563 { 00:31:54.563 "job": "ftl0", 00:31:54.563 "core_mask": "0x1", 00:31:54.563 "workload": "verify", 00:31:54.563 "status": "finished", 00:31:54.563 "verify_range": { 00:31:54.563 "start": 0, 00:31:54.563 "length": 20971520 00:31:54.563 }, 00:31:54.563 "queue_depth": 128, 00:31:54.563 "io_size": 4096, 00:31:54.563 "runtime": 4.009001, 00:31:54.563 "iops": 8251.1827759584, 00:31:54.563 "mibps": 32.2311827185875, 00:31:54.563 "io_failed": 0, 00:31:54.563 "io_timeout": 0, 00:31:54.563 "avg_latency_us": 15465.013298234762, 00:31:54.563 "min_latency_us": 225.28, 00:31:54.563 "max_latency_us": 28634.19076923077 00:31:54.563 } 00:31:54.563 ], 00:31:54.563 "core_count": 1 00:31:54.563 } 00:31:54.563 20:29:49 ftl.ftl_bdevperf -- ftl/bdevperf.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0 00:31:54.563 [2024-10-01 20:29:49.646193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:54.563 [2024-10-01 20:29:49.646259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:31:54.563 [2024-10-01 20:29:49.646274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:31:54.563 [2024-10-01 20:29:49.646284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:54.563 [2024-10-01 20:29:49.646306] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:31:54.563 [2024-10-01 20:29:49.649091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:54.563 [2024-10-01 20:29:49.649274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:31:54.563 [2024-10-01 20:29:49.649295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.763 ms 00:31:54.563 [2024-10-01 20:29:49.649304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:54.563 [2024-10-01 20:29:49.650630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:54.563 [2024-10-01 20:29:49.650660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:31:54.563 [2024-10-01 20:29:49.650671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.288 ms 00:31:54.563 [2024-10-01 20:29:49.650678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:54.822 [2024-10-01 20:29:49.799939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:54.822 [2024-10-01 20:29:49.800154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:31:54.822 [2024-10-01 20:29:49.800180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 149.217 ms 00:31:54.822 [2024-10-01 20:29:49.800189] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:54.822 [2024-10-01 20:29:49.806484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:54.822 [2024-10-01 20:29:49.806528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:31:54.822 [2024-10-01 20:29:49.806542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.256 ms 00:31:54.822 [2024-10-01 20:29:49.806550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:54.822 [2024-10-01 20:29:49.831170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:54.822 [2024-10-01 20:29:49.831376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:31:54.822 [2024-10-01 20:29:49.831397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.545 ms 00:31:54.822 [2024-10-01 20:29:49.831405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:54.822 [2024-10-01 20:29:49.847107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:54.822 [2024-10-01 20:29:49.847159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:31:54.822 [2024-10-01 20:29:49.847177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.657 ms 00:31:54.822 [2024-10-01 20:29:49.847186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:54.822 [2024-10-01 20:29:49.847357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:54.822 [2024-10-01 20:29:49.847368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:31:54.822 [2024-10-01 20:29:49.847383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.117 ms 00:31:54.822 [2024-10-01 20:29:49.847393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:54.822 [2024-10-01 20:29:49.871063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:54.822 [2024-10-01 20:29:49.871111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:31:54.822 [2024-10-01 20:29:49.871125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.648 ms 00:31:54.822 [2024-10-01 20:29:49.871133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:54.822 [2024-10-01 20:29:49.894448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:54.822 [2024-10-01 20:29:49.894684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:31:54.822 [2024-10-01 20:29:49.894720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.260 ms 00:31:54.822 [2024-10-01 20:29:49.894728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:54.822 [2024-10-01 20:29:49.919131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:54.822 [2024-10-01 20:29:49.919184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:31:54.822 [2024-10-01 20:29:49.919199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.355 ms 00:31:54.822 [2024-10-01 20:29:49.919207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:54.822 [2024-10-01 20:29:49.942833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:54.822 [2024-10-01 20:29:49.942881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:31:54.822 [2024-10-01 20:29:49.942897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 23.526 ms
00:31:54.823 [2024-10-01 20:29:49.942905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:54.823 [2024-10-01 20:29:49.942952] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:31:54.823 [2024-10-01 20:29:49.942971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 1-100: 0 / 261120 wr_cnt: 0 state: free
00:31:54.824 [2024-10-01 20:29:49.943885] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:31:54.824 [2024-10-01 20:29:49.943895] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 78bbfccf-0203-4265-9808-36597fef82b9
00:31:54.824 [2024-10-01 20:29:49.943902] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:31:54.824 [2024-10-01 20:29:49.943911] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:31:54.824 [2024-10-01 20:29:49.943919] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:31:54.824 [2024-10-01 20:29:49.943928] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:31:54.824 [2024-10-01 20:29:49.943935] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:31:54.824 [2024-10-01 20:29:49.943944] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]  crit: 0
00:31:54.824 [2024-10-01 20:29:49.943951] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]  high: 0
00:31:54.824 [2024-10-01 20:29:49.943961] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]  low: 0
00:31:54.824 [2024-10-01 20:29:49.943967] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]  start: 0
00:31:54.824 [2024-10-01 20:29:49.943976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:54.824 [2024-10-01 20:29:49.943984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Dump statistics
00:31:54.824 [2024-10-01 20:29:49.943996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 1.026 ms
00:31:54.824 [2024-10-01 20:29:49.944003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:31:54.824 [2024-10-01 20:29:49.956828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:54.824 [2024-10-01 20:29:49.957008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Deinitialize L2P
00:31:54.824 [2024-10-01 20:29:49.957035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 12.782 ms
00:31:54.824 [2024-10-01 20:29:49.957043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:31:54.824 [2024-10-01 20:29:49.957410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:54.824 [2024-10-01 20:29:49.957421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Deinitialize P2L checkpointing
00:31:54.824 [2024-10-01 20:29:49.957431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.334 ms
00:31:54.824 [2024-10-01 20:29:49.957438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:31:54.824 [2024-10-01 20:29:49.988471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:31:54.824 [2024-10-01 20:29:49.988673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize reloc
00:31:54.824 [2024-10-01 20:29:49.988718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:31:54.824 [2024-10-01 20:29:49.988728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:31:54.824 [2024-10-01 20:29:49.988800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
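
The stats dump above reports WAF: inf because this run issued no user I/O before shutdown: the write amplification factor is total (media) writes divided by user writes, and 960 total writes against 0 user writes has no finite quotient. A minimal sketch of that calculation (waf is a made-up helper name for illustration; it is not a function in the SPDK tree):

    # waf TOTAL_WRITES USER_WRITES -- print the write amplification factor,
    # or "inf" when no user writes have landed yet (as in the dump above).
    waf() {
      local total=$1 user=$2
      if (( user == 0 )); then
        echo inf
      else
        awk -v t="$total" -v u="$user" 'BEGIN { printf "%.2f\n", t / u }'
      fi
    }
    waf 960 0   # -> inf, matching the [FTL][ftl0] stats above
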
00:31:54.824 [2024-10-01 20:29:49.988811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands metadata
00:31:54.824 [2024-10-01 20:29:49.988821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:31:54.824 [2024-10-01 20:29:49.988828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:31:54.824 [2024-10-01 20:29:49.988929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:31:54.824 [2024-10-01 20:29:49.988939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize trim map
00:31:54.824 [2024-10-01 20:29:49.988949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:31:54.824 [2024-10-01 20:29:49.988956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:31:54.824 [2024-10-01 20:29:49.988972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:31:54.824 [2024-10-01 20:29:49.988980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize valid map
00:31:54.824 [2024-10-01 20:29:49.988991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:31:54.824 [2024-10-01 20:29:49.988998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:31:55.082 [2024-10-01 20:29:50.070121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:31:55.082 [2024-10-01 20:29:50.070177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize NV cache
00:31:55.082 [2024-10-01 20:29:50.070195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:31:55.082 [2024-10-01 20:29:50.070203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:31:55.082 [2024-10-01 20:29:50.135664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:31:55.082 [2024-10-01 20:29:50.135888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize metadata
00:31:55.082 [2024-10-01 20:29:50.135913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:31:55.082 [2024-10-01 20:29:50.135921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:31:55.082 [2024-10-01 20:29:50.135998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:31:55.082 [2024-10-01 20:29:50.136008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize core IO channel
00:31:55.082 [2024-10-01 20:29:50.136018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:31:55.082 [2024-10-01 20:29:50.136026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:31:55.082 [2024-10-01 20:29:50.136084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:31:55.082 [2024-10-01 20:29:50.136093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands
00:31:55.082 [2024-10-01 20:29:50.136103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:31:55.082 [2024-10-01 20:29:50.136113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:31:55.082 [2024-10-01 20:29:50.136205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:31:55.082 [2024-10-01 20:29:50.136214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize memory pools
00:31:55.082 [2024-10-01 20:29:50.136227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:31:55.082 [2024-10-01 20:29:50.136234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:31:55.082 [2024-10-01 20:29:50.136263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:31:55.082 [2024-10-01 20:29:50.136272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize superblock
00:31:55.082 [2024-10-01 20:29:50.136281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:31:55.082 [2024-10-01 20:29:50.136289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:31:55.082 [2024-10-01 20:29:50.136324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:31:55.082 [2024-10-01 20:29:50.136332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open cache bdev
00:31:55.082 [2024-10-01 20:29:50.136342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:31:55.082 [2024-10-01 20:29:50.136350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:31:55.082 [2024-10-01 20:29:50.136390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:31:55.082 [2024-10-01 20:29:50.136399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open base bdev
00:31:55.082 [2024-10-01 20:29:50.136409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:31:55.082 [2024-10-01 20:29:50.136417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:31:55.082 [2024-10-01 20:29:50.136539] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 490.306 ms, result 0
00:31:55.082 true
00:31:55.082 20:29:50 ftl.ftl_bdevperf -- ftl/bdevperf.sh@36 -- # killprocess 74142
00:31:55.082 20:29:50 ftl.ftl_bdevperf -- common/autotest_common.sh@950 -- # '[' -z 74142 ']'
00:31:55.082 20:29:50 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # kill -0 74142
00:31:55.082 20:29:50 ftl.ftl_bdevperf -- common/autotest_common.sh@955 -- # uname
00:31:55.082 20:29:50 ftl.ftl_bdevperf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:31:55.082 20:29:50 ftl.ftl_bdevperf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74142
00:31:55.082 20:29:50 ftl.ftl_bdevperf -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:31:55.082 20:29:50 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:31:55.082 20:29:50 ftl.ftl_bdevperf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74142'
00:31:55.082 killing process with pid 74142
00:31:55.082 20:29:50 ftl.ftl_bdevperf -- common/autotest_common.sh@969 -- # kill 74142
00:31:55.082 Received shutdown signal, test time was about 4.000000 seconds
00:31:55.082
00:31:55.082                                                                                   Latency(us)
00:31:55.082 Device Information : runtime(s)  IOPS  MiB/s  Fail/s  TO/s  Average  min  max
00:31:55.082 ===================================================================================================================
00:31:55.082 Total              :       0.00  0.00   0.00    0.00  0.00     0.00 0.00
00:31:55.082 20:29:50 ftl.ftl_bdevperf -- common/autotest_common.sh@974 -- # wait 74142
00:32:05.121 20:29:59 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # trap - SIGINT SIGTERM EXIT
00:32:05.121 20:29:59 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # remove_shm
00:32:05.121 Remove shared memory files
00:32:05.121 20:29:59 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files
00:32:05.121 20:29:59 ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f
00:32:05.121 20:29:59 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f
00:32:05.121 20:29:59 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f
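
The killprocess trace above follows a careful teardown pattern: bail out if the pid argument is empty, confirm the target is still alive with kill -0, read the process name with ps so a sudo wrapper is never signalled by mistake, then kill and wait so the reactor's exit status propagates to the test. A condensed sketch of that flow (the real helper in common/autotest_common.sh carries additional platform handling, as the uname branch above shows):

    # Condensed killprocess: refuse bad input, never signal sudo, reap the child.
    killprocess() {
      local pid=$1
      [[ -n $pid ]] || return 1                  # the '[' -z 74142 ']' guard above
      kill -0 "$pid" 2>/dev/null || return 1     # process already gone?
      local process_name
      process_name=$(ps --no-headers -o comm= "$pid")
      [[ $process_name != sudo ]] || return 1    # the '[' reactor_0 = sudo ']' check
      echo "killing process with pid $pid"
      kill "$pid" && wait "$pid"                 # reap so the exit code propagates
    }
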
00:32:05.121 20:29:59 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
00:32:05.121 20:29:59 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f
00:32:05.121 ************************************
00:32:05.121 END TEST ftl_bdevperf
00:32:05.121 ************************************
00:32:05.121
00:32:05.121 real 0m29.584s
00:32:05.121 user 0m32.259s
00:32:05.121 sys 0m1.097s
00:32:05.121 20:29:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1126 -- # xtrace_disable
00:32:05.121 20:29:59 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:32:05.121 20:29:59 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0
00:32:05.121 20:29:59 ftl -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']'
00:32:05.121 20:29:59 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable
00:32:05.121 20:29:59 ftl -- common/autotest_common.sh@10 -- # set +x
00:32:05.121 ************************************
00:32:05.121 START TEST ftl_trim
00:32:05.121 ************************************
00:32:05.121 20:30:00 ftl.ftl_trim -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0
00:32:05.121 * Looking for test storage...
00:32:05.121 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl
00:32:05.121 20:30:00 ftl.ftl_trim -- common/autotest_common.sh@1680 -- # [[ y == y ]]
00:32:05.121 20:30:00 ftl.ftl_trim -- common/autotest_common.sh@1681 -- # lcov --version
00:32:05.121 20:30:00 ftl.ftl_trim -- common/autotest_common.sh@1681 -- # awk '{print $NF}'
00:32:05.121 20:30:00 ftl.ftl_trim -- common/autotest_common.sh@1681 -- # lt 1.15 2
00:32:05.121 20:30:00 ftl.ftl_trim -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:32:05.121 20:30:00 ftl.ftl_trim -- scripts/common.sh@333 -- # local ver1 ver1_l
00:32:05.121 20:30:00 ftl.ftl_trim -- scripts/common.sh@334 -- # local ver2 ver2_l
00:32:05.121 20:30:00 ftl.ftl_trim -- scripts/common.sh@336 -- # IFS=.-:
00:32:05.121 20:30:00 ftl.ftl_trim -- scripts/common.sh@336 -- # read -ra ver1
00:32:05.121 20:30:00 ftl.ftl_trim -- scripts/common.sh@337 -- # IFS=.-:
00:32:05.121 20:30:00 ftl.ftl_trim -- scripts/common.sh@337 -- # read -ra ver2
00:32:05.121 20:30:00 ftl.ftl_trim -- scripts/common.sh@338 -- # local 'op=<'
00:32:05.121 20:30:00 ftl.ftl_trim -- scripts/common.sh@340 -- # ver1_l=2
00:32:05.121 20:30:00 ftl.ftl_trim -- scripts/common.sh@341 -- # ver2_l=1
00:32:05.121 20:30:00 ftl.ftl_trim -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:32:05.121 20:30:00 ftl.ftl_trim -- scripts/common.sh@344 -- # case "$op" in
00:32:05.121 20:30:00 ftl.ftl_trim -- scripts/common.sh@345 -- # : 1
00:32:05.121 20:30:00 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v = 0 ))
00:32:05.121 20:30:00 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v < (ver1_l >
ver1_l : ver2_l) )) 00:32:05.121 20:30:00 ftl.ftl_trim -- scripts/common.sh@365 -- # decimal 1 00:32:05.121 20:30:00 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=1 00:32:05.121 20:30:00 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:05.121 20:30:00 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 1 00:32:05.121 20:30:00 ftl.ftl_trim -- scripts/common.sh@365 -- # ver1[v]=1 00:32:05.121 20:30:00 ftl.ftl_trim -- scripts/common.sh@366 -- # decimal 2 00:32:05.121 20:30:00 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=2 00:32:05.121 20:30:00 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:05.121 20:30:00 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 2 00:32:05.121 20:30:00 ftl.ftl_trim -- scripts/common.sh@366 -- # ver2[v]=2 00:32:05.121 20:30:00 ftl.ftl_trim -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:05.121 20:30:00 ftl.ftl_trim -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:05.121 20:30:00 ftl.ftl_trim -- scripts/common.sh@368 -- # return 0 00:32:05.121 20:30:00 ftl.ftl_trim -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:05.121 20:30:00 ftl.ftl_trim -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:32:05.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:05.121 --rc genhtml_branch_coverage=1 00:32:05.121 --rc genhtml_function_coverage=1 00:32:05.121 --rc genhtml_legend=1 00:32:05.121 --rc geninfo_all_blocks=1 00:32:05.121 --rc geninfo_unexecuted_blocks=1 00:32:05.121 00:32:05.121 ' 00:32:05.121 20:30:00 ftl.ftl_trim -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:32:05.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:05.121 --rc genhtml_branch_coverage=1 00:32:05.121 --rc genhtml_function_coverage=1 00:32:05.121 --rc genhtml_legend=1 00:32:05.121 --rc geninfo_all_blocks=1 00:32:05.121 --rc geninfo_unexecuted_blocks=1 00:32:05.121 00:32:05.121 ' 00:32:05.122 20:30:00 ftl.ftl_trim -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:32:05.122 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:05.122 --rc genhtml_branch_coverage=1 00:32:05.122 --rc genhtml_function_coverage=1 00:32:05.122 --rc genhtml_legend=1 00:32:05.122 --rc geninfo_all_blocks=1 00:32:05.122 --rc geninfo_unexecuted_blocks=1 00:32:05.122 00:32:05.122 ' 00:32:05.122 20:30:00 ftl.ftl_trim -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:32:05.122 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:05.122 --rc genhtml_branch_coverage=1 00:32:05.122 --rc genhtml_function_coverage=1 00:32:05.122 --rc genhtml_legend=1 00:32:05.122 --rc geninfo_all_blocks=1 00:32:05.122 --rc geninfo_unexecuted_blocks=1 00:32:05.122 00:32:05.122 ' 00:32:05.122 20:30:00 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:32:05.122 20:30:00 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:32:05.122 20:30:00 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:32:05.122 20:30:00 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:32:05.122 20:30:00 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
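
The cmp_versions walk above is a field-wise numeric compare: both version strings are split on IFS=.-: into arrays, the fields are compared element by element, and 1.15 sorts before 2 because the very first fields already differ, which is why the lcov branch-coverage options get exported. A compact stand-alone rendering of that logic (a simplified sketch; the real scripts/common.sh implementation supports the full set of comparison operators):

    # version_lt A B -- succeed when version A sorts strictly before version B.
    version_lt() {
      local IFS=.-:
      local -a ver1=($1) ver2=($2)
      local v x y
      for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
        x=${ver1[v]:-0} y=${ver2[v]:-0}
        (( 10#$x < 10#$y )) && return 0   # 10# guards against leading-zero octal parsing
        (( 10#$x > 10#$y )) && return 1
      done
      return 1   # equal versions are not "less than"
    }
    version_lt 1.15 2 && echo "lcov predates 2.x, keep the branch-coverage opts"
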
00:32:05.122 20:30:00 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:32:05.122 20:30:00 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:32:05.122 20:30:00 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:32:05.122 20:30:00 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:32:05.122 20:30:00 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:32:05.122 20:30:00 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:32:05.122 20:30:00 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:32:05.122 20:30:00 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:32:05.122 20:30:00 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:32:05.122 20:30:00 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:32:05.122 20:30:00 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:32:05.122 20:30:00 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:32:05.122 20:30:00 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:32:05.122 20:30:00 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:32:05.122 20:30:00 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:32:05.122 20:30:00 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:32:05.122 20:30:00 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:32:05.122 20:30:00 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:32:05.122 20:30:00 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:32:05.122 20:30:00 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:32:05.122 20:30:00 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:32:05.122 20:30:00 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:32:05.122 20:30:00 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:32:05.122 20:30:00 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:32:05.122 20:30:00 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:32:05.122 20:30:00 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:32:05.122 20:30:00 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:32:05.122 20:30:00 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:32:05.122 20:30:00 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:32:05.122 20:30:00 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:32:05.122 20:30:00 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:32:05.122 20:30:00 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:32:05.122 20:30:00 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:32:05.122 20:30:00 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:32:05.122 20:30:00 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:32:05.122 20:30:00 ftl.ftl_trim -- 
ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:32:05.122 20:30:00 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=74485 00:32:05.122 20:30:00 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 74485 00:32:05.122 20:30:00 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:32:05.122 20:30:00 ftl.ftl_trim -- common/autotest_common.sh@831 -- # '[' -z 74485 ']' 00:32:05.122 20:30:00 ftl.ftl_trim -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:05.122 20:30:00 ftl.ftl_trim -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:05.122 20:30:00 ftl.ftl_trim -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:05.122 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:05.122 20:30:00 ftl.ftl_trim -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:05.122 20:30:00 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:32:05.122 [2024-10-01 20:30:00.227738] Starting SPDK v25.01-pre git sha1 0c2005fb5 / DPDK 24.03.0 initialization... 00:32:05.122 [2024-10-01 20:30:00.227866] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74485 ] 00:32:05.379 [2024-10-01 20:30:00.378829] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:32:05.379 [2024-10-01 20:30:00.578998] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:32:05.379 [2024-10-01 20:30:00.579207] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:32:05.379 [2024-10-01 20:30:00.579413] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:32:06.365 20:30:01 ftl.ftl_trim -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:06.365 20:30:01 ftl.ftl_trim -- common/autotest_common.sh@864 -- # return 0 00:32:06.365 20:30:01 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:32:06.365 20:30:01 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:32:06.365 20:30:01 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:32:06.365 20:30:01 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:32:06.365 20:30:01 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:32:06.365 20:30:01 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:32:06.688 20:30:01 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:32:06.688 20:30:01 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:32:06.688 20:30:01 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:32:06.688 20:30:01 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:32:06.688 20:30:01 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:32:06.688 20:30:01 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:32:06.688 20:30:01 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:32:06.688 20:30:01 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:32:06.946 20:30:01 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:32:06.946 { 00:32:06.946 "name": "nvme0n1", 00:32:06.946 "aliases": [ 
00:32:06.946 "afaa4f17-f3c4-45ff-a27a-0e394b8c73ad" 00:32:06.947 ], 00:32:06.947 "product_name": "NVMe disk", 00:32:06.947 "block_size": 4096, 00:32:06.947 "num_blocks": 1310720, 00:32:06.947 "uuid": "afaa4f17-f3c4-45ff-a27a-0e394b8c73ad", 00:32:06.947 "numa_id": -1, 00:32:06.947 "assigned_rate_limits": { 00:32:06.947 "rw_ios_per_sec": 0, 00:32:06.947 "rw_mbytes_per_sec": 0, 00:32:06.947 "r_mbytes_per_sec": 0, 00:32:06.947 "w_mbytes_per_sec": 0 00:32:06.947 }, 00:32:06.947 "claimed": true, 00:32:06.947 "claim_type": "read_many_write_one", 00:32:06.947 "zoned": false, 00:32:06.947 "supported_io_types": { 00:32:06.947 "read": true, 00:32:06.947 "write": true, 00:32:06.947 "unmap": true, 00:32:06.947 "flush": true, 00:32:06.947 "reset": true, 00:32:06.947 "nvme_admin": true, 00:32:06.947 "nvme_io": true, 00:32:06.947 "nvme_io_md": false, 00:32:06.947 "write_zeroes": true, 00:32:06.947 "zcopy": false, 00:32:06.947 "get_zone_info": false, 00:32:06.947 "zone_management": false, 00:32:06.947 "zone_append": false, 00:32:06.947 "compare": true, 00:32:06.947 "compare_and_write": false, 00:32:06.947 "abort": true, 00:32:06.947 "seek_hole": false, 00:32:06.947 "seek_data": false, 00:32:06.947 "copy": true, 00:32:06.947 "nvme_iov_md": false 00:32:06.947 }, 00:32:06.947 "driver_specific": { 00:32:06.947 "nvme": [ 00:32:06.947 { 00:32:06.947 "pci_address": "0000:00:11.0", 00:32:06.947 "trid": { 00:32:06.947 "trtype": "PCIe", 00:32:06.947 "traddr": "0000:00:11.0" 00:32:06.947 }, 00:32:06.947 "ctrlr_data": { 00:32:06.947 "cntlid": 0, 00:32:06.947 "vendor_id": "0x1b36", 00:32:06.947 "model_number": "QEMU NVMe Ctrl", 00:32:06.947 "serial_number": "12341", 00:32:06.947 "firmware_revision": "8.0.0", 00:32:06.947 "subnqn": "nqn.2019-08.org.qemu:12341", 00:32:06.947 "oacs": { 00:32:06.947 "security": 0, 00:32:06.947 "format": 1, 00:32:06.947 "firmware": 0, 00:32:06.947 "ns_manage": 1 00:32:06.947 }, 00:32:06.947 "multi_ctrlr": false, 00:32:06.947 "ana_reporting": false 00:32:06.947 }, 00:32:06.947 "vs": { 00:32:06.947 "nvme_version": "1.4" 00:32:06.947 }, 00:32:06.947 "ns_data": { 00:32:06.947 "id": 1, 00:32:06.947 "can_share": false 00:32:06.947 } 00:32:06.947 } 00:32:06.947 ], 00:32:06.947 "mp_policy": "active_passive" 00:32:06.947 } 00:32:06.947 } 00:32:06.947 ]' 00:32:06.947 20:30:01 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:32:06.947 20:30:01 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:32:06.947 20:30:01 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:32:06.947 20:30:01 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=1310720 00:32:06.947 20:30:01 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:32:06.947 20:30:01 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 5120 00:32:06.947 20:30:01 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:32:06.947 20:30:01 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:32:06.947 20:30:01 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:32:06.947 20:30:01 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:32:06.947 20:30:01 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:32:07.205 20:30:02 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=c70c68c5-66f9-400e-9998-c1dd0b734f2e 00:32:07.205 20:30:02 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:32:07.205 20:30:02 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete_lvstore -u c70c68c5-66f9-400e-9998-c1dd0b734f2e 00:32:07.462 20:30:02 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:32:07.720 20:30:02 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=23cec6d0-098c-4ef7-a292-af48d46ea47a 00:32:07.721 20:30:02 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 23cec6d0-098c-4ef7-a292-af48d46ea47a 00:32:07.978 20:30:02 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=60492f8d-cf56-4110-8792-b800448ecabc 00:32:07.978 20:30:02 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 60492f8d-cf56-4110-8792-b800448ecabc 00:32:07.978 20:30:02 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:32:07.978 20:30:02 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:32:07.978 20:30:02 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=60492f8d-cf56-4110-8792-b800448ecabc 00:32:07.978 20:30:02 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:32:07.978 20:30:02 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size 60492f8d-cf56-4110-8792-b800448ecabc 00:32:07.978 20:30:02 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=60492f8d-cf56-4110-8792-b800448ecabc 00:32:07.978 20:30:02 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:32:07.978 20:30:02 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:32:07.978 20:30:02 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:32:07.978 20:30:02 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 60492f8d-cf56-4110-8792-b800448ecabc 00:32:07.978 20:30:03 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:32:07.978 { 00:32:07.978 "name": "60492f8d-cf56-4110-8792-b800448ecabc", 00:32:07.978 "aliases": [ 00:32:07.978 "lvs/nvme0n1p0" 00:32:07.978 ], 00:32:07.978 "product_name": "Logical Volume", 00:32:07.978 "block_size": 4096, 00:32:07.978 "num_blocks": 26476544, 00:32:07.978 "uuid": "60492f8d-cf56-4110-8792-b800448ecabc", 00:32:07.978 "assigned_rate_limits": { 00:32:07.978 "rw_ios_per_sec": 0, 00:32:07.978 "rw_mbytes_per_sec": 0, 00:32:07.978 "r_mbytes_per_sec": 0, 00:32:07.978 "w_mbytes_per_sec": 0 00:32:07.978 }, 00:32:07.978 "claimed": false, 00:32:07.978 "zoned": false, 00:32:07.978 "supported_io_types": { 00:32:07.978 "read": true, 00:32:07.978 "write": true, 00:32:07.978 "unmap": true, 00:32:07.978 "flush": false, 00:32:07.978 "reset": true, 00:32:07.978 "nvme_admin": false, 00:32:07.978 "nvme_io": false, 00:32:07.978 "nvme_io_md": false, 00:32:07.978 "write_zeroes": true, 00:32:07.978 "zcopy": false, 00:32:07.978 "get_zone_info": false, 00:32:07.978 "zone_management": false, 00:32:07.978 "zone_append": false, 00:32:07.978 "compare": false, 00:32:07.978 "compare_and_write": false, 00:32:07.978 "abort": false, 00:32:07.978 "seek_hole": true, 00:32:07.978 "seek_data": true, 00:32:07.978 "copy": false, 00:32:07.978 "nvme_iov_md": false 00:32:07.978 }, 00:32:07.978 "driver_specific": { 00:32:07.978 "lvol": { 00:32:07.978 "lvol_store_uuid": "23cec6d0-098c-4ef7-a292-af48d46ea47a", 00:32:07.978 "base_bdev": "nvme0n1", 00:32:07.978 "thin_provision": true, 00:32:07.978 "num_allocated_clusters": 0, 00:32:07.978 "snapshot": false, 00:32:07.978 "clone": false, 00:32:07.978 "esnap_clone": false 00:32:07.978 } 00:32:07.978 } 00:32:07.978 } 00:32:07.978 ]' 00:32:07.978 20:30:03 ftl.ftl_trim -- 
common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:32:07.978 20:30:03 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:32:07.978 20:30:03 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:32:08.235 20:30:03 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=26476544 00:32:08.235 20:30:03 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:32:08.235 20:30:03 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 103424 00:32:08.235 20:30:03 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:32:08.235 20:30:03 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:32:08.235 20:30:03 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:32:08.492 20:30:03 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:32:08.492 20:30:03 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:32:08.492 20:30:03 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size 60492f8d-cf56-4110-8792-b800448ecabc 00:32:08.492 20:30:03 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=60492f8d-cf56-4110-8792-b800448ecabc 00:32:08.492 20:30:03 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:32:08.492 20:30:03 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:32:08.492 20:30:03 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:32:08.492 20:30:03 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 60492f8d-cf56-4110-8792-b800448ecabc 00:32:08.492 20:30:03 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:32:08.492 { 00:32:08.492 "name": "60492f8d-cf56-4110-8792-b800448ecabc", 00:32:08.492 "aliases": [ 00:32:08.492 "lvs/nvme0n1p0" 00:32:08.492 ], 00:32:08.492 "product_name": "Logical Volume", 00:32:08.492 "block_size": 4096, 00:32:08.492 "num_blocks": 26476544, 00:32:08.492 "uuid": "60492f8d-cf56-4110-8792-b800448ecabc", 00:32:08.492 "assigned_rate_limits": { 00:32:08.492 "rw_ios_per_sec": 0, 00:32:08.492 "rw_mbytes_per_sec": 0, 00:32:08.492 "r_mbytes_per_sec": 0, 00:32:08.492 "w_mbytes_per_sec": 0 00:32:08.493 }, 00:32:08.493 "claimed": false, 00:32:08.493 "zoned": false, 00:32:08.493 "supported_io_types": { 00:32:08.493 "read": true, 00:32:08.493 "write": true, 00:32:08.493 "unmap": true, 00:32:08.493 "flush": false, 00:32:08.493 "reset": true, 00:32:08.493 "nvme_admin": false, 00:32:08.493 "nvme_io": false, 00:32:08.493 "nvme_io_md": false, 00:32:08.493 "write_zeroes": true, 00:32:08.493 "zcopy": false, 00:32:08.493 "get_zone_info": false, 00:32:08.493 "zone_management": false, 00:32:08.493 "zone_append": false, 00:32:08.493 "compare": false, 00:32:08.493 "compare_and_write": false, 00:32:08.493 "abort": false, 00:32:08.493 "seek_hole": true, 00:32:08.493 "seek_data": true, 00:32:08.493 "copy": false, 00:32:08.493 "nvme_iov_md": false 00:32:08.493 }, 00:32:08.493 "driver_specific": { 00:32:08.493 "lvol": { 00:32:08.493 "lvol_store_uuid": "23cec6d0-098c-4ef7-a292-af48d46ea47a", 00:32:08.493 "base_bdev": "nvme0n1", 00:32:08.493 "thin_provision": true, 00:32:08.493 "num_allocated_clusters": 0, 00:32:08.493 "snapshot": false, 00:32:08.493 "clone": false, 00:32:08.493 "esnap_clone": false 00:32:08.493 } 00:32:08.493 } 00:32:08.493 } 00:32:08.493 ]' 00:32:08.493 20:30:03 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:32:08.493 20:30:03 ftl.ftl_trim -- 
common/autotest_common.sh@1383 -- # bs=4096 00:32:08.493 20:30:03 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:32:08.751 20:30:03 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=26476544 00:32:08.751 20:30:03 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:32:08.751 20:30:03 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 103424 00:32:08.751 20:30:03 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:32:08.751 20:30:03 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:32:08.751 20:30:03 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:32:08.751 20:30:03 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:32:08.751 20:30:03 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size 60492f8d-cf56-4110-8792-b800448ecabc 00:32:08.751 20:30:03 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=60492f8d-cf56-4110-8792-b800448ecabc 00:32:08.751 20:30:03 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:32:08.751 20:30:03 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:32:08.751 20:30:03 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:32:08.751 20:30:03 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 60492f8d-cf56-4110-8792-b800448ecabc 00:32:09.008 20:30:04 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:32:09.008 { 00:32:09.008 "name": "60492f8d-cf56-4110-8792-b800448ecabc", 00:32:09.008 "aliases": [ 00:32:09.008 "lvs/nvme0n1p0" 00:32:09.008 ], 00:32:09.008 "product_name": "Logical Volume", 00:32:09.008 "block_size": 4096, 00:32:09.008 "num_blocks": 26476544, 00:32:09.008 "uuid": "60492f8d-cf56-4110-8792-b800448ecabc", 00:32:09.008 "assigned_rate_limits": { 00:32:09.008 "rw_ios_per_sec": 0, 00:32:09.008 "rw_mbytes_per_sec": 0, 00:32:09.008 "r_mbytes_per_sec": 0, 00:32:09.008 "w_mbytes_per_sec": 0 00:32:09.008 }, 00:32:09.008 "claimed": false, 00:32:09.008 "zoned": false, 00:32:09.008 "supported_io_types": { 00:32:09.008 "read": true, 00:32:09.008 "write": true, 00:32:09.008 "unmap": true, 00:32:09.008 "flush": false, 00:32:09.008 "reset": true, 00:32:09.008 "nvme_admin": false, 00:32:09.008 "nvme_io": false, 00:32:09.008 "nvme_io_md": false, 00:32:09.008 "write_zeroes": true, 00:32:09.008 "zcopy": false, 00:32:09.008 "get_zone_info": false, 00:32:09.008 "zone_management": false, 00:32:09.008 "zone_append": false, 00:32:09.008 "compare": false, 00:32:09.008 "compare_and_write": false, 00:32:09.008 "abort": false, 00:32:09.008 "seek_hole": true, 00:32:09.008 "seek_data": true, 00:32:09.008 "copy": false, 00:32:09.008 "nvme_iov_md": false 00:32:09.008 }, 00:32:09.008 "driver_specific": { 00:32:09.008 "lvol": { 00:32:09.008 "lvol_store_uuid": "23cec6d0-098c-4ef7-a292-af48d46ea47a", 00:32:09.008 "base_bdev": "nvme0n1", 00:32:09.008 "thin_provision": true, 00:32:09.008 "num_allocated_clusters": 0, 00:32:09.008 "snapshot": false, 00:32:09.008 "clone": false, 00:32:09.008 "esnap_clone": false 00:32:09.008 } 00:32:09.008 } 00:32:09.008 } 00:32:09.008 ]' 00:32:09.008 20:30:04 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:32:09.008 20:30:04 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:32:09.008 20:30:04 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:32:09.008 20:30:04 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # 
nb=26476544 00:32:09.008 20:30:04 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:32:09.008 20:30:04 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 103424 00:32:09.008 20:30:04 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:32:09.008 20:30:04 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 60492f8d-cf56-4110-8792-b800448ecabc -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:32:09.266 [2024-10-01 20:30:04.359205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:09.266 [2024-10-01 20:30:04.359257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:32:09.266 [2024-10-01 20:30:04.359275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:32:09.266 [2024-10-01 20:30:04.359283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:09.266 [2024-10-01 20:30:04.362183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:09.266 [2024-10-01 20:30:04.362224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:09.266 [2024-10-01 20:30:04.362236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.872 ms 00:32:09.266 [2024-10-01 20:30:04.362245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:09.266 [2024-10-01 20:30:04.362365] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:32:09.266 [2024-10-01 20:30:04.363074] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:32:09.266 [2024-10-01 20:30:04.363101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:09.266 [2024-10-01 20:30:04.363111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:09.266 [2024-10-01 20:30:04.363122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.744 ms 00:32:09.266 [2024-10-01 20:30:04.363130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:09.266 [2024-10-01 20:30:04.363554] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID bc42671f-8b04-4386-a6b4-5ad0168aad47 00:32:09.266 [2024-10-01 20:30:04.365029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:09.266 [2024-10-01 20:30:04.365061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:32:09.266 [2024-10-01 20:30:04.365073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:32:09.266 [2024-10-01 20:30:04.365086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:09.266 [2024-10-01 20:30:04.370993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:09.266 [2024-10-01 20:30:04.371033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:09.266 [2024-10-01 20:30:04.371045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.832 ms 00:32:09.266 [2024-10-01 20:30:04.371056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:09.266 [2024-10-01 20:30:04.371189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:09.266 [2024-10-01 20:30:04.371201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:09.266 [2024-10-01 20:30:04.371212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.073 ms
00:32:09.266 [2024-10-01 20:30:04.371223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:32:09.266 [2024-10-01 20:30:04.371255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:32:09.266 [2024-10-01 20:30:04.371265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Register IO device
00:32:09.266 [2024-10-01 20:30:04.371273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.009 ms
00:32:09.266 [2024-10-01 20:30:04.371282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:32:09.266 [2024-10-01 20:30:04.371312] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread
00:32:09.266 [2024-10-01 20:30:04.375143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:32:09.266 [2024-10-01 20:30:04.375180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize core IO channel
00:32:09.266 [2024-10-01 20:30:04.375194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 3.833 ms
00:32:09.266 [2024-10-01 20:30:04.375202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:32:09.266 [2024-10-01 20:30:04.375287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:32:09.266 [2024-10-01 20:30:04.375297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Decorate bands
00:32:09.267 [2024-10-01 20:30:04.375309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.012 ms
00:32:09.267 [2024-10-01 20:30:04.375317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:32:09.267 [2024-10-01 20:30:04.375344] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1
00:32:09.267 [2024-10-01 20:30:04.375481] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes
00:32:09.267 [2024-10-01 20:30:04.375502] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes
00:32:09.267 [2024-10-01 20:30:04.375527] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes
00:32:09.267 [2024-10-01 20:30:04.375539] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity:         103424.00 MiB
00:32:09.267 [2024-10-01 20:30:04.375548] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity:       5171.00 MiB
00:32:09.267 [2024-10-01 20:30:04.375559] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries:                    23592960
00:32:09.267 [2024-10-01 20:30:04.375571] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size:               4
00:32:09.267 [2024-10-01 20:30:04.375585] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages:           2048
00:32:09.267 [2024-10-01 20:30:04.375596] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count            5
00:32:09.267 [2024-10-01 20:30:04.375611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:32:09.267 [2024-10-01 20:30:04.375623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize layout
00:32:09.267 [2024-10-01 20:30:04.375638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.267 ms
00:32:09.267 [2024-10-01 20:30:04.375654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
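
The layout numbers just printed can be cross-checked by hand: with one 4-byte entry per logical block (L2P address size: 4), the 23592960 L2P entries need 23592960 x 4 B = 90 MiB of mapping table, which matches the 90.00 MiB l2p region in the NV cache layout dumped below; at the 4096-byte block size reported for these bdevs the same entry count spans 90 GiB of logical space. The --l2p_dram_limit 60 passed to bdev_ftl_create above appears to cap the DRAM-resident share of that table at 60 MiB. A quick back-of-envelope check (plain shell arithmetic, nothing SPDK-specific):

    # Size of the L2P table and of the logical space it maps, from the dump above.
    entries=23592960 addr_size=4 block_size=4096
    echo "$(( entries * addr_size / 1024 / 1024 )) MiB table"      # -> 90 MiB (l2p region)
    echo "$(( entries * block_size / 1024**3 )) GiB mapped space"  # -> 90 GiB
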
00:32:09.267 [2024-10-01 20:30:04.375806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:32:09.267 [2024-10-01 20:30:04.375820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Verify layout
00:32:09.267 [2024-10-01 20:30:04.375835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.093 ms
00:32:09.267 [2024-10-01 20:30:04.375847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:32:09.267 [2024-10-01 20:30:04.375997] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout:
00:32:09.267 [2024-10-01 20:30:04.376010] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb
00:32:09.267 [2024-10-01 20:30:04.376020] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	 offset:  0.00 MiB
00:32:09.267 [2024-10-01 20:30:04.376028] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	 blocks:  0.12 MiB
00:32:09.267 [2024-10-01 20:30:04.376037] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p
00:32:09.267 [2024-10-01 20:30:04.376049] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	 offset:  0.12 MiB
00:32:09.267 [2024-10-01 20:30:04.376063] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	 blocks:  90.00 MiB
00:32:09.267 [2024-10-01 20:30:04.376075] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md
00:32:09.267 [2024-10-01 20:30:04.376088] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	 offset:  90.12 MiB
00:32:09.267 [2024-10-01 20:30:04.376095] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	 blocks:  0.50 MiB
00:32:09.267 [2024-10-01 20:30:04.376103] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror
00:32:09.267 [2024-10-01 20:30:04.376110] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	 offset:  90.62 MiB
00:32:09.267 [2024-10-01 20:30:04.376118] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	 blocks:  0.50 MiB
00:32:09.267 [2024-10-01 20:30:04.376125] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md
00:32:09.267 [2024-10-01 20:30:04.376133] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	 offset:  123.88 MiB
00:32:09.267 [2024-10-01 20:30:04.376139] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	 blocks:  0.12 MiB
00:32:09.267 [2024-10-01 20:30:04.376149] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror
00:32:09.267 [2024-10-01 20:30:04.376156] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	 offset:  124.00 MiB
00:32:09.267 [2024-10-01 20:30:04.376164] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	 blocks:  0.12 MiB
00:32:09.267 [2024-10-01 20:30:04.376172] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0
00:32:09.267 [2024-10-01 20:30:04.376182] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	 offset:  91.12 MiB
00:32:09.267 [2024-10-01 20:30:04.376190] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	 blocks:  8.00 MiB
00:32:09.267 [2024-10-01 20:30:04.376199] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1
00:32:09.267 [2024-10-01 20:30:04.376207] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	 offset:  99.12 MiB
00:32:09.267 [2024-10-01 20:30:04.376215] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	 blocks:  8.00 MiB
00:32:09.267 [2024-10-01 20:30:04.376221] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2
00:32:09.267 [2024-10-01 20:30:04.376236] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	 offset:  107.12 MiB
00:32:09.267 [2024-10-01 20:30:04.376242] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	 blocks:  8.00 MiB
00:32:09.267 [2024-10-01 20:30:04.376250] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0]
Region p2l3 00:32:09.267 [2024-10-01 20:30:04.376257] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:32:09.267 [2024-10-01 20:30:04.376265] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:09.267 [2024-10-01 20:30:04.376272] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:32:09.267 [2024-10-01 20:30:04.376281] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:32:09.267 [2024-10-01 20:30:04.376288] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:32:09.267 [2024-10-01 20:30:04.376296] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:32:09.267 [2024-10-01 20:30:04.376302] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:32:09.267 [2024-10-01 20:30:04.376311] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:32:09.267 [2024-10-01 20:30:04.376317] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:32:09.267 [2024-10-01 20:30:04.376325] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:32:09.267 [2024-10-01 20:30:04.376332] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:09.267 [2024-10-01 20:30:04.376340] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:32:09.267 [2024-10-01 20:30:04.376346] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:32:09.267 [2024-10-01 20:30:04.376354] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:09.267 [2024-10-01 20:30:04.376360] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:32:09.267 [2024-10-01 20:30:04.376371] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:32:09.267 [2024-10-01 20:30:04.376378] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:32:09.267 [2024-10-01 20:30:04.376388] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:09.267 [2024-10-01 20:30:04.376396] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:32:09.267 [2024-10-01 20:30:04.376406] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:32:09.267 [2024-10-01 20:30:04.376413] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:32:09.267 [2024-10-01 20:30:04.376421] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:32:09.268 [2024-10-01 20:30:04.376427] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:32:09.268 [2024-10-01 20:30:04.376435] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:32:09.268 [2024-10-01 20:30:04.376447] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:32:09.268 [2024-10-01 20:30:04.376459] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:09.268 [2024-10-01 20:30:04.376468] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:32:09.268 [2024-10-01 20:30:04.376476] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:32:09.268 [2024-10-01 20:30:04.376484] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 
blk_sz:0x80 00:32:09.268 [2024-10-01 20:30:04.376492] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:32:09.268 [2024-10-01 20:30:04.376500] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:32:09.268 [2024-10-01 20:30:04.376509] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:32:09.268 [2024-10-01 20:30:04.376516] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:32:09.268 [2024-10-01 20:30:04.376524] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:32:09.268 [2024-10-01 20:30:04.376531] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:32:09.268 [2024-10-01 20:30:04.376541] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:32:09.268 [2024-10-01 20:30:04.376548] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:32:09.268 [2024-10-01 20:30:04.376557] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:32:09.268 [2024-10-01 20:30:04.376564] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:32:09.268 [2024-10-01 20:30:04.376573] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:32:09.268 [2024-10-01 20:30:04.376580] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:32:09.268 [2024-10-01 20:30:04.376590] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:09.268 [2024-10-01 20:30:04.376598] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:32:09.268 [2024-10-01 20:30:04.376606] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:32:09.268 [2024-10-01 20:30:04.376614] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:32:09.268 [2024-10-01 20:30:04.376622] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:32:09.268 [2024-10-01 20:30:04.376630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:09.268 [2024-10-01 20:30:04.376639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:32:09.268 [2024-10-01 20:30:04.376647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.717 ms 00:32:09.268 [2024-10-01 20:30:04.376657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:09.268 [2024-10-01 20:30:04.376756] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region 
needs scrubbing, this may take a while. 00:32:09.268 [2024-10-01 20:30:04.376771] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:32:11.789 [2024-10-01 20:30:06.392032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:11.789 [2024-10-01 20:30:06.392085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:32:11.789 [2024-10-01 20:30:06.392097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2015.260 ms 00:32:11.789 [2024-10-01 20:30:06.392105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:11.789 [2024-10-01 20:30:06.415604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:11.789 [2024-10-01 20:30:06.415655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:11.789 [2024-10-01 20:30:06.415668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.257 ms 00:32:11.789 [2024-10-01 20:30:06.415676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:11.789 [2024-10-01 20:30:06.415840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:11.789 [2024-10-01 20:30:06.415853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:32:11.789 [2024-10-01 20:30:06.415861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:32:11.789 [2024-10-01 20:30:06.415870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:11.789 [2024-10-01 20:30:06.442046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:11.789 [2024-10-01 20:30:06.442094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:11.789 [2024-10-01 20:30:06.442105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.145 ms 00:32:11.789 [2024-10-01 20:30:06.442113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:11.789 [2024-10-01 20:30:06.442198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:11.789 [2024-10-01 20:30:06.442207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:11.789 [2024-10-01 20:30:06.442214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:32:11.789 [2024-10-01 20:30:06.442222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:11.789 [2024-10-01 20:30:06.442540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:11.789 [2024-10-01 20:30:06.442555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:11.789 [2024-10-01 20:30:06.442563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.294 ms 00:32:11.789 [2024-10-01 20:30:06.442570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:11.789 [2024-10-01 20:30:06.442682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:11.789 [2024-10-01 20:30:06.442705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:11.789 [2024-10-01 20:30:06.442712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:32:11.789 [2024-10-01 20:30:06.442725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:11.789 [2024-10-01 20:30:06.455202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:11.789 [2024-10-01 20:30:06.455241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
reloc 00:32:11.789 [2024-10-01 20:30:06.455251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.442 ms 00:32:11.789 [2024-10-01 20:30:06.455259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:11.789 [2024-10-01 20:30:06.464477] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:32:11.789 [2024-10-01 20:30:06.477747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:11.789 [2024-10-01 20:30:06.477788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:32:11.789 [2024-10-01 20:30:06.477800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.390 ms 00:32:11.789 [2024-10-01 20:30:06.477808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:11.789 [2024-10-01 20:30:06.538830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:11.789 [2024-10-01 20:30:06.538884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:32:11.789 [2024-10-01 20:30:06.538898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 60.941 ms 00:32:11.789 [2024-10-01 20:30:06.538906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:11.789 [2024-10-01 20:30:06.539120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:11.789 [2024-10-01 20:30:06.539130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:32:11.789 [2024-10-01 20:30:06.539141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.120 ms 00:32:11.789 [2024-10-01 20:30:06.539148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:11.789 [2024-10-01 20:30:06.558663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:11.789 [2024-10-01 20:30:06.558712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:32:11.789 [2024-10-01 20:30:06.558726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.486 ms 00:32:11.789 [2024-10-01 20:30:06.558734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:11.789 [2024-10-01 20:30:06.577201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:11.789 [2024-10-01 20:30:06.577243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:32:11.789 [2024-10-01 20:30:06.577256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.393 ms 00:32:11.789 [2024-10-01 20:30:06.577263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:11.789 [2024-10-01 20:30:06.577777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:11.789 [2024-10-01 20:30:06.577794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:32:11.789 [2024-10-01 20:30:06.577806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.446 ms 00:32:11.789 [2024-10-01 20:30:06.577813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:11.789 [2024-10-01 20:30:06.638575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:11.789 [2024-10-01 20:30:06.638639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:32:11.789 [2024-10-01 20:30:06.638655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 60.726 ms 00:32:11.789 [2024-10-01 20:30:06.638663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
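Each management step above is traced by mngt/ftl_mngt.c as a four-line group (Action or Rollback, name, duration, status). A rough sketch for ranking the steps by cost, assuming this console output has been saved to a local file (console.log is a hypothetical name):

    # Pair each trace_step "name:" line with the "duration:" line that follows,
    # then print the slowest FTL management steps first.
    grep -E 'trace_step.*(name:|duration:)' console.log |
      sed -E 's/.*\[FTL\]\[ftl0\] //' |
      awk '/^name: /     { step = substr($0, 7) }
           /^duration: / { print $2 " ms", step }' |
      sort -rn | head

Against this run it would put the 2015.260 ms "Scrub NV cache" step at the top, i.e. the bulk of the total "FTL startup" time reported just below.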
00:32:11.789 [2024-10-01 20:30:06.659672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:11.789 [2024-10-01 20:30:06.659716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:32:11.789 [2024-10-01 20:30:06.659729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.875 ms 00:32:11.789 [2024-10-01 20:30:06.659736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:11.789 [2024-10-01 20:30:06.680360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:11.789 [2024-10-01 20:30:06.680405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:32:11.789 [2024-10-01 20:30:06.680419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.540 ms 00:32:11.789 [2024-10-01 20:30:06.680426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:11.789 [2024-10-01 20:30:06.700630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:11.789 [2024-10-01 20:30:06.700675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:32:11.789 [2024-10-01 20:30:06.700689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.104 ms 00:32:11.789 [2024-10-01 20:30:06.700702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:11.789 [2024-10-01 20:30:06.700834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:11.789 [2024-10-01 20:30:06.700848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:32:11.789 [2024-10-01 20:30:06.700863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:32:11.789 [2024-10-01 20:30:06.700886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:11.789 [2024-10-01 20:30:06.700959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:11.789 [2024-10-01 20:30:06.700968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:32:11.789 [2024-10-01 20:30:06.700977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:32:11.789 [2024-10-01 20:30:06.700983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:11.789 [2024-10-01 20:30:06.701742] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:32:11.789 [2024-10-01 20:30:06.704651] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2342.273 ms, result 0 00:32:11.789 [2024-10-01 20:30:06.705258] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:32:11.789 { 00:32:11.789 "name": "ftl0", 00:32:11.789 "uuid": "bc42671f-8b04-4386-a6b4-5ad0168aad47" 00:32:11.789 } 00:32:11.789 20:30:06 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:32:11.789 20:30:06 ftl.ftl_trim -- common/autotest_common.sh@899 -- # local bdev_name=ftl0 00:32:11.790 20:30:06 ftl.ftl_trim -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:32:11.790 20:30:06 ftl.ftl_trim -- common/autotest_common.sh@901 -- # local i 00:32:11.790 20:30:06 ftl.ftl_trim -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:32:11.790 20:30:06 ftl.ftl_trim -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:32:11.790 20:30:06 ftl.ftl_trim -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:32:11.790 20:30:06 ftl.ftl_trim -- 
common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:32:12.131 [ 00:32:12.131 { 00:32:12.131 "name": "ftl0", 00:32:12.131 "aliases": [ 00:32:12.131 "bc42671f-8b04-4386-a6b4-5ad0168aad47" 00:32:12.131 ], 00:32:12.131 "product_name": "FTL disk", 00:32:12.131 "block_size": 4096, 00:32:12.131 "num_blocks": 23592960, 00:32:12.131 "uuid": "bc42671f-8b04-4386-a6b4-5ad0168aad47", 00:32:12.131 "assigned_rate_limits": { 00:32:12.131 "rw_ios_per_sec": 0, 00:32:12.131 "rw_mbytes_per_sec": 0, 00:32:12.131 "r_mbytes_per_sec": 0, 00:32:12.131 "w_mbytes_per_sec": 0 00:32:12.131 }, 00:32:12.131 "claimed": false, 00:32:12.131 "zoned": false, 00:32:12.131 "supported_io_types": { 00:32:12.131 "read": true, 00:32:12.131 "write": true, 00:32:12.131 "unmap": true, 00:32:12.131 "flush": true, 00:32:12.131 "reset": false, 00:32:12.131 "nvme_admin": false, 00:32:12.131 "nvme_io": false, 00:32:12.131 "nvme_io_md": false, 00:32:12.131 "write_zeroes": true, 00:32:12.131 "zcopy": false, 00:32:12.131 "get_zone_info": false, 00:32:12.131 "zone_management": false, 00:32:12.131 "zone_append": false, 00:32:12.131 "compare": false, 00:32:12.131 "compare_and_write": false, 00:32:12.131 "abort": false, 00:32:12.131 "seek_hole": false, 00:32:12.131 "seek_data": false, 00:32:12.131 "copy": false, 00:32:12.131 "nvme_iov_md": false 00:32:12.131 }, 00:32:12.131 "driver_specific": { 00:32:12.131 "ftl": { 00:32:12.131 "base_bdev": "60492f8d-cf56-4110-8792-b800448ecabc", 00:32:12.131 "cache": "nvc0n1p0" 00:32:12.131 } 00:32:12.131 } 00:32:12.131 } 00:32:12.131 ] 00:32:12.131 20:30:07 ftl.ftl_trim -- common/autotest_common.sh@907 -- # return 0 00:32:12.131 20:30:07 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:32:12.131 20:30:07 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:32:12.389 20:30:07 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:32:12.389 20:30:07 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:32:12.389 20:30:07 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:32:12.389 { 00:32:12.389 "name": "ftl0", 00:32:12.389 "aliases": [ 00:32:12.389 "bc42671f-8b04-4386-a6b4-5ad0168aad47" 00:32:12.389 ], 00:32:12.389 "product_name": "FTL disk", 00:32:12.389 "block_size": 4096, 00:32:12.389 "num_blocks": 23592960, 00:32:12.389 "uuid": "bc42671f-8b04-4386-a6b4-5ad0168aad47", 00:32:12.389 "assigned_rate_limits": { 00:32:12.389 "rw_ios_per_sec": 0, 00:32:12.389 "rw_mbytes_per_sec": 0, 00:32:12.389 "r_mbytes_per_sec": 0, 00:32:12.389 "w_mbytes_per_sec": 0 00:32:12.389 }, 00:32:12.389 "claimed": false, 00:32:12.389 "zoned": false, 00:32:12.389 "supported_io_types": { 00:32:12.389 "read": true, 00:32:12.389 "write": true, 00:32:12.389 "unmap": true, 00:32:12.389 "flush": true, 00:32:12.389 "reset": false, 00:32:12.389 "nvme_admin": false, 00:32:12.389 "nvme_io": false, 00:32:12.389 "nvme_io_md": false, 00:32:12.389 "write_zeroes": true, 00:32:12.389 "zcopy": false, 00:32:12.389 "get_zone_info": false, 00:32:12.389 "zone_management": false, 00:32:12.389 "zone_append": false, 00:32:12.389 "compare": false, 00:32:12.389 "compare_and_write": false, 00:32:12.389 "abort": false, 00:32:12.389 "seek_hole": false, 00:32:12.389 "seek_data": false, 00:32:12.389 "copy": false, 00:32:12.389 "nvme_iov_md": false 00:32:12.389 }, 00:32:12.389 "driver_specific": { 00:32:12.389 "ftl": { 00:32:12.389 "base_bdev": "60492f8d-cf56-4110-8792-b800448ecabc", 
00:32:12.389 "cache": "nvc0n1p0" 00:32:12.389 } 00:32:12.389 } 00:32:12.389 } 00:32:12.389 ]' 00:32:12.389 20:30:07 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:32:12.389 20:30:07 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:32:12.389 20:30:07 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:32:12.695 [2024-10-01 20:30:07.786876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:12.695 [2024-10-01 20:30:07.786924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:32:12.695 [2024-10-01 20:30:07.786935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:32:12.695 [2024-10-01 20:30:07.786943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:12.695 [2024-10-01 20:30:07.786970] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:32:12.695 [2024-10-01 20:30:07.789141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:12.695 [2024-10-01 20:30:07.789173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:32:12.695 [2024-10-01 20:30:07.789187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.152 ms 00:32:12.695 [2024-10-01 20:30:07.789195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:12.695 [2024-10-01 20:30:07.789620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:12.695 [2024-10-01 20:30:07.789632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:32:12.695 [2024-10-01 20:30:07.789641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.384 ms 00:32:12.695 [2024-10-01 20:30:07.789647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:12.695 [2024-10-01 20:30:07.792503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:12.695 [2024-10-01 20:30:07.792523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:32:12.695 [2024-10-01 20:30:07.792533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.831 ms 00:32:12.695 [2024-10-01 20:30:07.792539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:12.695 [2024-10-01 20:30:07.798432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:12.695 [2024-10-01 20:30:07.798475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:32:12.695 [2024-10-01 20:30:07.798487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.842 ms 00:32:12.695 [2024-10-01 20:30:07.798495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:12.695 [2024-10-01 20:30:07.818729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:12.695 [2024-10-01 20:30:07.818776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:32:12.695 [2024-10-01 20:30:07.818791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.132 ms 00:32:12.695 [2024-10-01 20:30:07.818798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:12.695 [2024-10-01 20:30:07.832068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:12.695 [2024-10-01 20:30:07.832118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:32:12.695 [2024-10-01 20:30:07.832131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 13.180 ms 00:32:12.695 [2024-10-01 20:30:07.832139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:12.695 [2024-10-01 20:30:07.832345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:12.695 [2024-10-01 20:30:07.832357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:32:12.695 [2024-10-01 20:30:07.832366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.105 ms 00:32:12.695 [2024-10-01 20:30:07.832374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:12.695 [2024-10-01 20:30:07.853216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:12.695 [2024-10-01 20:30:07.853269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:32:12.695 [2024-10-01 20:30:07.853282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.814 ms 00:32:12.695 [2024-10-01 20:30:07.853289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:12.695 [2024-10-01 20:30:07.872859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:12.695 [2024-10-01 20:30:07.872905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:32:12.695 [2024-10-01 20:30:07.872920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.469 ms 00:32:12.695 [2024-10-01 20:30:07.872928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:12.955 [2024-10-01 20:30:07.892236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:12.955 [2024-10-01 20:30:07.892284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:32:12.955 [2024-10-01 20:30:07.892297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.227 ms 00:32:12.955 [2024-10-01 20:30:07.892303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:12.955 [2024-10-01 20:30:07.912147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:12.955 [2024-10-01 20:30:07.912193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:32:12.955 [2024-10-01 20:30:07.912207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.723 ms 00:32:12.955 [2024-10-01 20:30:07.912214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:12.955 [2024-10-01 20:30:07.912302] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:32:12.955 [2024-10-01 20:30:07.912319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:32:12.955 [2024-10-01 20:30:07.912328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:32:12.955 [2024-10-01 20:30:07.912335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:32:12.955 [2024-10-01 20:30:07.912342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:32:12.955 [2024-10-01 20:30:07.912349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:32:12.955 [2024-10-01 20:30:07.912359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:32:12.955 [2024-10-01 20:30:07.912365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:32:12.955 [2024-10-01 20:30:07.912372] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:32:12.955 [2024-10-01 20:30:07.912378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:32:12.955 [2024-10-01 20:30:07.912386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:32:12.955 [2024-10-01 20:30:07.912392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:32:12.955 [2024-10-01 20:30:07.912399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:32:12.955 [2024-10-01 20:30:07.912404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:32:12.955 [2024-10-01 20:30:07.912412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:32:12.955 [2024-10-01 20:30:07.912417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:32:12.955 [2024-10-01 20:30:07.912424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:32:12.955 [2024-10-01 20:30:07.912430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:32:12.955 [2024-10-01 20:30:07.912437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:32:12.955 [2024-10-01 20:30:07.912443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:32:12.955 [2024-10-01 20:30:07.912452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:32:12.955 [2024-10-01 20:30:07.912458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:32:12.955 [2024-10-01 20:30:07.912467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:32:12.955 [2024-10-01 20:30:07.912473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:32:12.955 [2024-10-01 20:30:07.912480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:32:12.955 [2024-10-01 20:30:07.912486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:32:12.955 [2024-10-01 20:30:07.912506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:32:12.955 [2024-10-01 20:30:07.912512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:32:12.955 [2024-10-01 20:30:07.912519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:32:12.955 [2024-10-01 20:30:07.912526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:32:12.955 [2024-10-01 20:30:07.912533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:32:12.955 [2024-10-01 20:30:07.912539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:32:12.955 [2024-10-01 20:30:07.912547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:32:12.955 
[2024-10-01 20:30:07.912553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:32:12.955 [2024-10-01 20:30:07.912560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:32:12.955 [2024-10-01 20:30:07.912566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:32:12.955 [2024-10-01 20:30:07.912574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:32:12.955 [2024-10-01 20:30:07.912580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:32:12.955 [2024-10-01 20:30:07.912589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:32:12.955 [2024-10-01 20:30:07.912595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:32:12.955 [2024-10-01 20:30:07.912603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:32:12.955 [2024-10-01 20:30:07.912609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:32:12.955 [2024-10-01 20:30:07.912616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:32:12.955 [2024-10-01 20:30:07.912622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:32:12.955 [2024-10-01 20:30:07.912631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:32:12.955 [2024-10-01 20:30:07.912637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:32:12.955 [2024-10-01 20:30:07.912644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:32:12.955 [2024-10-01 20:30:07.912650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:32:12.955 [2024-10-01 20:30:07.912658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:32:12.955 [2024-10-01 20:30:07.912664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:32:12.955 [2024-10-01 20:30:07.912671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:32:12.955 [2024-10-01 20:30:07.912677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:32:12.955 [2024-10-01 20:30:07.912684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:32:12.955 [2024-10-01 20:30:07.912701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:32:12.955 [2024-10-01 20:30:07.912716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:32:12.955 [2024-10-01 20:30:07.912725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:32:12.955 [2024-10-01 20:30:07.912737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:32:12.955 [2024-10-01 20:30:07.912744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 
state: free 00:32:12.955 [2024-10-01 20:30:07.912762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:32:12.955 [2024-10-01 20:30:07.912768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:32:12.955 [2024-10-01 20:30:07.912775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:32:12.955 [2024-10-01 20:30:07.912782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:32:12.955 [2024-10-01 20:30:07.912790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:32:12.955 [2024-10-01 20:30:07.912796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:32:12.955 [2024-10-01 20:30:07.912804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:32:12.955 [2024-10-01 20:30:07.912811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:32:12.955 [2024-10-01 20:30:07.912818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:32:12.955 [2024-10-01 20:30:07.912824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:32:12.955 [2024-10-01 20:30:07.912831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:32:12.955 [2024-10-01 20:30:07.912837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:32:12.955 [2024-10-01 20:30:07.912847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:32:12.955 [2024-10-01 20:30:07.912853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:32:12.955 [2024-10-01 20:30:07.912861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:32:12.955 [2024-10-01 20:30:07.912867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:32:12.955 [2024-10-01 20:30:07.912874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:32:12.955 [2024-10-01 20:30:07.912880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:32:12.955 [2024-10-01 20:30:07.912887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:32:12.955 [2024-10-01 20:30:07.912893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:32:12.955 [2024-10-01 20:30:07.912900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:32:12.955 [2024-10-01 20:30:07.912906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:32:12.955 [2024-10-01 20:30:07.912914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:32:12.955 [2024-10-01 20:30:07.912919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:32:12.956 [2024-10-01 20:30:07.912927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 
0 / 261120 wr_cnt: 0 state: free 00:32:12.956 [2024-10-01 20:30:07.912932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:32:12.956 [2024-10-01 20:30:07.912939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:32:12.956 [2024-10-01 20:30:07.912945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:32:12.956 [2024-10-01 20:30:07.912954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:32:12.956 [2024-10-01 20:30:07.912960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:32:12.956 [2024-10-01 20:30:07.912967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:32:12.956 [2024-10-01 20:30:07.912972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:32:12.956 [2024-10-01 20:30:07.912979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:32:12.956 [2024-10-01 20:30:07.912985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:32:12.956 [2024-10-01 20:30:07.912992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:32:12.956 [2024-10-01 20:30:07.913003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:32:12.956 [2024-10-01 20:30:07.913011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:32:12.956 [2024-10-01 20:30:07.913017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:32:12.956 [2024-10-01 20:30:07.913024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:32:12.956 [2024-10-01 20:30:07.913030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:32:12.956 [2024-10-01 20:30:07.913038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:32:12.956 [2024-10-01 20:30:07.913044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:32:12.956 [2024-10-01 20:30:07.913052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:32:12.956 [2024-10-01 20:30:07.913066] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:32:12.956 [2024-10-01 20:30:07.913079] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: bc42671f-8b04-4386-a6b4-5ad0168aad47 00:32:12.956 [2024-10-01 20:30:07.913086] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:32:12.956 [2024-10-01 20:30:07.913093] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:32:12.956 [2024-10-01 20:30:07.913098] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:32:12.956 [2024-10-01 20:30:07.913106] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:32:12.956 [2024-10-01 20:30:07.913111] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:32:12.956 [2024-10-01 20:30:07.913118] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 
00:32:12.956 [2024-10-01 20:30:07.913124] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:32:12.956 [2024-10-01 20:30:07.913130] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:32:12.956 [2024-10-01 20:30:07.913135] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:32:12.956 [2024-10-01 20:30:07.913143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:12.956 [2024-10-01 20:30:07.913149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:32:12.956 [2024-10-01 20:30:07.913159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.843 ms 00:32:12.956 [2024-10-01 20:30:07.913165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:12.956 [2024-10-01 20:30:07.923733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:12.956 [2024-10-01 20:30:07.923776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:32:12.956 [2024-10-01 20:30:07.923792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.533 ms 00:32:12.956 [2024-10-01 20:30:07.923799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:12.956 [2024-10-01 20:30:07.924135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:12.956 [2024-10-01 20:30:07.924155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:32:12.956 [2024-10-01 20:30:07.924164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.260 ms 00:32:12.956 [2024-10-01 20:30:07.924170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:12.956 [2024-10-01 20:30:07.959797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:12.956 [2024-10-01 20:30:07.959848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:12.956 [2024-10-01 20:30:07.959860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:12.956 [2024-10-01 20:30:07.959867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:12.956 [2024-10-01 20:30:07.959974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:12.956 [2024-10-01 20:30:07.959984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:12.956 [2024-10-01 20:30:07.959992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:12.956 [2024-10-01 20:30:07.959999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:12.956 [2024-10-01 20:30:07.960052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:12.956 [2024-10-01 20:30:07.960059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:12.956 [2024-10-01 20:30:07.960068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:12.956 [2024-10-01 20:30:07.960075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:12.956 [2024-10-01 20:30:07.960103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:12.956 [2024-10-01 20:30:07.960109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:12.956 [2024-10-01 20:30:07.960118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:12.956 [2024-10-01 20:30:07.960124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:12.956 [2024-10-01 20:30:08.026955] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:12.956 [2024-10-01 20:30:08.027004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:12.956 [2024-10-01 20:30:08.027016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:12.956 [2024-10-01 20:30:08.027024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:12.956 [2024-10-01 20:30:08.079424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:12.956 [2024-10-01 20:30:08.079476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:12.956 [2024-10-01 20:30:08.079487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:12.956 [2024-10-01 20:30:08.079493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:12.956 [2024-10-01 20:30:08.079570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:12.956 [2024-10-01 20:30:08.079578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:12.956 [2024-10-01 20:30:08.079589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:12.956 [2024-10-01 20:30:08.079595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:12.956 [2024-10-01 20:30:08.079642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:12.956 [2024-10-01 20:30:08.079648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:12.956 [2024-10-01 20:30:08.079667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:12.956 [2024-10-01 20:30:08.079674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:12.956 [2024-10-01 20:30:08.079798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:12.956 [2024-10-01 20:30:08.079809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:12.956 [2024-10-01 20:30:08.079817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:12.956 [2024-10-01 20:30:08.079823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:12.956 [2024-10-01 20:30:08.079871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:12.956 [2024-10-01 20:30:08.079878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:32:12.956 [2024-10-01 20:30:08.079886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:12.956 [2024-10-01 20:30:08.079892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:12.956 [2024-10-01 20:30:08.079934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:12.956 [2024-10-01 20:30:08.079941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:12.956 [2024-10-01 20:30:08.079950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:12.956 [2024-10-01 20:30:08.079956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:12.956 [2024-10-01 20:30:08.080007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:12.956 [2024-10-01 20:30:08.080014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:12.956 [2024-10-01 20:30:08.080022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:12.956 [2024-10-01 20:30:08.080031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:32:12.956 [2024-10-01 20:30:08.080173] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 293.289 ms, result 0 00:32:12.956 true 00:32:12.956 20:30:08 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 74485 00:32:12.956 20:30:08 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 74485 ']' 00:32:12.956 20:30:08 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 74485 00:32:12.956 20:30:08 ftl.ftl_trim -- common/autotest_common.sh@955 -- # uname 00:32:12.956 20:30:08 ftl.ftl_trim -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:12.956 20:30:08 ftl.ftl_trim -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74485 00:32:12.956 killing process with pid 74485 00:32:12.956 20:30:08 ftl.ftl_trim -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:32:12.956 20:30:08 ftl.ftl_trim -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:32:12.956 20:30:08 ftl.ftl_trim -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74485' 00:32:12.956 20:30:08 ftl.ftl_trim -- common/autotest_common.sh@969 -- # kill 74485 00:32:12.956 20:30:08 ftl.ftl_trim -- common/autotest_common.sh@974 -- # wait 74485 00:32:25.156 20:30:18 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:32:25.156 65536+0 records in 00:32:25.156 65536+0 records out 00:32:25.156 268435456 bytes (268 MB, 256 MiB) copied, 1.10976 s, 242 MB/s 00:32:25.156 20:30:19 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:32:25.156 [2024-10-01 20:30:19.989460] Starting SPDK v25.01-pre git sha1 0c2005fb5 / DPDK 24.03.0 initialization... 
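The dd figures above are self-consistent: 65536 blocks of 4 KiB are 268435456 bytes (exactly 256 MiB), and 268435456 B / 1.10976 s comes out at roughly 242 MB/s in the decimal megabytes dd reports. A minimal sketch re-deriving both numbers from the logged values:

    # Values copied from the dd output above.
    bytes=$(( 65536 * 4096 ))   # 268435456 bytes = 256 MiB
    awk -v b="$bytes" -v s=1.10976 \
        'BEGIN { printf "%d bytes, %.0f MB/s\n", b, b / s / 1e6 }'

spdk_dd then replays that 256 MiB random pattern into the ftl0 bdev, driven by the ftl.json config presumably assembled from the save_subsystem_config output earlier in the script; its startup log continues below.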
00:32:25.156 [2024-10-01 20:30:19.989949] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74669 ] 00:32:25.156 [2024-10-01 20:30:20.141843] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:25.156 [2024-10-01 20:30:20.341381] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:32:25.720 [2024-10-01 20:30:20.790792] bdev.c:8310:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:32:25.720 [2024-10-01 20:30:20.790868] bdev.c:8310:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:32:25.978 [2024-10-01 20:30:20.945735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:25.978 [2024-10-01 20:30:20.945798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:32:25.978 [2024-10-01 20:30:20.945814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:32:25.978 [2024-10-01 20:30:20.945823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:25.978 [2024-10-01 20:30:20.948596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:25.978 [2024-10-01 20:30:20.948864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:25.978 [2024-10-01 20:30:20.948884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.752 ms 00:32:25.978 [2024-10-01 20:30:20.948898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:25.978 [2024-10-01 20:30:20.949064] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:32:25.978 [2024-10-01 20:30:20.949837] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:32:25.978 [2024-10-01 20:30:20.949862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:25.978 [2024-10-01 20:30:20.949873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:25.978 [2024-10-01 20:30:20.949882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.812 ms 00:32:25.978 [2024-10-01 20:30:20.949890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:25.978 [2024-10-01 20:30:20.951041] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:32:25.978 [2024-10-01 20:30:20.963959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:25.978 [2024-10-01 20:30:20.964204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:32:25.978 [2024-10-01 20:30:20.964225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.917 ms 00:32:25.978 [2024-10-01 20:30:20.964234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:25.978 [2024-10-01 20:30:20.964362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:25.978 [2024-10-01 20:30:20.964373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:32:25.978 [2024-10-01 20:30:20.964386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:32:25.978 [2024-10-01 20:30:20.964393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:25.978 [2024-10-01 20:30:20.969995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:32:25.978 [2024-10-01 20:30:20.970038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:25.978 [2024-10-01 20:30:20.970049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.548 ms 00:32:25.978 [2024-10-01 20:30:20.970057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:25.978 [2024-10-01 20:30:20.970167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:25.978 [2024-10-01 20:30:20.970179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:25.978 [2024-10-01 20:30:20.970187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:32:25.978 [2024-10-01 20:30:20.970194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:25.978 [2024-10-01 20:30:20.970220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:25.978 [2024-10-01 20:30:20.970228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:32:25.978 [2024-10-01 20:30:20.970235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:32:25.978 [2024-10-01 20:30:20.970243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:25.978 [2024-10-01 20:30:20.970265] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:32:25.978 [2024-10-01 20:30:20.973816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:25.978 [2024-10-01 20:30:20.973852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:25.978 [2024-10-01 20:30:20.973863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.556 ms 00:32:25.978 [2024-10-01 20:30:20.973870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:25.978 [2024-10-01 20:30:20.973917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:25.978 [2024-10-01 20:30:20.973929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:32:25.978 [2024-10-01 20:30:20.973938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:32:25.978 [2024-10-01 20:30:20.973944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:25.978 [2024-10-01 20:30:20.973963] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:32:25.978 [2024-10-01 20:30:20.973980] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:32:25.978 [2024-10-01 20:30:20.974014] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:32:25.978 [2024-10-01 20:30:20.974028] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:32:25.978 [2024-10-01 20:30:20.974132] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:32:25.978 [2024-10-01 20:30:20.974142] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:32:25.978 [2024-10-01 20:30:20.974152] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:32:25.978 [2024-10-01 20:30:20.974162] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:32:25.978 [2024-10-01 20:30:20.974171] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:32:25.978 [2024-10-01 20:30:20.974179] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:32:25.978 [2024-10-01 20:30:20.974186] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:32:25.978 [2024-10-01 20:30:20.974193] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:32:25.978 [2024-10-01 20:30:20.974200] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:32:25.978 [2024-10-01 20:30:20.974208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:25.978 [2024-10-01 20:30:20.974217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:32:25.978 [2024-10-01 20:30:20.974226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.247 ms 00:32:25.978 [2024-10-01 20:30:20.974232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:25.978 [2024-10-01 20:30:20.974338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:25.978 [2024-10-01 20:30:20.974346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:32:25.978 [2024-10-01 20:30:20.974354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:32:25.978 [2024-10-01 20:30:20.974361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:25.978 [2024-10-01 20:30:20.974484] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:32:25.978 [2024-10-01 20:30:20.974495] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:32:25.978 [2024-10-01 20:30:20.974505] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:32:25.978 [2024-10-01 20:30:20.974512] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:25.978 [2024-10-01 20:30:20.974520] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:32:25.978 [2024-10-01 20:30:20.974527] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:32:25.978 [2024-10-01 20:30:20.974534] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:32:25.978 [2024-10-01 20:30:20.974541] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:32:25.978 [2024-10-01 20:30:20.974549] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:32:25.978 [2024-10-01 20:30:20.974555] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:32:25.978 [2024-10-01 20:30:20.974562] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:32:25.978 [2024-10-01 20:30:20.974574] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:32:25.978 [2024-10-01 20:30:20.974580] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:32:25.978 [2024-10-01 20:30:20.974586] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:32:25.978 [2024-10-01 20:30:20.974593] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:32:25.978 [2024-10-01 20:30:20.974601] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:25.979 [2024-10-01 20:30:20.974608] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:32:25.979 [2024-10-01 20:30:20.974614] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:32:25.979 [2024-10-01 20:30:20.974621] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:25.979 [2024-10-01 20:30:20.974627] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:32:25.979 [2024-10-01 20:30:20.974634] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:32:25.979 [2024-10-01 20:30:20.974640] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:25.979 [2024-10-01 20:30:20.974647] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:32:25.979 [2024-10-01 20:30:20.974653] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:32:25.979 [2024-10-01 20:30:20.974659] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:25.979 [2024-10-01 20:30:20.974666] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:32:25.979 [2024-10-01 20:30:20.974672] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:32:25.979 [2024-10-01 20:30:20.974678] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:25.979 [2024-10-01 20:30:20.974685] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:32:25.979 [2024-10-01 20:30:20.974710] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:32:25.979 [2024-10-01 20:30:20.974717] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:25.979 [2024-10-01 20:30:20.974724] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:32:25.979 [2024-10-01 20:30:20.974731] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:32:25.979 [2024-10-01 20:30:20.974737] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:32:25.979 [2024-10-01 20:30:20.974744] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:32:25.979 [2024-10-01 20:30:20.974750] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:32:25.979 [2024-10-01 20:30:20.974756] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:32:25.979 [2024-10-01 20:30:20.974763] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:32:25.979 [2024-10-01 20:30:20.974770] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:32:25.979 [2024-10-01 20:30:20.974776] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:25.979 [2024-10-01 20:30:20.974783] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:32:25.979 [2024-10-01 20:30:20.974790] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:32:25.979 [2024-10-01 20:30:20.974797] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:25.979 [2024-10-01 20:30:20.974803] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:32:25.979 [2024-10-01 20:30:20.974811] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:32:25.979 [2024-10-01 20:30:20.974818] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:32:25.979 [2024-10-01 20:30:20.974824] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:25.979 [2024-10-01 20:30:20.974833] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:32:25.979 [2024-10-01 20:30:20.974840] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:32:25.979 [2024-10-01 20:30:20.974846] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:32:25.979 
[2024-10-01 20:30:20.974853] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:32:25.979 [2024-10-01 20:30:20.974859] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:32:25.979 [2024-10-01 20:30:20.974865] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:32:25.979 [2024-10-01 20:30:20.974873] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:32:25.979 [2024-10-01 20:30:20.974887] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:25.979 [2024-10-01 20:30:20.974895] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:32:25.979 [2024-10-01 20:30:20.974902] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:32:25.979 [2024-10-01 20:30:20.974909] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:32:25.979 [2024-10-01 20:30:20.974916] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:32:25.979 [2024-10-01 20:30:20.974923] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:32:25.979 [2024-10-01 20:30:20.974930] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:32:25.979 [2024-10-01 20:30:20.974937] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:32:25.979 [2024-10-01 20:30:20.974944] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:32:25.979 [2024-10-01 20:30:20.974951] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:32:25.979 [2024-10-01 20:30:20.974958] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:32:25.979 [2024-10-01 20:30:20.974965] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:32:25.979 [2024-10-01 20:30:20.974972] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:32:25.979 [2024-10-01 20:30:20.974979] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:32:25.979 [2024-10-01 20:30:20.974986] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:32:25.979 [2024-10-01 20:30:20.974993] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:32:25.979 [2024-10-01 20:30:20.975002] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:25.979 [2024-10-01 20:30:20.975010] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:32:25.979 [2024-10-01 20:30:20.975017] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:32:25.979 [2024-10-01 20:30:20.975023] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:32:25.979 [2024-10-01 20:30:20.975030] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:32:25.979 [2024-10-01 20:30:20.975037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:25.979 [2024-10-01 20:30:20.975047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:32:25.979 [2024-10-01 20:30:20.975054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.621 ms 00:32:25.979 [2024-10-01 20:30:20.975061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:25.979 [2024-10-01 20:30:21.002160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:25.979 [2024-10-01 20:30:21.002217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:25.979 [2024-10-01 20:30:21.002231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.045 ms 00:32:25.979 [2024-10-01 20:30:21.002239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:25.979 [2024-10-01 20:30:21.002389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:25.979 [2024-10-01 20:30:21.002400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:32:25.979 [2024-10-01 20:30:21.002408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:32:25.979 [2024-10-01 20:30:21.002416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:25.979 [2024-10-01 20:30:21.033060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:25.979 [2024-10-01 20:30:21.033110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:25.979 [2024-10-01 20:30:21.033122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.621 ms 00:32:25.979 [2024-10-01 20:30:21.033130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:25.979 [2024-10-01 20:30:21.033218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:25.979 [2024-10-01 20:30:21.033227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:25.979 [2024-10-01 20:30:21.033236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:32:25.979 [2024-10-01 20:30:21.033244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:25.979 [2024-10-01 20:30:21.033578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:25.979 [2024-10-01 20:30:21.033594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:25.979 [2024-10-01 20:30:21.033602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.310 ms 00:32:25.979 [2024-10-01 20:30:21.033610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:25.979 [2024-10-01 20:30:21.033780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:25.979 [2024-10-01 20:30:21.033791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:25.979 [2024-10-01 20:30:21.033799] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.146 ms 00:32:25.979 [2024-10-01 20:30:21.033806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:25.979 [2024-10-01 20:30:21.046712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:25.979 [2024-10-01 20:30:21.046757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:25.979 [2024-10-01 20:30:21.046768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.885 ms 00:32:25.979 [2024-10-01 20:30:21.046775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:25.979 [2024-10-01 20:30:21.059185] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:32:25.979 [2024-10-01 20:30:21.059235] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:32:25.979 [2024-10-01 20:30:21.059252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:25.979 [2024-10-01 20:30:21.059260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:32:25.979 [2024-10-01 20:30:21.059270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.332 ms 00:32:25.979 [2024-10-01 20:30:21.059278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:25.979 [2024-10-01 20:30:21.084343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:25.979 [2024-10-01 20:30:21.084398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:32:25.979 [2024-10-01 20:30:21.084411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.959 ms 00:32:25.979 [2024-10-01 20:30:21.084428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:25.979 [2024-10-01 20:30:21.096747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:25.980 [2024-10-01 20:30:21.096806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:32:25.980 [2024-10-01 20:30:21.096819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.211 ms 00:32:25.980 [2024-10-01 20:30:21.096826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:25.980 [2024-10-01 20:30:21.108879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:25.980 [2024-10-01 20:30:21.108932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:32:25.980 [2024-10-01 20:30:21.108945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.952 ms 00:32:25.980 [2024-10-01 20:30:21.108953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:25.980 [2024-10-01 20:30:21.109635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:25.980 [2024-10-01 20:30:21.109654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:32:25.980 [2024-10-01 20:30:21.109665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.541 ms 00:32:25.980 [2024-10-01 20:30:21.109672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:25.980 [2024-10-01 20:30:21.167052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:25.980 [2024-10-01 20:30:21.167126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:32:25.980 [2024-10-01 20:30:21.167140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 57.343 ms
00:32:25.980 [2024-10-01 20:30:21.167147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:32:25.980 [2024-10-01 20:30:21.178346] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB
00:32:26.237 [2024-10-01 20:30:21.193458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:32:26.237 [2024-10-01 20:30:21.193515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P
00:32:26.237 [2024-10-01 20:30:21.193527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.179 ms
00:32:26.237 [2024-10-01 20:30:21.193535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:32:26.237 [2024-10-01 20:30:21.193627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:32:26.237 [2024-10-01 20:30:21.193637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P
00:32:26.237 [2024-10-01 20:30:21.193646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms
00:32:26.237 [2024-10-01 20:30:21.193654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:32:26.237 [2024-10-01 20:30:21.193732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:32:26.237 [2024-10-01 20:30:21.193742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization
00:32:26.237 [2024-10-01 20:30:21.193752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms
00:32:26.237 [2024-10-01 20:30:21.193771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:32:26.237 [2024-10-01 20:30:21.193796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:32:26.237 [2024-10-01 20:30:21.193804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller
00:32:26.237 [2024-10-01 20:30:21.193812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:32:26.237 [2024-10-01 20:30:21.193819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:32:26.237 [2024-10-01 20:30:21.193847] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
00:32:26.237 [2024-10-01 20:30:21.193856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:32:26.237 [2024-10-01 20:30:21.193864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup
00:32:26.237 [2024-10-01 20:30:21.193871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms
00:32:26.237 [2024-10-01 20:30:21.193880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:32:26.237 [2024-10-01 20:30:21.219609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:32:26.237 [2024-10-01 20:30:21.219666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state
00:32:26.237 [2024-10-01 20:30:21.219679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.707 ms
00:32:26.237 [2024-10-01 20:30:21.219688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:32:26.237 [2024-10-01 20:30:21.219832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:32:26.237 [2024-10-01 20:30:21.219843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:32:26.237 [2024-10-01 20:30:21.219856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms
00:32:26.237 [2024-10-01 20:30:21.219863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
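
Each FTL management step above is logged by mngt/ftl_mngt.c's trace_step as an Action / name / duration / status quadruplet, and the manager's summary just below reports the whole 'FTL startup' process at 274.902 ms. A quick way to sanity-check such a run is to tally the per-step durations; they should come to slightly less than the reported total, since the manager also spends time between steps. A minimal Python sketch, illustrative only and not part of the SPDK tree; the patterns assume the exact *NOTICE* format shown in this log:

#!/usr/bin/env python3
# Tally the per-step durations printed by trace_step above. Illustrative
# helper, not SPDK code; reads the console log on stdin.
import re
import sys

# On a flattened console dump a step name runs up to the next HH:MM:SS.mmm
# console timestamp; on a properly line-broken log it ends the line.
NAME_RE = re.compile(r"name: (.+?)(?:\s+\d{2}:\d{2}:\d{2}\.\d{3}|\s*$)", re.M)
# trace_step prints "duration: X ms"; finish_msg uses "duration =", so the
# summary line is deliberately not matched here.
DUR_RE = re.compile(r"duration: ([0-9.]+) ms")

def tally(text):
    steps = list(zip(NAME_RE.findall(text), (float(d) for d in DUR_RE.findall(text))))
    for name, ms in steps:
        print(f"{ms:10.3f} ms  {name}")
    print(f"{sum(ms for _, ms in steps):10.3f} ms  sum of steps")

if __name__ == "__main__":
    tally(sys.stdin.read())

The same kind of spot-check applies to the copy progress reported below: the five sampled rates (40, 43, 46, 44 and 42 MBps) average to 43 MBps, matching the log's own "average 43 MBps" summary.
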
00:32:26.237 [2024-10-01 20:30:21.220939] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:32:26.237 [2024-10-01 20:30:21.224936] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 274.902 ms, result 0 00:32:26.237 [2024-10-01 20:30:21.225669] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:32:26.237 [2024-10-01 20:30:21.239421] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:32:32.000  Copying: 40/256 [MB] (40 MBps) Copying: 84/256 [MB] (43 MBps) Copying: 130/256 [MB] (46 MBps) Copying: 174/256 [MB] (44 MBps) Copying: 216/256 [MB] (42 MBps) Copying: 256/256 [MB] (average 43 MBps)[2024-10-01 20:30:27.153397] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:32:32.000 [2024-10-01 20:30:27.162608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:32.000 [2024-10-01 20:30:27.162643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:32:32.000 [2024-10-01 20:30:27.162656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:32:32.000 [2024-10-01 20:30:27.162664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:32.000 [2024-10-01 20:30:27.162684] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:32:32.000 [2024-10-01 20:30:27.165243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:32.000 [2024-10-01 20:30:27.165271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:32:32.000 [2024-10-01 20:30:27.165281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.536 ms 00:32:32.000 [2024-10-01 20:30:27.165290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:32.000 [2024-10-01 20:30:27.166745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:32.000 [2024-10-01 20:30:27.166772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:32:32.000 [2024-10-01 20:30:27.166786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.434 ms 00:32:32.000 [2024-10-01 20:30:27.166794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:32.000 [2024-10-01 20:30:27.173577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:32.000 [2024-10-01 20:30:27.173603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:32:32.000 [2024-10-01 20:30:27.173612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.767 ms 00:32:32.000 [2024-10-01 20:30:27.173620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:32.000 [2024-10-01 20:30:27.180545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:32.000 [2024-10-01 20:30:27.180572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:32:32.000 [2024-10-01 20:30:27.180582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.885 ms 00:32:32.000 [2024-10-01 20:30:27.180596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:32.000 [2024-10-01 20:30:27.204470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:32.000 [2024-10-01 20:30:27.204504] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:32:32.000 [2024-10-01 20:30:27.204517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.832 ms 00:32:32.000 [2024-10-01 20:30:27.204525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:32.258 [2024-10-01 20:30:27.218864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:32.258 [2024-10-01 20:30:27.218896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:32:32.258 [2024-10-01 20:30:27.218908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.303 ms 00:32:32.258 [2024-10-01 20:30:27.218916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:32.258 [2024-10-01 20:30:27.219052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:32.258 [2024-10-01 20:30:27.219061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:32:32.258 [2024-10-01 20:30:27.219070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:32:32.258 [2024-10-01 20:30:27.219077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:32.258 [2024-10-01 20:30:27.242896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:32.258 [2024-10-01 20:30:27.242943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:32:32.258 [2024-10-01 20:30:27.242954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.802 ms 00:32:32.258 [2024-10-01 20:30:27.242962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:32.258 [2024-10-01 20:30:27.266135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:32.258 [2024-10-01 20:30:27.266175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:32:32.258 [2024-10-01 20:30:27.266187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.132 ms 00:32:32.258 [2024-10-01 20:30:27.266195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:32.258 [2024-10-01 20:30:27.289675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:32.258 [2024-10-01 20:30:27.289726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:32:32.258 [2024-10-01 20:30:27.289738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.435 ms 00:32:32.258 [2024-10-01 20:30:27.289745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:32.258 [2024-10-01 20:30:27.314520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:32.258 [2024-10-01 20:30:27.314574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:32:32.258 [2024-10-01 20:30:27.314587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.692 ms 00:32:32.258 [2024-10-01 20:30:27.314595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:32.258 [2024-10-01 20:30:27.314652] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:32:32.258 [2024-10-01 20:30:27.314667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:32:32.258 [2024-10-01 20:30:27.314678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:32:32.258 [2024-10-01 20:30:27.314686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:32:32.258 
[2024-10-01 20:30:27.314703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:32:32.258 [2024-10-01 20:30:27.314711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:32:32.258 [2024-10-01 20:30:27.314719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:32:32.258 [2024-10-01 20:30:27.314727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:32:32.258 [2024-10-01 20:30:27.314735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:32:32.258 [2024-10-01 20:30:27.314743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:32:32.258 [2024-10-01 20:30:27.314750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:32:32.258 [2024-10-01 20:30:27.314758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:32:32.258 [2024-10-01 20:30:27.314766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:32:32.258 [2024-10-01 20:30:27.314773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:32:32.259 [2024-10-01 20:30:27.314781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:32:32.259 [2024-10-01 20:30:27.314788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:32:32.259 [2024-10-01 20:30:27.314796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:32:32.259 [2024-10-01 20:30:27.314803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:32:32.259 [2024-10-01 20:30:27.314811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:32:32.259 [2024-10-01 20:30:27.314818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:32:32.259 [2024-10-01 20:30:27.314825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:32:32.259 [2024-10-01 20:30:27.314833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:32:32.259 [2024-10-01 20:30:27.314840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:32:32.259 [2024-10-01 20:30:27.314848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:32:32.259 [2024-10-01 20:30:27.314855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:32:32.259 [2024-10-01 20:30:27.314862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:32:32.259 [2024-10-01 20:30:27.314869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:32:32.259 [2024-10-01 20:30:27.314879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:32:32.259 [2024-10-01 20:30:27.314887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: 
free 00:32:32.259 [2024-10-01 20:30:27.314894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:32:32.259 [2024-10-01 20:30:27.314903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:32:32.259 [2024-10-01 20:30:27.314910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:32:32.259 [2024-10-01 20:30:27.314918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:32:32.259 [2024-10-01 20:30:27.314925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:32:32.259 [2024-10-01 20:30:27.314932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:32:32.259 [2024-10-01 20:30:27.314940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:32:32.259 [2024-10-01 20:30:27.314947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:32:32.259 [2024-10-01 20:30:27.314954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:32:32.259 [2024-10-01 20:30:27.314962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:32:32.259 [2024-10-01 20:30:27.314969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:32:32.259 [2024-10-01 20:30:27.314976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:32:32.259 [2024-10-01 20:30:27.314984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:32:32.259 [2024-10-01 20:30:27.314991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:32:32.259 [2024-10-01 20:30:27.314999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:32:32.259 [2024-10-01 20:30:27.315006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:32:32.259 [2024-10-01 20:30:27.315013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:32:32.259 [2024-10-01 20:30:27.315020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:32:32.259 [2024-10-01 20:30:27.315028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:32:32.259 [2024-10-01 20:30:27.315035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:32:32.259 [2024-10-01 20:30:27.315043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:32:32.259 [2024-10-01 20:30:27.315050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:32:32.259 [2024-10-01 20:30:27.315057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:32:32.259 [2024-10-01 20:30:27.315064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:32:32.259 [2024-10-01 20:30:27.315071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 
261120 wr_cnt: 0 state: free 00:32:32.259 [2024-10-01 20:30:27.315079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:32:32.259 [2024-10-01 20:30:27.315086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:32:32.259 [2024-10-01 20:30:27.315093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:32:32.259 [2024-10-01 20:30:27.315100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:32:32.259 [2024-10-01 20:30:27.315108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:32:32.259 [2024-10-01 20:30:27.315115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:32:32.259 [2024-10-01 20:30:27.315122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:32:32.259 [2024-10-01 20:30:27.315129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:32:32.259 [2024-10-01 20:30:27.315137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:32:32.259 [2024-10-01 20:30:27.315145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:32:32.259 [2024-10-01 20:30:27.315152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:32:32.259 [2024-10-01 20:30:27.315160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:32:32.259 [2024-10-01 20:30:27.315167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:32:32.259 [2024-10-01 20:30:27.315174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:32:32.259 [2024-10-01 20:30:27.315182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:32:32.259 [2024-10-01 20:30:27.315189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:32:32.259 [2024-10-01 20:30:27.315196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:32:32.259 [2024-10-01 20:30:27.315204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:32:32.259 [2024-10-01 20:30:27.315211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:32:32.259 [2024-10-01 20:30:27.315218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:32:32.259 [2024-10-01 20:30:27.315226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:32:32.259 [2024-10-01 20:30:27.315233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:32:32.259 [2024-10-01 20:30:27.315240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:32:32.259 [2024-10-01 20:30:27.315247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:32:32.259 [2024-10-01 20:30:27.315255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:32:32.259 [2024-10-01 20:30:27.315262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:32:32.259 [2024-10-01 20:30:27.315270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:32:32.259 [2024-10-01 20:30:27.315277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:32:32.259 [2024-10-01 20:30:27.315284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:32:32.259 [2024-10-01 20:30:27.315291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:32:32.259 [2024-10-01 20:30:27.315299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:32:32.259 [2024-10-01 20:30:27.315306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:32:32.259 [2024-10-01 20:30:27.315313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:32:32.259 [2024-10-01 20:30:27.315320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:32:32.259 [2024-10-01 20:30:27.315328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:32:32.259 [2024-10-01 20:30:27.315335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:32:32.259 [2024-10-01 20:30:27.315342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:32:32.259 [2024-10-01 20:30:27.315349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:32:32.259 [2024-10-01 20:30:27.315356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:32:32.259 [2024-10-01 20:30:27.315363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:32:32.259 [2024-10-01 20:30:27.315372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:32:32.259 [2024-10-01 20:30:27.315379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:32:32.259 [2024-10-01 20:30:27.315387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:32:32.259 [2024-10-01 20:30:27.315394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:32:32.259 [2024-10-01 20:30:27.315401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:32:32.259 [2024-10-01 20:30:27.315408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:32:32.259 [2024-10-01 20:30:27.315416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:32:32.259 [2024-10-01 20:30:27.315442] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:32:32.259 [2024-10-01 20:30:27.315450] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: bc42671f-8b04-4386-a6b4-5ad0168aad47 00:32:32.259 [2024-10-01 20:30:27.315458] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 
total valid LBAs: 0 00:32:32.259 [2024-10-01 20:30:27.315465] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:32:32.260 [2024-10-01 20:30:27.315473] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:32:32.260 [2024-10-01 20:30:27.315480] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:32:32.260 [2024-10-01 20:30:27.315490] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:32:32.260 [2024-10-01 20:30:27.315498] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:32:32.260 [2024-10-01 20:30:27.315505] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:32:32.260 [2024-10-01 20:30:27.315511] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:32:32.260 [2024-10-01 20:30:27.315517] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:32:32.260 [2024-10-01 20:30:27.315524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:32.260 [2024-10-01 20:30:27.315532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:32:32.260 [2024-10-01 20:30:27.315540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.874 ms 00:32:32.260 [2024-10-01 20:30:27.315548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:32.260 [2024-10-01 20:30:27.328282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:32.260 [2024-10-01 20:30:27.328323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:32:32.260 [2024-10-01 20:30:27.328342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.694 ms 00:32:32.260 [2024-10-01 20:30:27.328350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:32.260 [2024-10-01 20:30:27.328733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:32.260 [2024-10-01 20:30:27.328747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:32:32.260 [2024-10-01 20:30:27.328756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.335 ms 00:32:32.260 [2024-10-01 20:30:27.328764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:32.260 [2024-10-01 20:30:27.359185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:32.260 [2024-10-01 20:30:27.359238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:32.260 [2024-10-01 20:30:27.359249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:32.260 [2024-10-01 20:30:27.359257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:32.260 [2024-10-01 20:30:27.359344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:32.260 [2024-10-01 20:30:27.359352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:32.260 [2024-10-01 20:30:27.359359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:32.260 [2024-10-01 20:30:27.359366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:32.260 [2024-10-01 20:30:27.359409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:32.260 [2024-10-01 20:30:27.359418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:32.260 [2024-10-01 20:30:27.359429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:32.260 [2024-10-01 
20:30:27.359437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:32.260 [2024-10-01 20:30:27.359454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:32.260 [2024-10-01 20:30:27.359462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:32.260 [2024-10-01 20:30:27.359469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:32.260 [2024-10-01 20:30:27.359476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:32.260 [2024-10-01 20:30:27.437937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:32.260 [2024-10-01 20:30:27.437988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:32.260 [2024-10-01 20:30:27.438007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:32.260 [2024-10-01 20:30:27.438015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:32.517 [2024-10-01 20:30:27.501349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:32.517 [2024-10-01 20:30:27.501396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:32.517 [2024-10-01 20:30:27.501407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:32.517 [2024-10-01 20:30:27.501415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:32.517 [2024-10-01 20:30:27.501471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:32.517 [2024-10-01 20:30:27.501480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:32.517 [2024-10-01 20:30:27.501489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:32.517 [2024-10-01 20:30:27.501501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:32.517 [2024-10-01 20:30:27.501529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:32.517 [2024-10-01 20:30:27.501537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:32.517 [2024-10-01 20:30:27.501544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:32.517 [2024-10-01 20:30:27.501552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:32.517 [2024-10-01 20:30:27.501640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:32.517 [2024-10-01 20:30:27.501650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:32.517 [2024-10-01 20:30:27.501658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:32.517 [2024-10-01 20:30:27.501665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:32.517 [2024-10-01 20:30:27.501717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:32.517 [2024-10-01 20:30:27.501727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:32:32.517 [2024-10-01 20:30:27.501738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:32.517 [2024-10-01 20:30:27.501750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:32.517 [2024-10-01 20:30:27.501796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:32.517 [2024-10-01 20:30:27.501810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:32.517 [2024-10-01 20:30:27.501820] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:32:32.517 [2024-10-01 20:30:27.501828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:32:32.517 [2024-10-01 20:30:27.501874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:32:32.517 [2024-10-01 20:30:27.501884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:32:32.517 [2024-10-01 20:30:27.501891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:32:32.517 [2024-10-01 20:30:27.501898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:32:32.517 [2024-10-01 20:30:27.502030] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 339.418 ms, result 0
00:32:33.888
00:32:33.888
00:32:33.888 20:30:29 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=74766
00:32:33.888 20:30:29 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 74766
00:32:33.888 20:30:29 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init
00:32:33.888 20:30:29 ftl.ftl_trim -- common/autotest_common.sh@831 -- # '[' -z 74766 ']'
00:32:33.888 20:30:29 ftl.ftl_trim -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:32:33.888 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:32:33.888 20:30:29 ftl.ftl_trim -- common/autotest_common.sh@836 -- # local max_retries=100
00:32:33.888 20:30:29 ftl.ftl_trim -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:32:33.888 20:30:29 ftl.ftl_trim -- common/autotest_common.sh@840 -- # xtrace_disable
00:32:33.888 20:30:29 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x
00:32:34.144 [2024-10-01 20:30:29.128670] Starting SPDK v25.01-pre git sha1 0c2005fb5 / DPDK 24.03.0 initialization...
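
After the 'FTL shutdown' management process completes, ftl/trim.sh starts a fresh spdk_tgt with -L ftl_init and, per the xtrace above, waits for it with waitforlisten (note the local rpc_addr=/var/tmp/spdk.sock and max_retries=100). A rough Python equivalent of such a polling helper, offered as an illustrative stand-in rather than the shell implementation in common/autotest_common.sh (a real helper would typically also bail out early if the target process exits; that detail is omitted here):

#!/usr/bin/env python3
# Poll until a process accepts connections on its UNIX domain RPC socket.
# Illustrative sketch; defaults mirror the values visible in the log above.
import socket
import time

def waitforlisten(path="/var/tmp/spdk.sock", max_retries=100, delay=0.1):
    for _ in range(max_retries):
        s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        try:
            s.connect(path)   # succeeds once spdk_tgt is listening
            return True
        except OSError:       # socket absent or not accepting yet
            time.sleep(delay)
        finally:
            s.close()
    return False

if __name__ == "__main__":
    raise SystemExit(0 if waitforlisten() else 1)
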
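The 'SB metadata layout' dumps (above, and repeated below once this second FTL instance reaches layout setup) describe each region as a hex block offset and block count. Assuming a 4 KiB FTL block, which the log itself corroborates (the l2p region's blk_sz:0x5a00 is 23040 blocks, exactly the 90.00 MiB the human-readable dump prints, and blk_offs:0x20 is the 0.12 MiB offset shown for l2p), the entries decode with a few lines of Python; an illustrative parser, not SPDK code:

#!/usr/bin/env python3
# Decode "Region type:... blk_offs:... blk_sz:..." entries into MiB.
# FTL_BLOCK_SIZE = 4 KiB is an assumption, cross-checked against the
# human-readable region dump in this same log.
import re
import sys

FTL_BLOCK_SIZE = 4096  # bytes per FTL block (assumed)
REGION_RE = re.compile(
    r"Region type:(0x[0-9a-fA-F]+) ver:(\d+) "
    r"blk_offs:(0x[0-9a-fA-F]+) blk_sz:(0x[0-9a-fA-F]+)")

def mib(blocks):
    # Convert a block count to MiB under the assumed block size.
    return blocks * FTL_BLOCK_SIZE / (1024 * 1024)

def decode(text):
    for rtype, ver, offs, size in REGION_RE.findall(text):
        print(f"type {rtype:>10} v{ver}: "
              f"offset {mib(int(offs, 16)):10.2f} MiB, "
              f"size {mib(int(size, 16)):10.2f} MiB")

if __name__ == "__main__":
    decode(sys.stdin.read())

Fed the nvc layout dump above, this prints 90.00 MiB for the type:0x2 (l2p) region and 0.50 MiB for each of the type:0x3/0x4 band_md regions, matching the dump_region output earlier in the log.
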
00:32:34.144 [2024-10-01 20:30:29.128812] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74766 ] 00:32:34.144 [2024-10-01 20:30:29.280628] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:34.401 [2024-10-01 20:30:29.475840] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:32:35.335 20:30:30 ftl.ftl_trim -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:35.335 20:30:30 ftl.ftl_trim -- common/autotest_common.sh@864 -- # return 0 00:32:35.335 20:30:30 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:32:35.335 [2024-10-01 20:30:30.466310] bdev.c:8310:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:32:35.335 [2024-10-01 20:30:30.466382] bdev.c:8310:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:32:35.596 [2024-10-01 20:30:30.636194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:35.596 [2024-10-01 20:30:30.636247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:32:35.596 [2024-10-01 20:30:30.636264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:32:35.596 [2024-10-01 20:30:30.636272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:35.596 [2024-10-01 20:30:30.638912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:35.596 [2024-10-01 20:30:30.638950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:35.596 [2024-10-01 20:30:30.638962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.621 ms 00:32:35.596 [2024-10-01 20:30:30.638970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:35.596 [2024-10-01 20:30:30.639040] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:32:35.596 [2024-10-01 20:30:30.639754] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:32:35.596 [2024-10-01 20:30:30.639781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:35.596 [2024-10-01 20:30:30.639789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:35.596 [2024-10-01 20:30:30.639799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.750 ms 00:32:35.596 [2024-10-01 20:30:30.639807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:35.596 [2024-10-01 20:30:30.641526] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:32:35.596 [2024-10-01 20:30:30.654114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:35.596 [2024-10-01 20:30:30.654168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:32:35.596 [2024-10-01 20:30:30.654182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.591 ms 00:32:35.596 [2024-10-01 20:30:30.654192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:35.596 [2024-10-01 20:30:30.654282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:35.596 [2024-10-01 20:30:30.654295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:32:35.596 [2024-10-01 20:30:30.654304] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:32:35.596 [2024-10-01 20:30:30.654313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:35.596 [2024-10-01 20:30:30.659297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:35.596 [2024-10-01 20:30:30.659335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:35.596 [2024-10-01 20:30:30.659345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.937 ms 00:32:35.596 [2024-10-01 20:30:30.659356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:35.596 [2024-10-01 20:30:30.659458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:35.596 [2024-10-01 20:30:30.659470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:35.596 [2024-10-01 20:30:30.659479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:32:35.596 [2024-10-01 20:30:30.659487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:35.596 [2024-10-01 20:30:30.659512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:35.596 [2024-10-01 20:30:30.659523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:32:35.596 [2024-10-01 20:30:30.659531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:32:35.596 [2024-10-01 20:30:30.659539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:35.596 [2024-10-01 20:30:30.659564] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:32:35.596 [2024-10-01 20:30:30.662867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:35.596 [2024-10-01 20:30:30.662895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:35.596 [2024-10-01 20:30:30.662907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.307 ms 00:32:35.596 [2024-10-01 20:30:30.662914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:35.596 [2024-10-01 20:30:30.662950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:35.596 [2024-10-01 20:30:30.662958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:32:35.596 [2024-10-01 20:30:30.662967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:32:35.596 [2024-10-01 20:30:30.662974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:35.596 [2024-10-01 20:30:30.662995] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:32:35.596 [2024-10-01 20:30:30.663011] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:32:35.596 [2024-10-01 20:30:30.663051] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:32:35.596 [2024-10-01 20:30:30.663067] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:32:35.596 [2024-10-01 20:30:30.663172] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:32:35.596 [2024-10-01 20:30:30.663189] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:32:35.596 [2024-10-01 20:30:30.663201] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:32:35.596 [2024-10-01 20:30:30.663211] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:32:35.596 [2024-10-01 20:30:30.663221] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:32:35.596 [2024-10-01 20:30:30.663230] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:32:35.596 [2024-10-01 20:30:30.663239] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:32:35.596 [2024-10-01 20:30:30.663246] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:32:35.596 [2024-10-01 20:30:30.663258] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:32:35.596 [2024-10-01 20:30:30.663265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:35.596 [2024-10-01 20:30:30.663273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:32:35.596 [2024-10-01 20:30:30.663281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.273 ms 00:32:35.596 [2024-10-01 20:30:30.663289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:35.596 [2024-10-01 20:30:30.663375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:35.596 [2024-10-01 20:30:30.663390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:32:35.596 [2024-10-01 20:30:30.663398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:32:35.596 [2024-10-01 20:30:30.663407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:35.596 [2024-10-01 20:30:30.663520] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:32:35.596 [2024-10-01 20:30:30.663531] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:32:35.596 [2024-10-01 20:30:30.663539] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:32:35.596 [2024-10-01 20:30:30.663548] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:35.596 [2024-10-01 20:30:30.663556] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:32:35.596 [2024-10-01 20:30:30.663564] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:32:35.596 [2024-10-01 20:30:30.663572] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:32:35.596 [2024-10-01 20:30:30.663583] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:32:35.596 [2024-10-01 20:30:30.663590] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:32:35.596 [2024-10-01 20:30:30.663598] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:32:35.596 [2024-10-01 20:30:30.663605] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:32:35.596 [2024-10-01 20:30:30.663613] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:32:35.596 [2024-10-01 20:30:30.663620] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:32:35.596 [2024-10-01 20:30:30.663628] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:32:35.596 [2024-10-01 20:30:30.663634] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:32:35.596 [2024-10-01 20:30:30.663643] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:35.596 
[2024-10-01 20:30:30.663649] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:32:35.596 [2024-10-01 20:30:30.663657] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:32:35.596 [2024-10-01 20:30:30.663669] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:35.596 [2024-10-01 20:30:30.663677] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:32:35.596 [2024-10-01 20:30:30.663684] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:32:35.596 [2024-10-01 20:30:30.663704] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:35.596 [2024-10-01 20:30:30.663712] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:32:35.596 [2024-10-01 20:30:30.663721] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:32:35.596 [2024-10-01 20:30:30.663727] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:35.596 [2024-10-01 20:30:30.663735] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:32:35.596 [2024-10-01 20:30:30.663742] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:32:35.596 [2024-10-01 20:30:30.663750] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:35.596 [2024-10-01 20:30:30.663757] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:32:35.596 [2024-10-01 20:30:30.663765] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:32:35.596 [2024-10-01 20:30:30.663771] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:35.596 [2024-10-01 20:30:30.663781] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:32:35.596 [2024-10-01 20:30:30.663787] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:32:35.596 [2024-10-01 20:30:30.663795] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:32:35.596 [2024-10-01 20:30:30.663802] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:32:35.596 [2024-10-01 20:30:30.663810] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:32:35.596 [2024-10-01 20:30:30.663817] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:32:35.596 [2024-10-01 20:30:30.663825] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:32:35.597 [2024-10-01 20:30:30.663831] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:32:35.597 [2024-10-01 20:30:30.663840] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:35.597 [2024-10-01 20:30:30.663846] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:32:35.597 [2024-10-01 20:30:30.663855] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:32:35.597 [2024-10-01 20:30:30.663861] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:35.597 [2024-10-01 20:30:30.663869] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:32:35.597 [2024-10-01 20:30:30.663876] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:32:35.597 [2024-10-01 20:30:30.663889] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:32:35.597 [2024-10-01 20:30:30.663896] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:35.597 [2024-10-01 20:30:30.663909] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:32:35.597 [2024-10-01 20:30:30.663915] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:32:35.597 [2024-10-01 20:30:30.663923] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:32:35.597 [2024-10-01 20:30:30.663931] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:32:35.597 [2024-10-01 20:30:30.663939] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:32:35.597 [2024-10-01 20:30:30.663946] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:32:35.597 [2024-10-01 20:30:30.663955] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:32:35.597 [2024-10-01 20:30:30.663964] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:35.597 [2024-10-01 20:30:30.663978] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:32:35.597 [2024-10-01 20:30:30.663985] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:32:35.597 [2024-10-01 20:30:30.663994] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:32:35.597 [2024-10-01 20:30:30.664001] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:32:35.597 [2024-10-01 20:30:30.664009] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:32:35.597 [2024-10-01 20:30:30.664016] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:32:35.597 [2024-10-01 20:30:30.664024] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:32:35.597 [2024-10-01 20:30:30.664030] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:32:35.597 [2024-10-01 20:30:30.664039] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:32:35.597 [2024-10-01 20:30:30.664045] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:32:35.597 [2024-10-01 20:30:30.664053] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:32:35.597 [2024-10-01 20:30:30.664061] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:32:35.597 [2024-10-01 20:30:30.664069] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:32:35.597 [2024-10-01 20:30:30.664076] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:32:35.597 [2024-10-01 20:30:30.664084] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:32:35.597 [2024-10-01 
20:30:30.664092] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:35.597 [2024-10-01 20:30:30.664103] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:32:35.597 [2024-10-01 20:30:30.664109] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:32:35.597 [2024-10-01 20:30:30.664118] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:32:35.597 [2024-10-01 20:30:30.664125] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:32:35.597 [2024-10-01 20:30:30.664133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:35.597 [2024-10-01 20:30:30.664141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:32:35.597 [2024-10-01 20:30:30.664149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.680 ms 00:32:35.597 [2024-10-01 20:30:30.664156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:35.597 [2024-10-01 20:30:30.691230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:35.597 [2024-10-01 20:30:30.691266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:35.597 [2024-10-01 20:30:30.691279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.010 ms 00:32:35.597 [2024-10-01 20:30:30.691287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:35.597 [2024-10-01 20:30:30.691411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:35.597 [2024-10-01 20:30:30.691421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:32:35.597 [2024-10-01 20:30:30.691432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:32:35.597 [2024-10-01 20:30:30.691440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:35.597 [2024-10-01 20:30:30.721706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:35.597 [2024-10-01 20:30:30.721743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:35.597 [2024-10-01 20:30:30.721754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.240 ms 00:32:35.597 [2024-10-01 20:30:30.721762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:35.597 [2024-10-01 20:30:30.721828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:35.597 [2024-10-01 20:30:30.721837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:35.597 [2024-10-01 20:30:30.721849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:32:35.597 [2024-10-01 20:30:30.721856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:35.597 [2024-10-01 20:30:30.722172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:35.597 [2024-10-01 20:30:30.722194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:35.597 [2024-10-01 20:30:30.722204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.294 ms 00:32:35.597 [2024-10-01 20:30:30.722211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:32:35.597 [2024-10-01 20:30:30.722335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:35.597 [2024-10-01 20:30:30.722348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:35.597 [2024-10-01 20:30:30.722358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.101 ms 00:32:35.597 [2024-10-01 20:30:30.722367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:35.597 [2024-10-01 20:30:30.735942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:35.597 [2024-10-01 20:30:30.735975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:35.597 [2024-10-01 20:30:30.735988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.551 ms 00:32:35.597 [2024-10-01 20:30:30.735996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:35.597 [2024-10-01 20:30:30.748235] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:32:35.597 [2024-10-01 20:30:30.748273] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:32:35.597 [2024-10-01 20:30:30.748287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:35.597 [2024-10-01 20:30:30.748296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:32:35.597 [2024-10-01 20:30:30.748307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.175 ms 00:32:35.597 [2024-10-01 20:30:30.748315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:35.597 [2024-10-01 20:30:30.772242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:35.597 [2024-10-01 20:30:30.772291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:32:35.597 [2024-10-01 20:30:30.772303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.856 ms 00:32:35.597 [2024-10-01 20:30:30.772316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:35.597 [2024-10-01 20:30:30.783739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:35.597 [2024-10-01 20:30:30.783770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:32:35.597 [2024-10-01 20:30:30.783784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.349 ms 00:32:35.597 [2024-10-01 20:30:30.783791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:35.597 [2024-10-01 20:30:30.794836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:35.597 [2024-10-01 20:30:30.794868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:32:35.597 [2024-10-01 20:30:30.794880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.985 ms 00:32:35.597 [2024-10-01 20:30:30.794888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:35.597 [2024-10-01 20:30:30.795501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:35.597 [2024-10-01 20:30:30.795525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:32:35.597 [2024-10-01 20:30:30.795537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.524 ms 00:32:35.597 [2024-10-01 20:30:30.795544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:35.856 [2024-10-01 
20:30:30.851008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:35.856 [2024-10-01 20:30:30.851060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:32:35.856 [2024-10-01 20:30:30.851076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 55.438 ms 00:32:35.856 [2024-10-01 20:30:30.851084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:35.856 [2024-10-01 20:30:30.861362] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:32:35.856 [2024-10-01 20:30:30.875429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:35.856 [2024-10-01 20:30:30.875477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:32:35.856 [2024-10-01 20:30:30.875489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.248 ms 00:32:35.856 [2024-10-01 20:30:30.875499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:35.856 [2024-10-01 20:30:30.875583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:35.856 [2024-10-01 20:30:30.875596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:32:35.856 [2024-10-01 20:30:30.875605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:32:35.856 [2024-10-01 20:30:30.875616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:35.856 [2024-10-01 20:30:30.875661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:35.856 [2024-10-01 20:30:30.875671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:32:35.856 [2024-10-01 20:30:30.875678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:32:35.856 [2024-10-01 20:30:30.875687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:35.856 [2024-10-01 20:30:30.875729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:35.856 [2024-10-01 20:30:30.875743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:32:35.856 [2024-10-01 20:30:30.875751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:32:35.856 [2024-10-01 20:30:30.875762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:35.856 [2024-10-01 20:30:30.875793] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:32:35.856 [2024-10-01 20:30:30.875805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:35.856 [2024-10-01 20:30:30.875812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:32:35.856 [2024-10-01 20:30:30.875822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:32:35.856 [2024-10-01 20:30:30.875828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:35.856 [2024-10-01 20:30:30.899229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:35.856 [2024-10-01 20:30:30.899270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:32:35.856 [2024-10-01 20:30:30.899286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.375 ms 00:32:35.856 [2024-10-01 20:30:30.899295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:35.856 [2024-10-01 20:30:30.899389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:35.856 [2024-10-01 20:30:30.899399] 
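Almost every line above is part of a trace_step triple: each management step logs its name, its duration, and its status on consecutive notices. When skimming a run this long it can help to fold the triples into one row per step. The helper below is only a reading aid written against the message shapes visible here, not anything from the SPDK tree:

#!/usr/bin/env python3
# Illustrative digester for the trace_step output above (hypothetical
# helper, not an SPDK tool): fold each name/duration/status triple
# into a single summary row, then total the per-step durations.
import re
import sys

# A step is logged as three consecutive notices:
#   ... name: <step>   ... duration: <ms> ms   ... status: <code>
STEP = re.compile(
    r"name: (?P<name>.+?)\s+\d\d:\d\d:\d\d\.\d+"  # name ends at the console timestamp
    r".*?duration: (?P<ms>[0-9.]+) ms"
    r".*?status: (?P<status>\d+)",
    re.S,
)

def digest(text):
    total = 0.0
    for m in STEP.finditer(text):
        total += float(m.group("ms"))
        print(f"{m.group('name'):<40} {m.group('ms'):>8} ms  status {m.group('status')}")
    print(f"{'sum of step durations':<40} {total:8.3f} ms")

if __name__ == "__main__":
    digest(sys.stdin.read())

Summing the per-step durations should land just under the total that finish_msg reports for the whole process below ('FTL startup', duration = 263.778 ms); the remainder is time spent between steps.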
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:32:35.856 [2024-10-01 20:30:30.899409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:32:35.856 [2024-10-01 20:30:30.899416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:35.856 [2024-10-01 20:30:30.900246] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:32:35.856 [2024-10-01 20:30:30.903245] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 263.778 ms, result 0 00:32:35.856 [2024-10-01 20:30:30.904133] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:32:35.856 Some configs were skipped because the RPC state that can call them passed over. 00:32:35.856 20:30:30 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:32:36.117 [2024-10-01 20:30:31.086484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:36.117 [2024-10-01 20:30:31.086541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:32:36.117 [2024-10-01 20:30:31.086555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.358 ms 00:32:36.117 [2024-10-01 20:30:31.086565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:36.117 [2024-10-01 20:30:31.086599] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.479 ms, result 0 00:32:36.117 true 00:32:36.117 20:30:31 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:32:36.118 [2024-10-01 20:30:31.286333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:36.118 [2024-10-01 20:30:31.286383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:32:36.118 [2024-10-01 20:30:31.286397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.968 ms 00:32:36.118 [2024-10-01 20:30:31.286405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:36.118 [2024-10-01 20:30:31.286440] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.084 ms, result 0 00:32:36.118 true 00:32:36.118 20:30:31 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 74766 00:32:36.118 20:30:31 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 74766 ']' 00:32:36.118 20:30:31 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 74766 00:32:36.118 20:30:31 ftl.ftl_trim -- common/autotest_common.sh@955 -- # uname 00:32:36.118 20:30:31 ftl.ftl_trim -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:36.118 20:30:31 ftl.ftl_trim -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74766 00:32:36.118 20:30:31 ftl.ftl_trim -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:32:36.118 20:30:31 ftl.ftl_trim -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:32:36.118 killing process with pid 74766 00:32:36.118 20:30:31 ftl.ftl_trim -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74766' 00:32:36.118 20:30:31 ftl.ftl_trim -- common/autotest_common.sh@969 -- # kill 74766 00:32:36.118 20:30:31 ftl.ftl_trim -- common/autotest_common.sh@974 -- # wait 74766 00:32:37.052 [2024-10-01 20:30:32.041035] 
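trim.sh drives both trims through scripts/rpc.py, which is a thin JSON-RPC 2.0 client talking to the SPDK target over a Unix domain socket (/var/tmp/spdk.sock by default). The second call's LBA is worth decoding: 23591936 = 23592960 - 1024, i.e. the last 1024 blocks of the 23592960-entry L2P space reported in the layout dump. A minimal sketch of the same call follows; the JSON parameter names are assumed to mirror the rpc.py flags:

#!/usr/bin/env python3
# Minimal sketch of the bdev_ftl_unmap calls above, sent straight over
# SPDK's JSON-RPC socket. The socket path is the SPDK default; the
# parameter keys are assumed to match the rpc.py flag names.
import json
import socket

def rpc(method, params, sock_path="/var/tmp/spdk.sock"):
    req = {"jsonrpc": "2.0", "id": 1, "method": method, "params": params}
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(sock_path)
        s.sendall(json.dumps(req).encode())
        buf = b""
        while True:  # read until the reply parses as complete JSON
            chunk = s.recv(4096)
            if not chunk:
                raise ConnectionError("socket closed before full reply")
            buf += chunk
            try:
                return json.loads(buf)
            except json.JSONDecodeError:
                continue

# Head and tail of the L2P space, as in the log above
# (23591936 = 23592960 total L2P entries - 1024 blocks).
for lba in (0, 23591936):
    print(rpc("bdev_ftl_unmap", {"name": "ftl0", "lba": lba, "num_blocks": 1024}))

The bare true printed after each invocation above is rpc.py echoing the RPC result.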
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:37.053 [2024-10-01 20:30:32.041092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:32:37.053 [2024-10-01 20:30:32.041105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:32:37.053 [2024-10-01 20:30:32.041115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:37.053 [2024-10-01 20:30:32.041137] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:32:37.053 [2024-10-01 20:30:32.043679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:37.053 [2024-10-01 20:30:32.043718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:32:37.053 [2024-10-01 20:30:32.043732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.526 ms 00:32:37.053 [2024-10-01 20:30:32.043740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:37.053 [2024-10-01 20:30:32.044038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:37.053 [2024-10-01 20:30:32.044062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:32:37.053 [2024-10-01 20:30:32.044072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.258 ms 00:32:37.053 [2024-10-01 20:30:32.044080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:37.053 [2024-10-01 20:30:32.048164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:37.053 [2024-10-01 20:30:32.048194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:32:37.053 [2024-10-01 20:30:32.048204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.063 ms 00:32:37.053 [2024-10-01 20:30:32.048212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:37.053 [2024-10-01 20:30:32.055143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:37.053 [2024-10-01 20:30:32.055174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:32:37.053 [2024-10-01 20:30:32.055190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.897 ms 00:32:37.053 [2024-10-01 20:30:32.055199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:37.053 [2024-10-01 20:30:32.064585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:37.053 [2024-10-01 20:30:32.064616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:32:37.053 [2024-10-01 20:30:32.064629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.336 ms 00:32:37.053 [2024-10-01 20:30:32.064636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:37.053 [2024-10-01 20:30:32.072073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:37.053 [2024-10-01 20:30:32.072105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:32:37.053 [2024-10-01 20:30:32.072118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.400 ms 00:32:37.053 [2024-10-01 20:30:32.072133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:37.053 [2024-10-01 20:30:32.072277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:37.053 [2024-10-01 20:30:32.072304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:32:37.053 [2024-10-01 20:30:32.072316] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms 00:32:37.053 [2024-10-01 20:30:32.072325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:37.053 [2024-10-01 20:30:32.082125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:37.053 [2024-10-01 20:30:32.082156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:32:37.053 [2024-10-01 20:30:32.082168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.778 ms 00:32:37.053 [2024-10-01 20:30:32.082175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:37.053 [2024-10-01 20:30:32.090879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:37.053 [2024-10-01 20:30:32.090908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:32:37.053 [2024-10-01 20:30:32.090923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.656 ms 00:32:37.053 [2024-10-01 20:30:32.090931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:37.053 [2024-10-01 20:30:32.099885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:37.053 [2024-10-01 20:30:32.099914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:32:37.053 [2024-10-01 20:30:32.099925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.919 ms 00:32:37.053 [2024-10-01 20:30:32.099932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:37.053 [2024-10-01 20:30:32.108820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:37.053 [2024-10-01 20:30:32.108863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:32:37.053 [2024-10-01 20:30:32.108874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.828 ms 00:32:37.053 [2024-10-01 20:30:32.108881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:37.053 [2024-10-01 20:30:32.108914] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:32:37.053 [2024-10-01 20:30:32.108927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:32:37.053 [2024-10-01 20:30:32.108938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:32:37.053 [2024-10-01 20:30:32.108946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:32:37.053 [2024-10-01 20:30:32.108955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:32:37.053 [2024-10-01 20:30:32.108962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:32:37.053 [2024-10-01 20:30:32.108973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:32:37.053 [2024-10-01 20:30:32.108980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:32:37.053 [2024-10-01 20:30:32.108989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:32:37.053 [2024-10-01 20:30:32.108996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:32:37.053 [2024-10-01 20:30:32.109006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:32:37.053 [2024-10-01 20:30:32.109013] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:32:37.053 [2024-10-01 20:30:32.109022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:32:37.053 [2024-10-01 20:30:32.109029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:32:37.053 [2024-10-01 20:30:32.109037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:32:37.053 [2024-10-01 20:30:32.109044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:32:37.053 [2024-10-01 20:30:32.109055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:32:37.053 [2024-10-01 20:30:32.109062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:32:37.053 [2024-10-01 20:30:32.109071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:32:37.053 [2024-10-01 20:30:32.109078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:32:37.053 [2024-10-01 20:30:32.109086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:32:37.053 [2024-10-01 20:30:32.109093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:32:37.053 [2024-10-01 20:30:32.109103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:32:37.053 [2024-10-01 20:30:32.109110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:32:37.053 [2024-10-01 20:30:32.109119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:32:37.053 [2024-10-01 20:30:32.109126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:32:37.053 [2024-10-01 20:30:32.109135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:32:37.053 [2024-10-01 20:30:32.109142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:32:37.053 [2024-10-01 20:30:32.109150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:32:37.053 [2024-10-01 20:30:32.109158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:32:37.053 [2024-10-01 20:30:32.109168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:32:37.053 [2024-10-01 20:30:32.109175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:32:37.054 [2024-10-01 20:30:32.109184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:32:37.054 [2024-10-01 20:30:32.109191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:32:37.054 [2024-10-01 20:30:32.109200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:32:37.054 [2024-10-01 20:30:32.109207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:32:37.054 
[2024-10-01 20:30:32.109215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:32:37.054 [2024-10-01 20:30:32.109222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:32:37.054 [2024-10-01 20:30:32.109232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:32:37.054 [2024-10-01 20:30:32.109240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:32:37.054 [2024-10-01 20:30:32.109248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:32:37.054 [2024-10-01 20:30:32.109255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:32:37.054 [2024-10-01 20:30:32.109265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:32:37.054 [2024-10-01 20:30:32.109272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:32:37.054 [2024-10-01 20:30:32.109281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:32:37.054 [2024-10-01 20:30:32.109288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:32:37.054 [2024-10-01 20:30:32.109296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:32:37.054 [2024-10-01 20:30:32.109303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:32:37.054 [2024-10-01 20:30:32.109312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:32:37.054 [2024-10-01 20:30:32.109319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:32:37.054 [2024-10-01 20:30:32.109328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:32:37.054 [2024-10-01 20:30:32.109335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:32:37.054 [2024-10-01 20:30:32.109343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:32:37.054 [2024-10-01 20:30:32.109350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:32:37.054 [2024-10-01 20:30:32.109361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:32:37.054 [2024-10-01 20:30:32.109368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:32:37.054 [2024-10-01 20:30:32.109377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:32:37.054 [2024-10-01 20:30:32.109384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:32:37.054 [2024-10-01 20:30:32.109393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:32:37.054 [2024-10-01 20:30:32.109400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:32:37.054 [2024-10-01 20:30:32.109408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 
state: free 00:32:37.054 [2024-10-01 20:30:32.109415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:32:37.054 [2024-10-01 20:30:32.109425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:32:37.054 [2024-10-01 20:30:32.109432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:32:37.054 [2024-10-01 20:30:32.109441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:32:37.054 [2024-10-01 20:30:32.109448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:32:37.054 [2024-10-01 20:30:32.109457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:32:37.054 [2024-10-01 20:30:32.109465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:32:37.054 [2024-10-01 20:30:32.109473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:32:37.054 [2024-10-01 20:30:32.109481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:32:37.054 [2024-10-01 20:30:32.109493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:32:37.054 [2024-10-01 20:30:32.109500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:32:37.054 [2024-10-01 20:30:32.109509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:32:37.054 [2024-10-01 20:30:32.109516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:32:37.054 [2024-10-01 20:30:32.109525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:32:37.054 [2024-10-01 20:30:32.109532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:32:37.054 [2024-10-01 20:30:32.109540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:32:37.054 [2024-10-01 20:30:32.109547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:32:37.054 [2024-10-01 20:30:32.109556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:32:37.054 [2024-10-01 20:30:32.109563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:32:37.054 [2024-10-01 20:30:32.109571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:32:37.054 [2024-10-01 20:30:32.109579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:32:37.054 [2024-10-01 20:30:32.109587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:32:37.054 [2024-10-01 20:30:32.109594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:32:37.054 [2024-10-01 20:30:32.109603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:32:37.054 [2024-10-01 20:30:32.109610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 
0 / 261120 wr_cnt: 0 state: free 00:32:37.054 [2024-10-01 20:30:32.109621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:32:37.054 [2024-10-01 20:30:32.109628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:32:37.054 [2024-10-01 20:30:32.109636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:32:37.054 [2024-10-01 20:30:32.109643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:32:37.054 [2024-10-01 20:30:32.109652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:32:37.054 [2024-10-01 20:30:32.109659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:32:37.054 [2024-10-01 20:30:32.109667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:32:37.054 [2024-10-01 20:30:32.109675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:32:37.054 [2024-10-01 20:30:32.109684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:32:37.054 [2024-10-01 20:30:32.109704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:32:37.054 [2024-10-01 20:30:32.109714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:32:37.054 [2024-10-01 20:30:32.109721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:32:37.054 [2024-10-01 20:30:32.109730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:32:37.054 [2024-10-01 20:30:32.109737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:32:37.054 [2024-10-01 20:30:32.109746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:32:37.054 [2024-10-01 20:30:32.109762] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:32:37.054 [2024-10-01 20:30:32.109773] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: bc42671f-8b04-4386-a6b4-5ad0168aad47 00:32:37.054 [2024-10-01 20:30:32.109781] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:32:37.054 [2024-10-01 20:30:32.109790] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:32:37.054 [2024-10-01 20:30:32.109797] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:32:37.054 [2024-10-01 20:30:32.109805] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:32:37.055 [2024-10-01 20:30:32.109818] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:32:37.055 [2024-10-01 20:30:32.109827] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:32:37.055 [2024-10-01 20:30:32.109834] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:32:37.055 [2024-10-01 20:30:32.109842] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:32:37.055 [2024-10-01 20:30:32.109849] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:32:37.055 [2024-10-01 20:30:32.109857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
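On shutdown, ftl_dev_dump_bands prints one validity line per band, a hundred of them here, before the device statistics. After a trim-only run every band reads 0 / 261120, wr_cnt 0, state free, so a condensed view is easier to scan. The snippet below is a hypothetical reading aid keyed to the 'Band N: valid / total wr_cnt: W state: S' line shape above:

#!/usr/bin/env python3
# Illustrative condenser for the ftl_dev_dump_bands output above (a
# log-reading aid, not an SPDK tool): group the per-band lines by
# (state, wr_cnt) and total the valid-block counts.
import re
import sys
from collections import Counter

BAND = re.compile(r"Band (\d+): (\d+) / (\d+) wr_cnt: (\d+) state: (\w+)")

def summarize(text):
    groups = Counter()
    valid = 0
    for m in BAND.finditer(text):
        groups[(m.group(5), m.group(4))] += 1
        valid += int(m.group(2))
    for (state, wr_cnt), n in groups.items():
        print(f"{n:3d} bands  state={state}  wr_cnt={wr_cnt}")
    print(f"total valid blocks across bands: {valid}")

if __name__ == "__main__":
    summarize(sys.stdin.read())

On this dump it collapses to a single row, 100 bands state=free wr_cnt=0, which agrees with the 'total valid LBAs: 0' line in the statistics that follow.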
00:32:37.055 [2024-10-01 20:30:32.109864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:32:37.055 [2024-10-01 20:30:32.109874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.944 ms 00:32:37.055 [2024-10-01 20:30:32.109881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:37.055 [2024-10-01 20:30:32.122025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:37.055 [2024-10-01 20:30:32.122056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:32:37.055 [2024-10-01 20:30:32.122072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.123 ms 00:32:37.055 [2024-10-01 20:30:32.122080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:37.055 [2024-10-01 20:30:32.122444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:37.055 [2024-10-01 20:30:32.122465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:32:37.055 [2024-10-01 20:30:32.122475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.318 ms 00:32:37.055 [2024-10-01 20:30:32.122483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:37.055 [2024-10-01 20:30:32.161357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:37.055 [2024-10-01 20:30:32.161403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:37.055 [2024-10-01 20:30:32.161419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:37.055 [2024-10-01 20:30:32.161428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:37.055 [2024-10-01 20:30:32.161536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:37.055 [2024-10-01 20:30:32.161546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:37.055 [2024-10-01 20:30:32.161555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:37.055 [2024-10-01 20:30:32.161563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:37.055 [2024-10-01 20:30:32.161604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:37.055 [2024-10-01 20:30:32.161612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:37.055 [2024-10-01 20:30:32.161623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:37.055 [2024-10-01 20:30:32.161633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:37.055 [2024-10-01 20:30:32.161652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:37.055 [2024-10-01 20:30:32.161660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:37.055 [2024-10-01 20:30:32.161669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:37.055 [2024-10-01 20:30:32.161676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:37.055 [2024-10-01 20:30:32.237397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:37.055 [2024-10-01 20:30:32.237443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:37.055 [2024-10-01 20:30:32.237459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:37.055 [2024-10-01 20:30:32.237466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:37.312 [2024-10-01 
20:30:32.303134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:37.312 [2024-10-01 20:30:32.303182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:37.312 [2024-10-01 20:30:32.303195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:37.312 [2024-10-01 20:30:32.303203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:37.312 [2024-10-01 20:30:32.303285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:37.312 [2024-10-01 20:30:32.303295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:37.312 [2024-10-01 20:30:32.303307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:37.312 [2024-10-01 20:30:32.303314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:37.312 [2024-10-01 20:30:32.303344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:37.312 [2024-10-01 20:30:32.303352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:37.312 [2024-10-01 20:30:32.303362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:37.312 [2024-10-01 20:30:32.303369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:37.312 [2024-10-01 20:30:32.303457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:37.312 [2024-10-01 20:30:32.303466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:37.312 [2024-10-01 20:30:32.303475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:37.312 [2024-10-01 20:30:32.303483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:37.312 [2024-10-01 20:30:32.303514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:37.312 [2024-10-01 20:30:32.303523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:32:37.312 [2024-10-01 20:30:32.303532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:37.312 [2024-10-01 20:30:32.303539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:37.312 [2024-10-01 20:30:32.303576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:37.312 [2024-10-01 20:30:32.303585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:37.312 [2024-10-01 20:30:32.303597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:37.312 [2024-10-01 20:30:32.303604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:37.312 [2024-10-01 20:30:32.303649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:37.312 [2024-10-01 20:30:32.303658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:37.312 [2024-10-01 20:30:32.303667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:37.312 [2024-10-01 20:30:32.303674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:37.312 [2024-10-01 20:30:32.303829] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 262.775 ms, result 0 00:32:38.243 20:30:33 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:32:38.243 20:30:33 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:32:38.500 [2024-10-01 20:30:33.500014] Starting SPDK v25.01-pre git sha1 0c2005fb5 / DPDK 24.03.0 initialization... 00:32:38.500 [2024-10-01 20:30:33.500134] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74823 ] 00:32:38.500 [2024-10-01 20:30:33.647645] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:38.757 [2024-10-01 20:30:33.816418] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:32:39.014 [2024-10-01 20:30:34.185312] bdev.c:8310:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:32:39.014 [2024-10-01 20:30:34.185373] bdev.c:8310:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:32:39.272 [2024-10-01 20:30:34.334062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:39.272 [2024-10-01 20:30:34.334118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:32:39.272 [2024-10-01 20:30:34.334132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:32:39.272 [2024-10-01 20:30:34.334139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:39.272 [2024-10-01 20:30:34.336344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:39.272 [2024-10-01 20:30:34.336379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:39.272 [2024-10-01 20:30:34.336388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.189 ms 00:32:39.272 [2024-10-01 20:30:34.336398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:39.272 [2024-10-01 20:30:34.336469] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:32:39.272 [2024-10-01 20:30:34.337056] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:32:39.272 [2024-10-01 20:30:34.337081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:39.272 [2024-10-01 20:30:34.337090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:39.272 [2024-10-01 20:30:34.337097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.619 ms 00:32:39.272 [2024-10-01 20:30:34.337103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:39.272 [2024-10-01 20:30:34.338734] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:32:39.272 [2024-10-01 20:30:34.349024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:39.272 [2024-10-01 20:30:34.349062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:32:39.272 [2024-10-01 20:30:34.349073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.291 ms 00:32:39.272 [2024-10-01 20:30:34.349080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:39.272 [2024-10-01 20:30:34.349172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:39.272 [2024-10-01 20:30:34.349181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:32:39.272 [2024-10-01 20:30:34.349191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.026 ms 00:32:39.272 [2024-10-01 20:30:34.349198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:39.272 [2024-10-01 20:30:34.354002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:39.272 [2024-10-01 20:30:34.354031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:39.272 [2024-10-01 20:30:34.354040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.767 ms 00:32:39.272 [2024-10-01 20:30:34.354047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:39.272 [2024-10-01 20:30:34.354124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:39.272 [2024-10-01 20:30:34.354135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:39.272 [2024-10-01 20:30:34.354142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:32:39.272 [2024-10-01 20:30:34.354147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:39.272 [2024-10-01 20:30:34.354168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:39.272 [2024-10-01 20:30:34.354175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:32:39.272 [2024-10-01 20:30:34.354182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:32:39.272 [2024-10-01 20:30:34.354188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:39.272 [2024-10-01 20:30:34.354208] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:32:39.272 [2024-10-01 20:30:34.356881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:39.272 [2024-10-01 20:30:34.356905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:39.272 [2024-10-01 20:30:34.356913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.679 ms 00:32:39.272 [2024-10-01 20:30:34.356919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:39.272 [2024-10-01 20:30:34.356948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:39.272 [2024-10-01 20:30:34.356957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:32:39.272 [2024-10-01 20:30:34.356964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:32:39.272 [2024-10-01 20:30:34.356970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:39.272 [2024-10-01 20:30:34.356984] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:32:39.272 [2024-10-01 20:30:34.356999] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:32:39.272 [2024-10-01 20:30:34.357027] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:32:39.272 [2024-10-01 20:30:34.357038] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:32:39.272 [2024-10-01 20:30:34.357134] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:32:39.272 [2024-10-01 20:30:34.357145] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:32:39.272 [2024-10-01 20:30:34.357159] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: 
*NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:32:39.272 [2024-10-01 20:30:34.357171] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:32:39.272 [2024-10-01 20:30:34.357179] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:32:39.272 [2024-10-01 20:30:34.357189] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:32:39.272 [2024-10-01 20:30:34.357195] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:32:39.272 [2024-10-01 20:30:34.357204] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:32:39.272 [2024-10-01 20:30:34.357212] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:32:39.273 [2024-10-01 20:30:34.357219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:39.273 [2024-10-01 20:30:34.357227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:32:39.273 [2024-10-01 20:30:34.357236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.237 ms 00:32:39.273 [2024-10-01 20:30:34.357242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:39.273 [2024-10-01 20:30:34.357321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:39.273 [2024-10-01 20:30:34.357328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:32:39.273 [2024-10-01 20:30:34.357335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:32:39.273 [2024-10-01 20:30:34.357341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:39.273 [2024-10-01 20:30:34.357421] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:32:39.273 [2024-10-01 20:30:34.357436] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:32:39.273 [2024-10-01 20:30:34.357445] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:32:39.273 [2024-10-01 20:30:34.357452] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:39.273 [2024-10-01 20:30:34.357458] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:32:39.273 [2024-10-01 20:30:34.357463] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:32:39.273 [2024-10-01 20:30:34.357469] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:32:39.273 [2024-10-01 20:30:34.357474] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:32:39.273 [2024-10-01 20:30:34.357480] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:32:39.273 [2024-10-01 20:30:34.357486] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:32:39.273 [2024-10-01 20:30:34.357491] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:32:39.273 [2024-10-01 20:30:34.357501] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:32:39.273 [2024-10-01 20:30:34.357506] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:32:39.273 [2024-10-01 20:30:34.357511] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:32:39.273 [2024-10-01 20:30:34.357517] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:32:39.273 [2024-10-01 20:30:34.357523] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:39.273 [2024-10-01 20:30:34.357528] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:32:39.273 [2024-10-01 20:30:34.357534] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:32:39.273 [2024-10-01 20:30:34.357539] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:39.273 [2024-10-01 20:30:34.357545] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:32:39.273 [2024-10-01 20:30:34.357550] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:32:39.273 [2024-10-01 20:30:34.357555] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:39.273 [2024-10-01 20:30:34.357560] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:32:39.273 [2024-10-01 20:30:34.357565] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:32:39.273 [2024-10-01 20:30:34.357570] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:39.273 [2024-10-01 20:30:34.357575] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:32:39.273 [2024-10-01 20:30:34.357581] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:32:39.273 [2024-10-01 20:30:34.357585] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:39.273 [2024-10-01 20:30:34.357591] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:32:39.273 [2024-10-01 20:30:34.357596] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:32:39.273 [2024-10-01 20:30:34.357601] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:39.273 [2024-10-01 20:30:34.357606] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:32:39.273 [2024-10-01 20:30:34.357611] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:32:39.273 [2024-10-01 20:30:34.357616] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:32:39.273 [2024-10-01 20:30:34.357621] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:32:39.273 [2024-10-01 20:30:34.357626] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:32:39.273 [2024-10-01 20:30:34.357632] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:32:39.273 [2024-10-01 20:30:34.357637] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:32:39.273 [2024-10-01 20:30:34.357643] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:32:39.273 [2024-10-01 20:30:34.357647] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:39.273 [2024-10-01 20:30:34.357653] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:32:39.273 [2024-10-01 20:30:34.357658] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:32:39.273 [2024-10-01 20:30:34.357663] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:39.273 [2024-10-01 20:30:34.357668] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:32:39.273 [2024-10-01 20:30:34.357674] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:32:39.273 [2024-10-01 20:30:34.357680] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:32:39.273 [2024-10-01 20:30:34.357685] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:39.273 [2024-10-01 20:30:34.357703] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:32:39.273 
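
Interleaved with the startup trace, ftl_layout.c dumps each region in MiB (dump_region), while upgrade/ftl_sb_v5.c prints the same regions as raw hex block offsets and sizes (blk_offs/blk_sz). A minimal sketch of how the two views line up for the l2p region, assuming FTL's 4 KiB data block size (the helper names below are illustrative, not SPDK API):

/*
 * Cross-check the two views of the FTL layout printed in this log.
 * FTL_BLOCK_SIZE = 4096 B is an assumption; blocks_to_mib() is a
 * hypothetical helper, not part of SPDK.
 */
#include <stdint.h>
#include <stdio.h>

#define FTL_BLOCK_SIZE 4096ULL /* assumed data block size, bytes */

static double blocks_to_mib(uint64_t blocks)
{
	return (double)(blocks * FTL_BLOCK_SIZE) / (1024.0 * 1024.0);
}

int main(void)
{
	/* "Region type:0x2 ... blk_offs:0x20 blk_sz:0x5a00" from the SB dump */
	uint64_t l2p_offs = 0x20, l2p_sz = 0x5a00;

	/* Matches "Region l2p / offset: 0.12 MiB / blocks: 90.00 MiB" */
	printf("l2p offset: %.2f MiB size: %.2f MiB\n",
	       blocks_to_mib(l2p_offs), blocks_to_mib(l2p_sz));

	/* "L2P entries: 23592960" x "L2P address size: 4" bytes is the
	 * same 90 MiB: 23592960 * 4 = 94371840 B */
	uint64_t entries = 23592960, addr_sz = 4;
	printf("L2P table: %.2f MiB\n",
	       (double)(entries * addr_sz) / (1024.0 * 1024.0));
	return 0;
}
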
[2024-10-01 20:30:34.357709] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:32:39.273 [2024-10-01 20:30:34.357714] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:32:39.273 [2024-10-01 20:30:34.357720] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:32:39.273 [2024-10-01 20:30:34.357725] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:32:39.273 [2024-10-01 20:30:34.357731] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:32:39.273 [2024-10-01 20:30:34.357737] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:32:39.273 [2024-10-01 20:30:34.357748] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:39.273 [2024-10-01 20:30:34.357754] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:32:39.273 [2024-10-01 20:30:34.357760] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:32:39.273 [2024-10-01 20:30:34.357766] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:32:39.273 [2024-10-01 20:30:34.357773] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:32:39.273 [2024-10-01 20:30:34.357778] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:32:39.273 [2024-10-01 20:30:34.357784] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:32:39.273 [2024-10-01 20:30:34.357790] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:32:39.273 [2024-10-01 20:30:34.357796] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:32:39.273 [2024-10-01 20:30:34.357802] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:32:39.273 [2024-10-01 20:30:34.357808] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:32:39.273 [2024-10-01 20:30:34.357813] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:32:39.273 [2024-10-01 20:30:34.357819] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:32:39.273 [2024-10-01 20:30:34.357824] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:32:39.273 [2024-10-01 20:30:34.357830] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:32:39.273 [2024-10-01 20:30:34.357836] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:32:39.273 [2024-10-01 20:30:34.357842] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:39.273 [2024-10-01 20:30:34.357849] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:32:39.273 [2024-10-01 20:30:34.357854] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:32:39.273 [2024-10-01 20:30:34.357860] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:32:39.273 [2024-10-01 20:30:34.357866] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:32:39.273 [2024-10-01 20:30:34.357872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:39.273 [2024-10-01 20:30:34.357879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:32:39.273 [2024-10-01 20:30:34.357885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.504 ms 00:32:39.273 [2024-10-01 20:30:34.357891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:39.273 [2024-10-01 20:30:34.379705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:39.273 [2024-10-01 20:30:34.379743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:39.273 [2024-10-01 20:30:34.379753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.774 ms 00:32:39.273 [2024-10-01 20:30:34.379760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:39.273 [2024-10-01 20:30:34.379869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:39.273 [2024-10-01 20:30:34.379878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:32:39.273 [2024-10-01 20:30:34.379884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:32:39.273 [2024-10-01 20:30:34.379891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:39.273 [2024-10-01 20:30:34.405952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:39.273 [2024-10-01 20:30:34.405992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:39.273 [2024-10-01 20:30:34.406002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.045 ms 00:32:39.273 [2024-10-01 20:30:34.406008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:39.273 [2024-10-01 20:30:34.406076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:39.273 [2024-10-01 20:30:34.406084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:39.273 [2024-10-01 20:30:34.406091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:32:39.274 [2024-10-01 20:30:34.406097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:39.274 [2024-10-01 20:30:34.406470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:39.274 [2024-10-01 20:30:34.406491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:39.274 [2024-10-01 20:30:34.406499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.356 ms 00:32:39.274 [2024-10-01 20:30:34.406505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:39.274 [2024-10-01 
20:30:34.406616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:39.274 [2024-10-01 20:30:34.406630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:39.274 [2024-10-01 20:30:34.406637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.090 ms 00:32:39.274 [2024-10-01 20:30:34.406643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:39.274 [2024-10-01 20:30:34.417513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:39.274 [2024-10-01 20:30:34.417543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:39.274 [2024-10-01 20:30:34.417552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.852 ms 00:32:39.274 [2024-10-01 20:30:34.417559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:39.274 [2024-10-01 20:30:34.427558] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:32:39.274 [2024-10-01 20:30:34.427595] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:32:39.274 [2024-10-01 20:30:34.427605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:39.274 [2024-10-01 20:30:34.427612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:32:39.274 [2024-10-01 20:30:34.427619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.958 ms 00:32:39.274 [2024-10-01 20:30:34.427626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:39.274 [2024-10-01 20:30:34.447321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:39.274 [2024-10-01 20:30:34.447373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:32:39.274 [2024-10-01 20:30:34.447392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.593 ms 00:32:39.274 [2024-10-01 20:30:34.447399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:39.274 [2024-10-01 20:30:34.457042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:39.274 [2024-10-01 20:30:34.457080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:32:39.274 [2024-10-01 20:30:34.457089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.562 ms 00:32:39.274 [2024-10-01 20:30:34.457095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:39.274 [2024-10-01 20:30:34.466598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:39.274 [2024-10-01 20:30:34.466634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:32:39.274 [2024-10-01 20:30:34.466643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.447 ms 00:32:39.274 [2024-10-01 20:30:34.466649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:39.274 [2024-10-01 20:30:34.467181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:39.274 [2024-10-01 20:30:34.467204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:32:39.274 [2024-10-01 20:30:34.467212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.440 ms 00:32:39.274 [2024-10-01 20:30:34.467219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:39.532 [2024-10-01 20:30:34.513228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:32:39.532 [2024-10-01 20:30:34.513272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:32:39.532 [2024-10-01 20:30:34.513285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.988 ms 00:32:39.532 [2024-10-01 20:30:34.513292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:39.532 [2024-10-01 20:30:34.522236] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:32:39.532 [2024-10-01 20:30:34.535301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:39.532 [2024-10-01 20:30:34.535340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:32:39.532 [2024-10-01 20:30:34.535351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.912 ms 00:32:39.532 [2024-10-01 20:30:34.535358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:39.532 [2024-10-01 20:30:34.535455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:39.532 [2024-10-01 20:30:34.535464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:32:39.532 [2024-10-01 20:30:34.535471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:32:39.532 [2024-10-01 20:30:34.535477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:39.532 [2024-10-01 20:30:34.535521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:39.532 [2024-10-01 20:30:34.535530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:32:39.532 [2024-10-01 20:30:34.535537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:32:39.532 [2024-10-01 20:30:34.535543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:39.532 [2024-10-01 20:30:34.535561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:39.532 [2024-10-01 20:30:34.535567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:32:39.532 [2024-10-01 20:30:34.535574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:32:39.532 [2024-10-01 20:30:34.535580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:39.532 [2024-10-01 20:30:34.535605] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:32:39.532 [2024-10-01 20:30:34.535614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:39.532 [2024-10-01 20:30:34.535623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:32:39.532 [2024-10-01 20:30:34.535630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:32:39.532 [2024-10-01 20:30:34.535636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:39.532 [2024-10-01 20:30:34.554525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:39.532 [2024-10-01 20:30:34.554561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:32:39.532 [2024-10-01 20:30:34.554570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.874 ms 00:32:39.532 [2024-10-01 20:30:34.554577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:39.532 [2024-10-01 20:30:34.554657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:39.532 [2024-10-01 20:30:34.554666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize 
initialization 00:32:39.532 [2024-10-01 20:30:34.554673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:32:39.532 [2024-10-01 20:30:34.554680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:39.532 [2024-10-01 20:30:34.555455] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:32:39.532 [2024-10-01 20:30:34.557953] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 221.159 ms, result 0 00:32:39.532 [2024-10-01 20:30:34.558389] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:32:39.532 [2024-10-01 20:30:34.573238] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:32:45.265  Copying: 48/256 [MB] (48 MBps) Copying: 91/256 [MB] (42 MBps) Copying: 132/256 [MB] (41 MBps) Copying: 175/256 [MB] (42 MBps) Copying: 217/256 [MB] (41 MBps) Copying: 256/256 [MB] (average 44 MBps)[2024-10-01 20:30:40.371975] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:32:45.265 [2024-10-01 20:30:40.381210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:45.265 [2024-10-01 20:30:40.381256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:32:45.265 [2024-10-01 20:30:40.381270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:32:45.265 [2024-10-01 20:30:40.381279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:45.265 [2024-10-01 20:30:40.381301] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:32:45.265 [2024-10-01 20:30:40.383902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:45.265 [2024-10-01 20:30:40.383937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:32:45.265 [2024-10-01 20:30:40.383948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.586 ms 00:32:45.265 [2024-10-01 20:30:40.383956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:45.265 [2024-10-01 20:30:40.384219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:45.265 [2024-10-01 20:30:40.384237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:32:45.265 [2024-10-01 20:30:40.384246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.241 ms 00:32:45.265 [2024-10-01 20:30:40.384253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:45.265 [2024-10-01 20:30:40.387989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:45.265 [2024-10-01 20:30:40.388019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:32:45.265 [2024-10-01 20:30:40.388029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.720 ms 00:32:45.265 [2024-10-01 20:30:40.388037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:45.265 [2024-10-01 20:30:40.395044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:45.265 [2024-10-01 20:30:40.395087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:32:45.265 [2024-10-01 20:30:40.395104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.987 ms 00:32:45.265 [2024-10-01 20:30:40.395113] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:45.265 [2024-10-01 20:30:40.418306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:45.265 [2024-10-01 20:30:40.418353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:32:45.265 [2024-10-01 20:30:40.418366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.133 ms 00:32:45.265 [2024-10-01 20:30:40.418374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:45.265 [2024-10-01 20:30:40.432279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:45.265 [2024-10-01 20:30:40.432334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:32:45.265 [2024-10-01 20:30:40.432346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.882 ms 00:32:45.265 [2024-10-01 20:30:40.432354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:45.265 [2024-10-01 20:30:40.432479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:45.265 [2024-10-01 20:30:40.432489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:32:45.265 [2024-10-01 20:30:40.432498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:32:45.265 [2024-10-01 20:30:40.432505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:45.265 [2024-10-01 20:30:40.456749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:45.266 [2024-10-01 20:30:40.456808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:32:45.266 [2024-10-01 20:30:40.456822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.219 ms 00:32:45.266 [2024-10-01 20:30:40.456830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:45.523 [2024-10-01 20:30:40.479770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:45.523 [2024-10-01 20:30:40.479824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:32:45.523 [2024-10-01 20:30:40.479837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.903 ms 00:32:45.523 [2024-10-01 20:30:40.479844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:45.523 [2024-10-01 20:30:40.503244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:45.523 [2024-10-01 20:30:40.503306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:32:45.523 [2024-10-01 20:30:40.503320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.365 ms 00:32:45.523 [2024-10-01 20:30:40.503328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:45.523 [2024-10-01 20:30:40.529745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:45.523 [2024-10-01 20:30:40.529811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:32:45.524 [2024-10-01 20:30:40.529824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.349 ms 00:32:45.524 [2024-10-01 20:30:40.529833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:45.524 [2024-10-01 20:30:40.529894] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:32:45.524 [2024-10-01 20:30:40.529909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:32:45.524 [2024-10-01 20:30:40.529919] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:32:45.524 [2024-10-01 20:30:40.529927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:32:45.524 [2024-10-01 20:30:40.529935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:32:45.524 [2024-10-01 20:30:40.529944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:32:45.524 [2024-10-01 20:30:40.529952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:32:45.524 [2024-10-01 20:30:40.529959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:32:45.524 [2024-10-01 20:30:40.529968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:32:45.524 [2024-10-01 20:30:40.529975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:32:45.524 [2024-10-01 20:30:40.529983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:32:45.524 [2024-10-01 20:30:40.529991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:32:45.524 [2024-10-01 20:30:40.529999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:32:45.524 [2024-10-01 20:30:40.530006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:32:45.524 [2024-10-01 20:30:40.530013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:32:45.524 [2024-10-01 20:30:40.530021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:32:45.524 [2024-10-01 20:30:40.530028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:32:45.524 [2024-10-01 20:30:40.530035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:32:45.524 [2024-10-01 20:30:40.530043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:32:45.524 [2024-10-01 20:30:40.530050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:32:45.524 [2024-10-01 20:30:40.530057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:32:45.524 [2024-10-01 20:30:40.530065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:32:45.524 [2024-10-01 20:30:40.530072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:32:45.524 [2024-10-01 20:30:40.530079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:32:45.524 [2024-10-01 20:30:40.530087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:32:45.524 [2024-10-01 20:30:40.530094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:32:45.524 [2024-10-01 20:30:40.530101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:32:45.524 [2024-10-01 
20:30:40.530108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:32:45.524 [2024-10-01 20:30:40.530116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:32:45.524 [2024-10-01 20:30:40.530123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:32:45.524 [2024-10-01 20:30:40.530131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:32:45.524 [2024-10-01 20:30:40.530139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:32:45.524 [2024-10-01 20:30:40.530146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:32:45.524 [2024-10-01 20:30:40.530154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:32:45.524 [2024-10-01 20:30:40.530161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:32:45.524 [2024-10-01 20:30:40.530169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:32:45.524 [2024-10-01 20:30:40.530176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:32:45.524 [2024-10-01 20:30:40.530183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:32:45.524 [2024-10-01 20:30:40.530191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:32:45.524 [2024-10-01 20:30:40.530198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:32:45.524 [2024-10-01 20:30:40.530206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:32:45.524 [2024-10-01 20:30:40.530213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:32:45.524 [2024-10-01 20:30:40.530221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:32:45.524 [2024-10-01 20:30:40.530228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:32:45.524 [2024-10-01 20:30:40.530235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:32:45.524 [2024-10-01 20:30:40.530242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:32:45.524 [2024-10-01 20:30:40.530252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:32:45.524 [2024-10-01 20:30:40.530260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:32:45.524 [2024-10-01 20:30:40.530267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:32:45.524 [2024-10-01 20:30:40.530274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:32:45.524 [2024-10-01 20:30:40.530281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:32:45.524 [2024-10-01 20:30:40.530288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 
00:32:45.524 [2024-10-01 20:30:40.530296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:32:45.524 [2024-10-01 20:30:40.530309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:32:45.524 [2024-10-01 20:30:40.530317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:32:45.524 [2024-10-01 20:30:40.530324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:32:45.524 [2024-10-01 20:30:40.530331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:32:45.524 [2024-10-01 20:30:40.530338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:32:45.524 [2024-10-01 20:30:40.530345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:32:45.524 [2024-10-01 20:30:40.530353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:32:45.524 [2024-10-01 20:30:40.530361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:32:45.524 [2024-10-01 20:30:40.530368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:32:45.524 [2024-10-01 20:30:40.530377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:32:45.524 [2024-10-01 20:30:40.530385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:32:45.524 [2024-10-01 20:30:40.530392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:32:45.524 [2024-10-01 20:30:40.530400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:32:45.524 [2024-10-01 20:30:40.530407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:32:45.524 [2024-10-01 20:30:40.530415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:32:45.524 [2024-10-01 20:30:40.530425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:32:45.524 [2024-10-01 20:30:40.530432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:32:45.524 [2024-10-01 20:30:40.530440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:32:45.524 [2024-10-01 20:30:40.530447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:32:45.524 [2024-10-01 20:30:40.530454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:32:45.524 [2024-10-01 20:30:40.530462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:32:45.524 [2024-10-01 20:30:40.530469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:32:45.524 [2024-10-01 20:30:40.530477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:32:45.524 [2024-10-01 20:30:40.530484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 
wr_cnt: 0 state: free 00:32:45.524 [2024-10-01 20:30:40.530492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:32:45.524 [2024-10-01 20:30:40.530499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:32:45.524 [2024-10-01 20:30:40.530506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:32:45.524 [2024-10-01 20:30:40.530515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:32:45.524 [2024-10-01 20:30:40.530522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:32:45.524 [2024-10-01 20:30:40.530529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:32:45.524 [2024-10-01 20:30:40.530536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:32:45.524 [2024-10-01 20:30:40.530544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:32:45.524 [2024-10-01 20:30:40.530551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:32:45.524 [2024-10-01 20:30:40.530558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:32:45.525 [2024-10-01 20:30:40.530566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:32:45.525 [2024-10-01 20:30:40.530573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:32:45.525 [2024-10-01 20:30:40.530580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:32:45.525 [2024-10-01 20:30:40.530589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:32:45.525 [2024-10-01 20:30:40.530601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:32:45.525 [2024-10-01 20:30:40.530612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:32:45.525 [2024-10-01 20:30:40.530625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:32:45.525 [2024-10-01 20:30:40.530637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:32:45.525 [2024-10-01 20:30:40.530645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:32:45.525 [2024-10-01 20:30:40.530652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:32:45.525 [2024-10-01 20:30:40.530660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:32:45.525 [2024-10-01 20:30:40.530668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:32:45.525 [2024-10-01 20:30:40.530675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:32:45.525 [2024-10-01 20:30:40.530706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:32:45.525 [2024-10-01 20:30:40.530723] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl0] 00:32:45.525 [2024-10-01 20:30:40.530731] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: bc42671f-8b04-4386-a6b4-5ad0168aad47 00:32:45.525 [2024-10-01 20:30:40.530739] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:32:45.525 [2024-10-01 20:30:40.530747] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:32:45.525 [2024-10-01 20:30:40.530754] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:32:45.525 [2024-10-01 20:30:40.530765] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:32:45.525 [2024-10-01 20:30:40.530772] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:32:45.525 [2024-10-01 20:30:40.530780] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:32:45.525 [2024-10-01 20:30:40.530787] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:32:45.525 [2024-10-01 20:30:40.530794] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:32:45.525 [2024-10-01 20:30:40.530800] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:32:45.525 [2024-10-01 20:30:40.530807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:45.525 [2024-10-01 20:30:40.530815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:32:45.525 [2024-10-01 20:30:40.530824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.914 ms 00:32:45.525 [2024-10-01 20:30:40.530831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:45.525 [2024-10-01 20:30:40.543914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:45.525 [2024-10-01 20:30:40.543978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:32:45.525 [2024-10-01 20:30:40.543991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.059 ms 00:32:45.525 [2024-10-01 20:30:40.543999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:45.525 [2024-10-01 20:30:40.544384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:45.525 [2024-10-01 20:30:40.544401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:32:45.525 [2024-10-01 20:30:40.544409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.326 ms 00:32:45.525 [2024-10-01 20:30:40.544417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:45.525 [2024-10-01 20:30:40.575336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:45.525 [2024-10-01 20:30:40.575395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:45.525 [2024-10-01 20:30:40.575407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:45.525 [2024-10-01 20:30:40.575415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:45.525 [2024-10-01 20:30:40.575521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:45.525 [2024-10-01 20:30:40.575531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:45.525 [2024-10-01 20:30:40.575539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:45.525 [2024-10-01 20:30:40.575546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:45.525 [2024-10-01 20:30:40.575590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:45.525 
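
The ftl_dev_dump_stats block above reports "total writes: 960", "user writes: 0" and "WAF: inf". A hedged sketch of that arithmetic only: write amplification factor is total media writes divided by user writes, and it is infinite when no user writes have been counted. The function and field names below are illustrative, not the actual ftl_debug.c code:

/*
 * Sketch of the "WAF: inf" line in the statistics dump above.
 * WAF = total (media) writes / user writes; with zero user writes the
 * ratio is infinite, which printf("%g") renders as "inf".
 */
#include <inttypes.h>
#include <math.h>
#include <stdint.h>
#include <stdio.h>

static void dump_waf(uint64_t total_writes, uint64_t user_writes)
{
	double waf = user_writes ? (double)total_writes / (double)user_writes
				 : INFINITY;
	printf("total writes: %" PRIu64 "\n", total_writes);
	printf("user writes:  %" PRIu64 "\n", user_writes);
	printf("WAF: %g\n", waf);
}

int main(void)
{
	/* values from the dump above: 960 total, 0 user -> WAF: inf */
	dump_waf(960, 0);
	return 0;
}
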
[2024-10-01 20:30:40.575607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:45.525 [2024-10-01 20:30:40.575619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:45.525 [2024-10-01 20:30:40.575631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:45.525 [2024-10-01 20:30:40.575648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:45.525 [2024-10-01 20:30:40.575656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:45.525 [2024-10-01 20:30:40.575663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:45.525 [2024-10-01 20:30:40.575670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:45.525 [2024-10-01 20:30:40.656270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:45.525 [2024-10-01 20:30:40.656338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:45.525 [2024-10-01 20:30:40.656352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:45.525 [2024-10-01 20:30:40.656360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:45.525 [2024-10-01 20:30:40.720633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:45.525 [2024-10-01 20:30:40.720720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:45.525 [2024-10-01 20:30:40.720734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:45.525 [2024-10-01 20:30:40.720742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:45.525 [2024-10-01 20:30:40.720816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:45.525 [2024-10-01 20:30:40.720825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:45.525 [2024-10-01 20:30:40.720839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:45.525 [2024-10-01 20:30:40.720846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:45.525 [2024-10-01 20:30:40.720873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:45.525 [2024-10-01 20:30:40.720881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:45.525 [2024-10-01 20:30:40.720889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:45.525 [2024-10-01 20:30:40.720896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:45.525 [2024-10-01 20:30:40.720987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:45.525 [2024-10-01 20:30:40.720997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:45.525 [2024-10-01 20:30:40.721005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:45.525 [2024-10-01 20:30:40.721015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:45.525 [2024-10-01 20:30:40.721043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:45.525 [2024-10-01 20:30:40.721052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:32:45.525 [2024-10-01 20:30:40.721060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:45.525 [2024-10-01 20:30:40.721066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:45.525 [2024-10-01 20:30:40.721100] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:45.525 [2024-10-01 20:30:40.721108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:45.525 [2024-10-01 20:30:40.721116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:45.525 [2024-10-01 20:30:40.721126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:45.525 [2024-10-01 20:30:40.721166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:45.525 [2024-10-01 20:30:40.721176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:45.525 [2024-10-01 20:30:40.721184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:45.525 [2024-10-01 20:30:40.721191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:45.525 [2024-10-01 20:30:40.721321] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 340.111 ms, result 0 00:32:46.894 00:32:46.894 00:32:46.894 20:30:41 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:32:46.894 20:30:41 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:32:47.456 20:30:42 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:32:47.456 [2024-10-01 20:30:42.591005] Starting SPDK v25.01-pre git sha1 0c2005fb5 / DPDK 24.03.0 initialization... 00:32:47.456 [2024-10-01 20:30:42.591129] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74924 ] 00:32:47.730 [2024-10-01 20:30:42.741132] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:47.730 [2024-10-01 20:30:42.942255] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:32:48.294 [2024-10-01 20:30:43.380462] bdev.c:8310:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:32:48.294 [2024-10-01 20:30:43.380530] bdev.c:8310:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:32:48.553 [2024-10-01 20:30:43.534322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:48.553 [2024-10-01 20:30:43.534371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:32:48.553 [2024-10-01 20:30:43.534387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:32:48.553 [2024-10-01 20:30:43.534396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:48.553 [2024-10-01 20:30:43.537094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:48.553 [2024-10-01 20:30:43.537132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:48.553 [2024-10-01 20:30:43.537143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.679 ms 00:32:48.553 [2024-10-01 20:30:43.537153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:48.553 [2024-10-01 20:30:43.537323] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:32:48.553 [2024-10-01 20:30:43.537993] mngt/ftl_mngt_bdev.c: 
236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:32:48.553 [2024-10-01 20:30:43.538019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:48.553 [2024-10-01 20:30:43.538030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:48.553 [2024-10-01 20:30:43.538039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.703 ms 00:32:48.553 [2024-10-01 20:30:43.538046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:48.553 [2024-10-01 20:30:43.539366] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:32:48.553 [2024-10-01 20:30:43.551745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:48.553 [2024-10-01 20:30:43.551808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:32:48.553 [2024-10-01 20:30:43.551821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.379 ms 00:32:48.553 [2024-10-01 20:30:43.551830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:48.553 [2024-10-01 20:30:43.551944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:48.553 [2024-10-01 20:30:43.551956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:32:48.553 [2024-10-01 20:30:43.551967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:32:48.553 [2024-10-01 20:30:43.551975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:48.553 [2024-10-01 20:30:43.557225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:48.553 [2024-10-01 20:30:43.557262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:48.553 [2024-10-01 20:30:43.557272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.207 ms 00:32:48.553 [2024-10-01 20:30:43.557279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:48.553 [2024-10-01 20:30:43.557370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:48.553 [2024-10-01 20:30:43.557382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:48.553 [2024-10-01 20:30:43.557390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:32:48.553 [2024-10-01 20:30:43.557397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:48.553 [2024-10-01 20:30:43.557423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:48.553 [2024-10-01 20:30:43.557431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:32:48.553 [2024-10-01 20:30:43.557438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:32:48.553 [2024-10-01 20:30:43.557446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:48.553 [2024-10-01 20:30:43.557466] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:32:48.553 [2024-10-01 20:30:43.560685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:48.553 [2024-10-01 20:30:43.560731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:48.553 [2024-10-01 20:30:43.560740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.224 ms 00:32:48.553 [2024-10-01 20:30:43.560748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:48.553 [2024-10-01 
20:30:43.560786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:48.553 [2024-10-01 20:30:43.560798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:32:48.553 [2024-10-01 20:30:43.560807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:32:48.553 [2024-10-01 20:30:43.560814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:48.553 [2024-10-01 20:30:43.560833] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:32:48.553 [2024-10-01 20:30:43.560849] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:32:48.553 [2024-10-01 20:30:43.560884] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:32:48.553 [2024-10-01 20:30:43.560899] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:32:48.553 [2024-10-01 20:30:43.561004] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:32:48.553 [2024-10-01 20:30:43.561014] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:32:48.553 [2024-10-01 20:30:43.561024] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:32:48.553 [2024-10-01 20:30:43.561034] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:32:48.553 [2024-10-01 20:30:43.561043] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:32:48.553 [2024-10-01 20:30:43.561051] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:32:48.553 [2024-10-01 20:30:43.561058] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:32:48.553 [2024-10-01 20:30:43.561065] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:32:48.553 [2024-10-01 20:30:43.561072] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:32:48.553 [2024-10-01 20:30:43.561079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:48.553 [2024-10-01 20:30:43.561089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:32:48.553 [2024-10-01 20:30:43.561096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.249 ms 00:32:48.553 [2024-10-01 20:30:43.561103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:48.553 [2024-10-01 20:30:43.561190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:48.553 [2024-10-01 20:30:43.561198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:32:48.553 [2024-10-01 20:30:43.561205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:32:48.553 [2024-10-01 20:30:43.561211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:48.553 [2024-10-01 20:30:43.561314] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:32:48.553 [2024-10-01 20:30:43.561324] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:32:48.553 [2024-10-01 20:30:43.561334] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:32:48.553 [2024-10-01 20:30:43.561342] ftl_layout.c: 133:dump_region: *NOTICE*: 
[FTL][ftl0] blocks: 0.12 MiB 00:32:48.553 [2024-10-01 20:30:43.561350] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:32:48.553 [2024-10-01 20:30:43.561356] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:32:48.553 [2024-10-01 20:30:43.561363] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:32:48.553 [2024-10-01 20:30:43.561370] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:32:48.553 [2024-10-01 20:30:43.561378] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:32:48.553 [2024-10-01 20:30:43.561384] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:32:48.553 [2024-10-01 20:30:43.561391] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:32:48.553 [2024-10-01 20:30:43.561404] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:32:48.553 [2024-10-01 20:30:43.561410] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:32:48.553 [2024-10-01 20:30:43.561417] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:32:48.553 [2024-10-01 20:30:43.561425] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:32:48.553 [2024-10-01 20:30:43.561431] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:48.553 [2024-10-01 20:30:43.561438] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:32:48.553 [2024-10-01 20:30:43.561447] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:32:48.553 [2024-10-01 20:30:43.561454] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:48.553 [2024-10-01 20:30:43.561460] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:32:48.553 [2024-10-01 20:30:43.561467] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:32:48.553 [2024-10-01 20:30:43.561473] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:48.553 [2024-10-01 20:30:43.561480] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:32:48.553 [2024-10-01 20:30:43.561486] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:32:48.553 [2024-10-01 20:30:43.561493] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:48.553 [2024-10-01 20:30:43.561499] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:32:48.553 [2024-10-01 20:30:43.561506] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:32:48.553 [2024-10-01 20:30:43.561512] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:48.553 [2024-10-01 20:30:43.561518] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:32:48.553 [2024-10-01 20:30:43.561525] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:32:48.553 [2024-10-01 20:30:43.561532] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:48.553 [2024-10-01 20:30:43.561538] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:32:48.553 [2024-10-01 20:30:43.561545] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:32:48.553 [2024-10-01 20:30:43.561551] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:32:48.553 [2024-10-01 20:30:43.561558] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:32:48.553 [2024-10-01 20:30:43.561564] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:32:48.553 [2024-10-01 20:30:43.561571] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:32:48.553 [2024-10-01 20:30:43.561577] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:32:48.553 [2024-10-01 20:30:43.561584] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:32:48.553 [2024-10-01 20:30:43.561591] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:48.553 [2024-10-01 20:30:43.561598] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:32:48.553 [2024-10-01 20:30:43.561605] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:32:48.553 [2024-10-01 20:30:43.561611] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:48.553 [2024-10-01 20:30:43.561618] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:32:48.553 [2024-10-01 20:30:43.561625] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:32:48.553 [2024-10-01 20:30:43.561632] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:32:48.553 [2024-10-01 20:30:43.561639] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:48.553 [2024-10-01 20:30:43.561646] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:32:48.553 [2024-10-01 20:30:43.561653] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:32:48.553 [2024-10-01 20:30:43.561660] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:32:48.553 [2024-10-01 20:30:43.561668] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:32:48.553 [2024-10-01 20:30:43.561674] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:32:48.553 [2024-10-01 20:30:43.561681] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:32:48.553 [2024-10-01 20:30:43.561688] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:32:48.553 [2024-10-01 20:30:43.561711] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:48.553 [2024-10-01 20:30:43.561719] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:32:48.553 [2024-10-01 20:30:43.561727] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:32:48.553 [2024-10-01 20:30:43.561734] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:32:48.553 [2024-10-01 20:30:43.561742] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:32:48.553 [2024-10-01 20:30:43.561749] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:32:48.553 [2024-10-01 20:30:43.561757] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:32:48.553 [2024-10-01 20:30:43.561764] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:32:48.553 [2024-10-01 
20:30:43.561771] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:32:48.553 [2024-10-01 20:30:43.561778] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:32:48.553 [2024-10-01 20:30:43.561785] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:32:48.554 [2024-10-01 20:30:43.561792] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:32:48.554 [2024-10-01 20:30:43.561798] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:32:48.554 [2024-10-01 20:30:43.561805] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:32:48.554 [2024-10-01 20:30:43.561812] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:32:48.554 [2024-10-01 20:30:43.561819] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:32:48.554 [2024-10-01 20:30:43.561827] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:48.554 [2024-10-01 20:30:43.561835] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:32:48.554 [2024-10-01 20:30:43.561842] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:32:48.554 [2024-10-01 20:30:43.561849] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:32:48.554 [2024-10-01 20:30:43.561856] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:32:48.554 [2024-10-01 20:30:43.561863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:48.554 [2024-10-01 20:30:43.561873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:32:48.554 [2024-10-01 20:30:43.561880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.618 ms 00:32:48.554 [2024-10-01 20:30:43.561887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:48.554 [2024-10-01 20:30:43.588220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:48.554 [2024-10-01 20:30:43.588264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:48.554 [2024-10-01 20:30:43.588275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.249 ms 00:32:48.554 [2024-10-01 20:30:43.588283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:48.554 [2024-10-01 20:30:43.588417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:48.554 [2024-10-01 20:30:43.588427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:32:48.554 [2024-10-01 20:30:43.588435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:32:48.554 [2024-10-01 20:30:43.588442] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:48.554 [2024-10-01 20:30:43.620062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:48.554 [2024-10-01 20:30:43.620109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:48.554 [2024-10-01 20:30:43.620119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.598 ms 00:32:48.554 [2024-10-01 20:30:43.620127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:48.554 [2024-10-01 20:30:43.620203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:48.554 [2024-10-01 20:30:43.620213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:48.554 [2024-10-01 20:30:43.620221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:32:48.554 [2024-10-01 20:30:43.620228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:48.554 [2024-10-01 20:30:43.620541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:48.554 [2024-10-01 20:30:43.620555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:48.554 [2024-10-01 20:30:43.620563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.293 ms 00:32:48.554 [2024-10-01 20:30:43.620570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:48.554 [2024-10-01 20:30:43.620739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:48.554 [2024-10-01 20:30:43.620761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:48.554 [2024-10-01 20:30:43.620769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.144 ms 00:32:48.554 [2024-10-01 20:30:43.620776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:48.554 [2024-10-01 20:30:43.637854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:48.554 [2024-10-01 20:30:43.637907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:48.554 [2024-10-01 20:30:43.637925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.054 ms 00:32:48.554 [2024-10-01 20:30:43.637938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:48.554 [2024-10-01 20:30:43.657836] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:32:48.554 [2024-10-01 20:30:43.657900] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:32:48.554 [2024-10-01 20:30:43.657919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:48.554 [2024-10-01 20:30:43.657931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:32:48.554 [2024-10-01 20:30:43.657946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.806 ms 00:32:48.554 [2024-10-01 20:30:43.657958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:48.554 [2024-10-01 20:30:43.686544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:48.554 [2024-10-01 20:30:43.686591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:32:48.554 [2024-10-01 20:30:43.686609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.464 ms 00:32:48.554 [2024-10-01 20:30:43.686617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:48.554 [2024-10-01 
20:30:43.698199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:48.554 [2024-10-01 20:30:43.698240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:32:48.554 [2024-10-01 20:30:43.698250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.479 ms 00:32:48.554 [2024-10-01 20:30:43.698257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:48.554 [2024-10-01 20:30:43.709236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:48.554 [2024-10-01 20:30:43.709270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:32:48.554 [2024-10-01 20:30:43.709281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.911 ms 00:32:48.554 [2024-10-01 20:30:43.709289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:48.554 [2024-10-01 20:30:43.709920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:48.554 [2024-10-01 20:30:43.709946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:32:48.554 [2024-10-01 20:30:43.709955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.538 ms 00:32:48.554 [2024-10-01 20:30:43.709962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:48.811 [2024-10-01 20:30:43.765439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:48.811 [2024-10-01 20:30:43.765498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:32:48.811 [2024-10-01 20:30:43.765511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 55.454 ms 00:32:48.811 [2024-10-01 20:30:43.765520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:48.811 [2024-10-01 20:30:43.776316] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:32:48.811 [2024-10-01 20:30:43.790838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:48.811 [2024-10-01 20:30:43.790882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:32:48.811 [2024-10-01 20:30:43.790895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.194 ms 00:32:48.811 [2024-10-01 20:30:43.790902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:48.811 [2024-10-01 20:30:43.790992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:48.811 [2024-10-01 20:30:43.791003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:32:48.811 [2024-10-01 20:30:43.791012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:32:48.811 [2024-10-01 20:30:43.791022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:48.811 [2024-10-01 20:30:43.791072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:48.811 [2024-10-01 20:30:43.791081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:32:48.811 [2024-10-01 20:30:43.791089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:32:48.811 [2024-10-01 20:30:43.791096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:48.811 [2024-10-01 20:30:43.791117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:48.811 [2024-10-01 20:30:43.791124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:32:48.811 [2024-10-01 20:30:43.791132] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:32:48.811 [2024-10-01 20:30:43.791139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:48.811 [2024-10-01 20:30:43.791169] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:32:48.812 [2024-10-01 20:30:43.791178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:48.812 [2024-10-01 20:30:43.791188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:32:48.812 [2024-10-01 20:30:43.791195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:32:48.812 [2024-10-01 20:30:43.791202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:48.812 [2024-10-01 20:30:43.814620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:48.812 [2024-10-01 20:30:43.814664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:32:48.812 [2024-10-01 20:30:43.814676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.398 ms 00:32:48.812 [2024-10-01 20:30:43.814684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:48.812 [2024-10-01 20:30:43.814791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:48.812 [2024-10-01 20:30:43.814802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:32:48.812 [2024-10-01 20:30:43.814811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:32:48.812 [2024-10-01 20:30:43.814819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:48.812 [2024-10-01 20:30:43.815628] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:32:48.812 [2024-10-01 20:30:43.818631] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 281.032 ms, result 0 00:32:48.812 [2024-10-01 20:30:43.819233] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:32:48.812 [2024-10-01 20:30:43.832184] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:32:48.812  Copying: 4096/4096 [kB] (average 39 MBps)
[2024-10-01 20:30:43.936170] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:32:48.812 [2024-10-01 20:30:43.945381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:48.812 [2024-10-01 20:30:43.945430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:32:48.812 [2024-10-01 20:30:43.945444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:32:48.812 [2024-10-01 20:30:43.945453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:48.812 [2024-10-01 20:30:43.945476] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:32:48.812 [2024-10-01 20:30:43.947996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:48.812 [2024-10-01 20:30:43.948028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:32:48.812 [2024-10-01 20:30:43.948039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.505 ms 00:32:48.812 [2024-10-01 20:30:43.948047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:32:48.812 [2024-10-01 20:30:43.949770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:48.812 [2024-10-01 20:30:43.949804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:32:48.812 [2024-10-01 20:30:43.949814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.688 ms 00:32:48.812 [2024-10-01 20:30:43.949822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:48.812 [2024-10-01 20:30:43.953618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:48.812 [2024-10-01 20:30:43.953647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:32:48.812 [2024-10-01 20:30:43.953657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.780 ms 00:32:48.812 [2024-10-01 20:30:43.953666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:48.812 [2024-10-01 20:30:43.960544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:48.812 [2024-10-01 20:30:43.960583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:32:48.812 [2024-10-01 20:30:43.960592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.844 ms 00:32:48.812 [2024-10-01 20:30:43.960601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:48.812 [2024-10-01 20:30:43.984136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:48.812 [2024-10-01 20:30:43.984183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:32:48.812 [2024-10-01 20:30:43.984195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.456 ms 00:32:48.812 [2024-10-01 20:30:43.984203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:48.812 [2024-10-01 20:30:43.997955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:48.812 [2024-10-01 20:30:43.997997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:32:48.812 [2024-10-01 20:30:43.998011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.708 ms 00:32:48.812 [2024-10-01 20:30:43.998020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:48.812 [2024-10-01 20:30:43.998167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:48.812 [2024-10-01 20:30:43.998183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:32:48.812 [2024-10-01 20:30:43.998192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.090 ms 00:32:48.812 [2024-10-01 20:30:43.998205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:48.812 [2024-10-01 20:30:44.021005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:48.812 [2024-10-01 20:30:44.021056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:32:48.812 [2024-10-01 20:30:44.021068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.782 ms 00:32:48.812 [2024-10-01 20:30:44.021075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:49.070 [2024-10-01 20:30:44.043460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:49.070 [2024-10-01 20:30:44.043499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:32:49.070 [2024-10-01 20:30:44.043510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.341 ms 00:32:49.070 [2024-10-01 20:30:44.043518] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:49.070 [2024-10-01 20:30:44.065250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:49.070 [2024-10-01 20:30:44.065289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:32:49.070 [2024-10-01 20:30:44.065301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.694 ms 00:32:49.070 [2024-10-01 20:30:44.065308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:49.070 [2024-10-01 20:30:44.087597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:49.070 [2024-10-01 20:30:44.087634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:32:49.070 [2024-10-01 20:30:44.087644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.224 ms 00:32:49.070 [2024-10-01 20:30:44.087651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:49.070 [2024-10-01 20:30:44.087685] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:32:49.070 [2024-10-01 20:30:44.087708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:32:49.070 [2024-10-01 20:30:44.087719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:32:49.070 [2024-10-01 20:30:44.087727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:32:49.070 [2024-10-01 20:30:44.087735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:32:49.070 [2024-10-01 20:30:44.087743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:32:49.070 [2024-10-01 20:30:44.087751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:32:49.070 [2024-10-01 20:30:44.087758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:32:49.070 [2024-10-01 20:30:44.087767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:32:49.070 [2024-10-01 20:30:44.087775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:32:49.070 [2024-10-01 20:30:44.087782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:32:49.070 [2024-10-01 20:30:44.087790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:32:49.070 [2024-10-01 20:30:44.087797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:32:49.070 [2024-10-01 20:30:44.087805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:32:49.070 [2024-10-01 20:30:44.087812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:32:49.070 [2024-10-01 20:30:44.087820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:32:49.070 [2024-10-01 20:30:44.087827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:32:49.070 [2024-10-01 20:30:44.087834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:32:49.070 [2024-10-01 20:30:44.087841] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:32:49.070 [2024-10-01 20:30:44.087848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:32:49.070 [2024-10-01 20:30:44.087855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:32:49.070 [2024-10-01 20:30:44.087863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:32:49.070 [2024-10-01 20:30:44.087870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:32:49.070 [2024-10-01 20:30:44.087877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:32:49.070 [2024-10-01 20:30:44.087885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:32:49.070 [2024-10-01 20:30:44.087892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:32:49.070 [2024-10-01 20:30:44.087899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:32:49.070 [2024-10-01 20:30:44.087906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:32:49.070 [2024-10-01 20:30:44.087913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:32:49.070 [2024-10-01 20:30:44.087920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:32:49.070 [2024-10-01 20:30:44.087927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:32:49.070 [2024-10-01 20:30:44.087934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:32:49.070 [2024-10-01 20:30:44.087943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:32:49.070 [2024-10-01 20:30:44.087951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:32:49.070 [2024-10-01 20:30:44.087958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:32:49.070 [2024-10-01 20:30:44.087965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:32:49.070 [2024-10-01 20:30:44.087973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:32:49.070 [2024-10-01 20:30:44.087980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:32:49.070 [2024-10-01 20:30:44.087987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:32:49.070 [2024-10-01 20:30:44.087995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:32:49.070 [2024-10-01 20:30:44.088002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:32:49.070 [2024-10-01 20:30:44.088009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:32:49.070 [2024-10-01 20:30:44.088017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:32:49.070 
[2024-10-01 20:30:44.088024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:32:49.070 [2024-10-01 20:30:44.088031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:32:49.070 [2024-10-01 20:30:44.088038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:32:49.070 [2024-10-01 20:30:44.088045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:32:49.070 [2024-10-01 20:30:44.088053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:32:49.070 [2024-10-01 20:30:44.088060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:32:49.070 [2024-10-01 20:30:44.088067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:32:49.070 [2024-10-01 20:30:44.088074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:32:49.070 [2024-10-01 20:30:44.088081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:32:49.070 [2024-10-01 20:30:44.088089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:32:49.070 [2024-10-01 20:30:44.088096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:32:49.070 [2024-10-01 20:30:44.088103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:32:49.070 [2024-10-01 20:30:44.088110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:32:49.070 [2024-10-01 20:30:44.088117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:32:49.071 [2024-10-01 20:30:44.088124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:32:49.071 [2024-10-01 20:30:44.088132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:32:49.071 [2024-10-01 20:30:44.088139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:32:49.071 [2024-10-01 20:30:44.088146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:32:49.071 [2024-10-01 20:30:44.088153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:32:49.071 [2024-10-01 20:30:44.088161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:32:49.071 [2024-10-01 20:30:44.088168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:32:49.071 [2024-10-01 20:30:44.088175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:32:49.071 [2024-10-01 20:30:44.088183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:32:49.071 [2024-10-01 20:30:44.088191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:32:49.071 [2024-10-01 20:30:44.088197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 
state: free 00:32:49.071 [2024-10-01 20:30:44.088205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:32:49.071 [2024-10-01 20:30:44.088212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:32:49.071 [2024-10-01 20:30:44.088219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:32:49.071 [2024-10-01 20:30:44.088227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:32:49.071 [2024-10-01 20:30:44.088235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:32:49.071 [2024-10-01 20:30:44.088242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:32:49.071 [2024-10-01 20:30:44.088250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:32:49.071 [2024-10-01 20:30:44.088257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:32:49.071 [2024-10-01 20:30:44.088264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:32:49.071 [2024-10-01 20:30:44.088271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:32:49.071 [2024-10-01 20:30:44.088278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:32:49.071 [2024-10-01 20:30:44.088285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:32:49.071 [2024-10-01 20:30:44.088292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:32:49.071 [2024-10-01 20:30:44.088299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:32:49.071 [2024-10-01 20:30:44.088306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:32:49.071 [2024-10-01 20:30:44.088313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:32:49.071 [2024-10-01 20:30:44.088321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:32:49.071 [2024-10-01 20:30:44.088328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:32:49.071 [2024-10-01 20:30:44.088335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:32:49.071 [2024-10-01 20:30:44.088343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:32:49.071 [2024-10-01 20:30:44.088349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:32:49.071 [2024-10-01 20:30:44.088356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:32:49.071 [2024-10-01 20:30:44.088364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:32:49.071 [2024-10-01 20:30:44.088371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:32:49.071 [2024-10-01 20:30:44.088379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 
0 / 261120 wr_cnt: 0 state: free 00:32:49.071 [2024-10-01 20:30:44.088386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:32:49.071 [2024-10-01 20:30:44.088393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:32:49.071 [2024-10-01 20:30:44.088400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:32:49.071 [2024-10-01 20:30:44.088412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:32:49.071 [2024-10-01 20:30:44.088419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:32:49.071 [2024-10-01 20:30:44.088427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:32:49.071 [2024-10-01 20:30:44.088434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:32:49.071 [2024-10-01 20:30:44.088449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:32:49.071 [2024-10-01 20:30:44.088465] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:32:49.071 [2024-10-01 20:30:44.088473] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: bc42671f-8b04-4386-a6b4-5ad0168aad47 00:32:49.071 [2024-10-01 20:30:44.088483] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:32:49.071 [2024-10-01 20:30:44.088493] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:32:49.071 [2024-10-01 20:30:44.088500] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:32:49.071 [2024-10-01 20:30:44.088508] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:32:49.071 [2024-10-01 20:30:44.088514] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:32:49.071 [2024-10-01 20:30:44.088522] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:32:49.071 [2024-10-01 20:30:44.088529] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:32:49.071 [2024-10-01 20:30:44.088535] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:32:49.071 [2024-10-01 20:30:44.088541] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:32:49.071 [2024-10-01 20:30:44.088549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:49.071 [2024-10-01 20:30:44.088556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:32:49.071 [2024-10-01 20:30:44.088565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.865 ms 00:32:49.071 [2024-10-01 20:30:44.088571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:49.071 [2024-10-01 20:30:44.101111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:49.071 [2024-10-01 20:30:44.101144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:32:49.071 [2024-10-01 20:30:44.101154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.523 ms 00:32:49.071 [2024-10-01 20:30:44.101162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:49.071 [2024-10-01 20:30:44.101514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:49.071 [2024-10-01 20:30:44.101530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize 
P2L checkpointing 00:32:49.071 [2024-10-01 20:30:44.101539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.318 ms 00:32:49.071 [2024-10-01 20:30:44.101546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:49.071 [2024-10-01 20:30:44.131870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:49.071 [2024-10-01 20:30:44.131911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:49.071 [2024-10-01 20:30:44.131921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:49.071 [2024-10-01 20:30:44.131928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:49.071 [2024-10-01 20:30:44.132000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:49.071 [2024-10-01 20:30:44.132008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:49.071 [2024-10-01 20:30:44.132016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:49.071 [2024-10-01 20:30:44.132023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:49.071 [2024-10-01 20:30:44.132066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:49.071 [2024-10-01 20:30:44.132075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:49.071 [2024-10-01 20:30:44.132084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:49.071 [2024-10-01 20:30:44.132091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:49.071 [2024-10-01 20:30:44.132109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:49.071 [2024-10-01 20:30:44.132116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:49.071 [2024-10-01 20:30:44.132124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:49.071 [2024-10-01 20:30:44.132131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:49.071 [2024-10-01 20:30:44.210890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:49.071 [2024-10-01 20:30:44.210937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:49.071 [2024-10-01 20:30:44.210949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:49.071 [2024-10-01 20:30:44.210957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:49.071 [2024-10-01 20:30:44.274140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:49.071 [2024-10-01 20:30:44.274189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:49.071 [2024-10-01 20:30:44.274200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:49.071 [2024-10-01 20:30:44.274208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:49.071 [2024-10-01 20:30:44.274257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:49.071 [2024-10-01 20:30:44.274270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:49.071 [2024-10-01 20:30:44.274277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:49.071 [2024-10-01 20:30:44.274284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:49.071 [2024-10-01 20:30:44.274312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:49.071 [2024-10-01 
20:30:44.274320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:49.071 [2024-10-01 20:30:44.274328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:49.071 [2024-10-01 20:30:44.274335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:49.071 [2024-10-01 20:30:44.274423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:49.071 [2024-10-01 20:30:44.274432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:49.071 [2024-10-01 20:30:44.274442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:49.072 [2024-10-01 20:30:44.274449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:49.072 [2024-10-01 20:30:44.274478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:49.072 [2024-10-01 20:30:44.274486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:32:49.072 [2024-10-01 20:30:44.274493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:49.072 [2024-10-01 20:30:44.274501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:49.072 [2024-10-01 20:30:44.274535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:49.072 [2024-10-01 20:30:44.274543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:49.072 [2024-10-01 20:30:44.274553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:49.072 [2024-10-01 20:30:44.274560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:49.072 [2024-10-01 20:30:44.274600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:49.072 [2024-10-01 20:30:44.274609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:49.072 [2024-10-01 20:30:44.274617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:49.072 [2024-10-01 20:30:44.274624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:49.072 [2024-10-01 20:30:44.274775] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 329.392 ms, result 0 00:32:50.443 00:32:50.443 00:32:50.443 20:30:45 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=74955 00:32:50.443 20:30:45 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 74955 00:32:50.443 20:30:45 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:32:50.443 20:30:45 ftl.ftl_trim -- common/autotest_common.sh@831 -- # '[' -z 74955 ']' 00:32:50.443 20:30:45 ftl.ftl_trim -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:50.443 20:30:45 ftl.ftl_trim -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:50.443 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:50.443 20:30:45 ftl.ftl_trim -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:50.443 20:30:45 ftl.ftl_trim -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:50.443 20:30:45 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:32:50.443 [2024-10-01 20:30:45.572970] Starting SPDK v25.01-pre git sha1 0c2005fb5 / DPDK 24.03.0 initialization... 
00:32:50.443 [2024-10-01 20:30:45.573097] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74955 ]
00:32:50.771 [2024-10-01 20:30:45.721436] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:32:50.771 [2024-10-01 20:30:45.928048] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:32:51.704 20:30:46 ftl.ftl_trim -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:32:51.704 20:30:46 ftl.ftl_trim -- common/autotest_common.sh@864 -- # return 0
00:32:51.704 20:30:46 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config
00:32:51.962 [2024-10-01 20:30:46.930634] bdev.c:8310:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:32:51.962 [2024-10-01 20:30:46.930715] bdev.c:8310:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:32:51.962 [2024-10-01 20:30:47.101182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:32:51.962 [2024-10-01 20:30:47.101238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration
00:32:51.962 [2024-10-01 20:30:47.101256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:32:51.962 [2024-10-01 20:30:47.101264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:32:51.962 [2024-10-01 20:30:47.103926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:32:51.962 [2024-10-01 20:30:47.103964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:32:51.962 [2024-10-01 20:30:47.103976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.644 ms
00:32:51.962 [2024-10-01 20:30:47.103984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:32:51.962 [2024-10-01 20:30:47.104099] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:32:51.962 [2024-10-01 20:30:47.104836] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:32:51.962 [2024-10-01 20:30:47.104865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:32:51.962 [2024-10-01 20:30:47.104873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:32:51.962 [2024-10-01 20:30:47.104883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.776 ms
00:32:51.962 [2024-10-01 20:30:47.104891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:32:51.962 [2024-10-01 20:30:47.106347] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
00:32:51.962 [2024-10-01 20:30:47.118874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:32:51.962 [2024-10-01 20:30:47.118920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block
00:32:51.962 [2024-10-01 20:30:47.118939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.529 ms
00:32:51.962 [2024-10-01 20:30:47.118948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:32:51.962 [2024-10-01 20:30:47.119058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:32:51.962 [2024-10-01 20:30:47.119073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block
00:32:51.962 [2024-10-01 20:30:47.119085]
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:32:51.962 [2024-10-01 20:30:47.119093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:51.962 [2024-10-01 20:30:47.125502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:51.962 [2024-10-01 20:30:47.125542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:51.962 [2024-10-01 20:30:47.125552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.355 ms 00:32:51.962 [2024-10-01 20:30:47.125563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:51.962 [2024-10-01 20:30:47.125666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:51.962 [2024-10-01 20:30:47.125677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:51.962 [2024-10-01 20:30:47.125686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:32:51.962 [2024-10-01 20:30:47.125706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:51.962 [2024-10-01 20:30:47.125733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:51.962 [2024-10-01 20:30:47.125745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:32:51.962 [2024-10-01 20:30:47.125753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:32:51.962 [2024-10-01 20:30:47.125761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:51.962 [2024-10-01 20:30:47.125786] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:32:51.962 [2024-10-01 20:30:47.128914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:51.962 [2024-10-01 20:30:47.128942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:51.962 [2024-10-01 20:30:47.128955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.131 ms 00:32:51.962 [2024-10-01 20:30:47.128963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:51.962 [2024-10-01 20:30:47.128998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:51.962 [2024-10-01 20:30:47.129006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:32:51.962 [2024-10-01 20:30:47.129016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:32:51.962 [2024-10-01 20:30:47.129023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:51.962 [2024-10-01 20:30:47.129051] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:32:51.962 [2024-10-01 20:30:47.129067] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:32:51.962 [2024-10-01 20:30:47.129108] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:32:51.962 [2024-10-01 20:30:47.129125] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:32:51.962 [2024-10-01 20:30:47.129232] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:32:51.962 [2024-10-01 20:30:47.129249] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:32:51.962 [2024-10-01 20:30:47.129262] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:32:51.962 [2024-10-01 20:30:47.129272] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:32:51.962 [2024-10-01 20:30:47.129283] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:32:51.962 [2024-10-01 20:30:47.129291] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:32:51.962 [2024-10-01 20:30:47.129299] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:32:51.962 [2024-10-01 20:30:47.129307] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:32:51.962 [2024-10-01 20:30:47.129318] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:32:51.962 [2024-10-01 20:30:47.129325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:51.962 [2024-10-01 20:30:47.129334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:32:51.962 [2024-10-01 20:30:47.129341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.278 ms 00:32:51.962 [2024-10-01 20:30:47.129350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:51.962 [2024-10-01 20:30:47.129441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:51.962 [2024-10-01 20:30:47.129457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:32:51.962 [2024-10-01 20:30:47.129464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:32:51.962 [2024-10-01 20:30:47.129473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:51.962 [2024-10-01 20:30:47.129576] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:32:51.962 [2024-10-01 20:30:47.129594] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:32:51.962 [2024-10-01 20:30:47.129602] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:32:51.962 [2024-10-01 20:30:47.129612] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:51.962 [2024-10-01 20:30:47.129619] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:32:51.962 [2024-10-01 20:30:47.129627] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:32:51.962 [2024-10-01 20:30:47.129634] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:32:51.962 [2024-10-01 20:30:47.129645] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:32:51.962 [2024-10-01 20:30:47.129652] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:32:51.963 [2024-10-01 20:30:47.129660] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:32:51.963 [2024-10-01 20:30:47.129667] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:32:51.963 [2024-10-01 20:30:47.129674] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:32:51.963 [2024-10-01 20:30:47.129681] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:32:51.963 [2024-10-01 20:30:47.129700] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:32:51.963 [2024-10-01 20:30:47.129707] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:32:51.963 [2024-10-01 20:30:47.129715] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:51.963 
[2024-10-01 20:30:47.129722] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:32:51.963 [2024-10-01 20:30:47.129730] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:32:51.963 [2024-10-01 20:30:47.129741] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:51.963 [2024-10-01 20:30:47.129749] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:32:51.963 [2024-10-01 20:30:47.129757] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:32:51.963 [2024-10-01 20:30:47.129766] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:51.963 [2024-10-01 20:30:47.129773] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:32:51.963 [2024-10-01 20:30:47.129783] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:32:51.963 [2024-10-01 20:30:47.129790] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:51.963 [2024-10-01 20:30:47.129798] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:32:51.963 [2024-10-01 20:30:47.129805] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:32:51.963 [2024-10-01 20:30:47.129813] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:51.963 [2024-10-01 20:30:47.129820] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:32:51.963 [2024-10-01 20:30:47.129829] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:32:51.963 [2024-10-01 20:30:47.129835] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:51.963 [2024-10-01 20:30:47.129844] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:32:51.963 [2024-10-01 20:30:47.129851] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:32:51.963 [2024-10-01 20:30:47.129858] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:32:51.963 [2024-10-01 20:30:47.129865] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:32:51.963 [2024-10-01 20:30:47.129873] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:32:51.963 [2024-10-01 20:30:47.129879] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:32:51.963 [2024-10-01 20:30:47.129887] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:32:51.963 [2024-10-01 20:30:47.129893] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:32:51.963 [2024-10-01 20:30:47.129902] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:51.963 [2024-10-01 20:30:47.129909] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:32:51.963 [2024-10-01 20:30:47.129924] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:32:51.963 [2024-10-01 20:30:47.129931] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:51.963 [2024-10-01 20:30:47.129939] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:32:51.963 [2024-10-01 20:30:47.129946] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:32:51.963 [2024-10-01 20:30:47.129955] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:32:51.963 [2024-10-01 20:30:47.129962] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:51.963 [2024-10-01 20:30:47.129970] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:32:51.963 [2024-10-01 20:30:47.129977] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:32:51.963 [2024-10-01 20:30:47.129985] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:32:51.963 [2024-10-01 20:30:47.129991] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:32:51.963 [2024-10-01 20:30:47.129999] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:32:51.963 [2024-10-01 20:30:47.130005] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:32:51.963 [2024-10-01 20:30:47.130014] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:32:51.963 [2024-10-01 20:30:47.130023] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:51.963 [2024-10-01 20:30:47.130038] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:32:51.963 [2024-10-01 20:30:47.130045] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:32:51.963 [2024-10-01 20:30:47.130053] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:32:51.963 [2024-10-01 20:30:47.130061] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:32:51.963 [2024-10-01 20:30:47.130070] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:32:51.963 [2024-10-01 20:30:47.130076] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:32:51.963 [2024-10-01 20:30:47.130085] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:32:51.963 [2024-10-01 20:30:47.130092] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:32:51.963 [2024-10-01 20:30:47.130100] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:32:51.963 [2024-10-01 20:30:47.130107] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:32:51.963 [2024-10-01 20:30:47.130116] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:32:51.963 [2024-10-01 20:30:47.130123] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:32:51.963 [2024-10-01 20:30:47.130132] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:32:51.963 [2024-10-01 20:30:47.130140] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:32:51.963 [2024-10-01 20:30:47.130148] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:32:51.963 [2024-10-01 
20:30:47.130156] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:51.963 [2024-10-01 20:30:47.130166] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:32:51.963 [2024-10-01 20:30:47.130174] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:32:51.963 [2024-10-01 20:30:47.130182] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:32:51.963 [2024-10-01 20:30:47.130189] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:32:51.963 [2024-10-01 20:30:47.130198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:51.963 [2024-10-01 20:30:47.130205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:32:51.963 [2024-10-01 20:30:47.130213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.689 ms 00:32:51.963 [2024-10-01 20:30:47.130220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:51.963 [2024-10-01 20:30:47.156498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:51.963 [2024-10-01 20:30:47.156542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:51.963 [2024-10-01 20:30:47.156555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.202 ms 00:32:51.963 [2024-10-01 20:30:47.156563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:51.963 [2024-10-01 20:30:47.156727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:51.963 [2024-10-01 20:30:47.156738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:32:51.963 [2024-10-01 20:30:47.156748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:32:51.963 [2024-10-01 20:30:47.156756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:52.222 [2024-10-01 20:30:47.187195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:52.222 [2024-10-01 20:30:47.187239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:52.222 [2024-10-01 20:30:47.187253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.411 ms 00:32:52.222 [2024-10-01 20:30:47.187261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:52.222 [2024-10-01 20:30:47.187334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:52.222 [2024-10-01 20:30:47.187344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:52.222 [2024-10-01 20:30:47.187356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:32:52.222 [2024-10-01 20:30:47.187363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:52.222 [2024-10-01 20:30:47.187702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:52.222 [2024-10-01 20:30:47.187728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:52.222 [2024-10-01 20:30:47.187738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.304 ms 00:32:52.222 [2024-10-01 20:30:47.187746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:32:52.222 [2024-10-01 20:30:47.187865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:52.222 [2024-10-01 20:30:47.187882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:52.222 [2024-10-01 20:30:47.187892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:32:52.222 [2024-10-01 20:30:47.187901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:52.222 [2024-10-01 20:30:47.201533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:52.222 [2024-10-01 20:30:47.201567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:52.222 [2024-10-01 20:30:47.201581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.609 ms 00:32:52.222 [2024-10-01 20:30:47.201589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:52.222 [2024-10-01 20:30:47.213766] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:32:52.222 [2024-10-01 20:30:47.213801] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:32:52.222 [2024-10-01 20:30:47.213815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:52.222 [2024-10-01 20:30:47.213824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:32:52.222 [2024-10-01 20:30:47.213835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.086 ms 00:32:52.222 [2024-10-01 20:30:47.213842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:52.222 [2024-10-01 20:30:47.238058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:52.222 [2024-10-01 20:30:47.238103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:32:52.222 [2024-10-01 20:30:47.238116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.135 ms 00:32:52.222 [2024-10-01 20:30:47.238129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:52.222 [2024-10-01 20:30:47.250075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:52.222 [2024-10-01 20:30:47.250115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:32:52.222 [2024-10-01 20:30:47.250130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.844 ms 00:32:52.222 [2024-10-01 20:30:47.250137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:52.222 [2024-10-01 20:30:47.261376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:52.222 [2024-10-01 20:30:47.261411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:32:52.222 [2024-10-01 20:30:47.261424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.163 ms 00:32:52.222 [2024-10-01 20:30:47.261431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:52.222 [2024-10-01 20:30:47.262075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:52.222 [2024-10-01 20:30:47.262100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:32:52.222 [2024-10-01 20:30:47.262112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.545 ms 00:32:52.222 [2024-10-01 20:30:47.262120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:52.222 [2024-10-01 
20:30:47.317944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:52.222 [2024-10-01 20:30:47.318000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:32:52.222 [2024-10-01 20:30:47.318017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 55.799 ms 00:32:52.222 [2024-10-01 20:30:47.318026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:52.222 [2024-10-01 20:30:47.328770] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:32:52.222 [2024-10-01 20:30:47.344014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:52.222 [2024-10-01 20:30:47.344067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:32:52.222 [2024-10-01 20:30:47.344080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.876 ms 00:32:52.222 [2024-10-01 20:30:47.344090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:52.222 [2024-10-01 20:30:47.344174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:52.222 [2024-10-01 20:30:47.344185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:32:52.222 [2024-10-01 20:30:47.344194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:32:52.222 [2024-10-01 20:30:47.344206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:52.222 [2024-10-01 20:30:47.344252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:52.222 [2024-10-01 20:30:47.344262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:32:52.222 [2024-10-01 20:30:47.344270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:32:52.222 [2024-10-01 20:30:47.344279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:52.222 [2024-10-01 20:30:47.344300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:52.222 [2024-10-01 20:30:47.344312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:32:52.222 [2024-10-01 20:30:47.344320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:32:52.222 [2024-10-01 20:30:47.344336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:52.222 [2024-10-01 20:30:47.344391] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:32:52.222 [2024-10-01 20:30:47.344406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:52.222 [2024-10-01 20:30:47.344413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:32:52.222 [2024-10-01 20:30:47.344423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:32:52.222 [2024-10-01 20:30:47.344430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:52.222 [2024-10-01 20:30:47.367337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:52.222 [2024-10-01 20:30:47.367377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:32:52.222 [2024-10-01 20:30:47.367393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.882 ms 00:32:52.222 [2024-10-01 20:30:47.367401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:52.222 [2024-10-01 20:30:47.367491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:52.222 [2024-10-01 20:30:47.367501] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:32:52.222 [2024-10-01 20:30:47.367511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:32:52.223 [2024-10-01 20:30:47.367519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:52.223 [2024-10-01 20:30:47.368436] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:32:52.223 [2024-10-01 20:30:47.371550] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 266.976 ms, result 0 00:32:52.223 [2024-10-01 20:30:47.372468] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:32:52.223 Some configs were skipped because the RPC state that can call them passed over. 00:32:52.223 20:30:47 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:32:52.480 [2024-10-01 20:30:47.603016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:52.480 [2024-10-01 20:30:47.603080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:32:52.480 [2024-10-01 20:30:47.603095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.472 ms 00:32:52.480 [2024-10-01 20:30:47.603104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:52.480 [2024-10-01 20:30:47.603139] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.602 ms, result 0 00:32:52.480 true 00:32:52.480 20:30:47 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:32:52.737 [2024-10-01 20:30:47.766669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:52.737 [2024-10-01 20:30:47.766734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:32:52.737 [2024-10-01 20:30:47.766750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.917 ms 00:32:52.737 [2024-10-01 20:30:47.766758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:52.737 [2024-10-01 20:30:47.766795] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.053 ms, result 0 00:32:52.737 true 00:32:52.737 20:30:47 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 74955 00:32:52.737 20:30:47 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 74955 ']' 00:32:52.738 20:30:47 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 74955 00:32:52.738 20:30:47 ftl.ftl_trim -- common/autotest_common.sh@955 -- # uname 00:32:52.738 20:30:47 ftl.ftl_trim -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:52.738 20:30:47 ftl.ftl_trim -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74955 00:32:52.738 killing process with pid 74955 00:32:52.738 20:30:47 ftl.ftl_trim -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:32:52.738 20:30:47 ftl.ftl_trim -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:32:52.738 20:30:47 ftl.ftl_trim -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74955' 00:32:52.738 20:30:47 ftl.ftl_trim -- common/autotest_common.sh@969 -- # kill 74955 00:32:52.738 20:30:47 ftl.ftl_trim -- common/autotest_common.sh@974 -- # wait 74955 00:32:53.671 [2024-10-01 20:30:48.604058] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:53.671 [2024-10-01 20:30:48.604124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:32:53.671 [2024-10-01 20:30:48.604137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:32:53.671 [2024-10-01 20:30:48.604146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:53.671 [2024-10-01 20:30:48.604169] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:32:53.671 [2024-10-01 20:30:48.606743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:53.671 [2024-10-01 20:30:48.606778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:32:53.671 [2024-10-01 20:30:48.606792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.557 ms 00:32:53.671 [2024-10-01 20:30:48.606801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:53.671 [2024-10-01 20:30:48.607093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:53.671 [2024-10-01 20:30:48.607116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:32:53.671 [2024-10-01 20:30:48.607127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.255 ms 00:32:53.671 [2024-10-01 20:30:48.607134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:53.671 [2024-10-01 20:30:48.610282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:53.671 [2024-10-01 20:30:48.610311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:32:53.671 [2024-10-01 20:30:48.610320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.128 ms 00:32:53.671 [2024-10-01 20:30:48.610326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:53.671 [2024-10-01 20:30:48.615721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:53.671 [2024-10-01 20:30:48.615750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:32:53.671 [2024-10-01 20:30:48.615764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.365 ms 00:32:53.671 [2024-10-01 20:30:48.615771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:53.671 [2024-10-01 20:30:48.623741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:53.671 [2024-10-01 20:30:48.623776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:32:53.671 [2024-10-01 20:30:48.623788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.919 ms 00:32:53.671 [2024-10-01 20:30:48.623795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:53.671 [2024-10-01 20:30:48.630213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:53.671 [2024-10-01 20:30:48.630249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:32:53.671 [2024-10-01 20:30:48.630259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.381 ms 00:32:53.671 [2024-10-01 20:30:48.630272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:53.671 [2024-10-01 20:30:48.630387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:53.671 [2024-10-01 20:30:48.630395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:32:53.671 [2024-10-01 20:30:48.630406] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:32:53.671 [2024-10-01 20:30:48.630414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:53.671 [2024-10-01 20:30:48.638321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:53.671 [2024-10-01 20:30:48.638353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:32:53.671 [2024-10-01 20:30:48.638362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.889 ms 00:32:53.671 [2024-10-01 20:30:48.638367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:53.671 [2024-10-01 20:30:48.645705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:53.671 [2024-10-01 20:30:48.645744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:32:53.671 [2024-10-01 20:30:48.645760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.305 ms 00:32:53.671 [2024-10-01 20:30:48.645766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:53.671 [2024-10-01 20:30:48.652521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:53.671 [2024-10-01 20:30:48.652548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:32:53.671 [2024-10-01 20:30:48.652557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.716 ms 00:32:53.672 [2024-10-01 20:30:48.652563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:53.672 [2024-10-01 20:30:48.659430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:53.672 [2024-10-01 20:30:48.659457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:32:53.672 [2024-10-01 20:30:48.659467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.807 ms 00:32:53.672 [2024-10-01 20:30:48.659473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:53.672 [2024-10-01 20:30:48.659503] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:32:53.672 [2024-10-01 20:30:48.659516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:32:53.672 [2024-10-01 20:30:48.659526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:32:53.672 [2024-10-01 20:30:48.659532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:32:53.672 [2024-10-01 20:30:48.659539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:32:53.672 [2024-10-01 20:30:48.659545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:32:53.672 [2024-10-01 20:30:48.659554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:32:53.672 [2024-10-01 20:30:48.659560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:32:53.672 [2024-10-01 20:30:48.659568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:32:53.672 [2024-10-01 20:30:48.659574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:32:53.672 [2024-10-01 20:30:48.659581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:32:53.672 [2024-10-01 20:30:48.659587] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:32:53.672 [2024-10-01 20:30:48.659594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:32:53.672 [2024-10-01 20:30:48.659600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:32:53.672 [2024-10-01 20:30:48.659608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:32:53.672 [2024-10-01 20:30:48.659614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:32:53.672 [2024-10-01 20:30:48.659623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:32:53.672 [2024-10-01 20:30:48.659629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:32:53.672 [2024-10-01 20:30:48.659636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:32:53.672 [2024-10-01 20:30:48.659641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:32:53.672 [2024-10-01 20:30:48.659649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:32:53.672 [2024-10-01 20:30:48.659655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:32:53.672 [2024-10-01 20:30:48.659663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:32:53.672 [2024-10-01 20:30:48.659669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:32:53.672 [2024-10-01 20:30:48.659676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:32:53.672 [2024-10-01 20:30:48.659682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:32:53.672 [2024-10-01 20:30:48.659698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:32:53.672 [2024-10-01 20:30:48.659705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:32:53.672 [2024-10-01 20:30:48.659712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:32:53.672 [2024-10-01 20:30:48.659718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:32:53.672 [2024-10-01 20:30:48.659725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:32:53.672 [2024-10-01 20:30:48.659731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:32:53.672 [2024-10-01 20:30:48.659739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:32:53.672 [2024-10-01 20:30:48.659745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:32:53.672 [2024-10-01 20:30:48.659752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:32:53.672 [2024-10-01 20:30:48.659758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:32:53.672 
[2024-10-01 20:30:48.659766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:32:53.672 [2024-10-01 20:30:48.659772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:32:53.672 [2024-10-01 20:30:48.659781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:32:53.672 [2024-10-01 20:30:48.659787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:32:53.672 [2024-10-01 20:30:48.659794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:32:53.672 [2024-10-01 20:30:48.659800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:32:53.672 [2024-10-01 20:30:48.659809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:32:53.672 [2024-10-01 20:30:48.659815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:32:53.672 [2024-10-01 20:30:48.659822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:32:53.672 [2024-10-01 20:30:48.659828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:32:53.672 [2024-10-01 20:30:48.659835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:32:53.672 [2024-10-01 20:30:48.659841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:32:53.672 [2024-10-01 20:30:48.659848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:32:53.672 [2024-10-01 20:30:48.659854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:32:53.672 [2024-10-01 20:30:48.659862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:32:53.672 [2024-10-01 20:30:48.659867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:32:53.672 [2024-10-01 20:30:48.659874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:32:53.672 [2024-10-01 20:30:48.659880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:32:53.672 [2024-10-01 20:30:48.659889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:32:53.672 [2024-10-01 20:30:48.659894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:32:53.672 [2024-10-01 20:30:48.659902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:32:53.672 [2024-10-01 20:30:48.659908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:32:53.672 [2024-10-01 20:30:48.659915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:32:53.672 [2024-10-01 20:30:48.659921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:32:53.672 [2024-10-01 20:30:48.659928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 
state: free 00:32:53.672 [2024-10-01 20:30:48.659933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:32:53.672 [2024-10-01 20:30:48.659940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:32:53.672 [2024-10-01 20:30:48.659947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:32:53.672 [2024-10-01 20:30:48.659954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:32:53.672 [2024-10-01 20:30:48.659959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:32:53.672 [2024-10-01 20:30:48.659967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:32:53.672 [2024-10-01 20:30:48.659972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:32:53.672 [2024-10-01 20:30:48.659980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:32:53.672 [2024-10-01 20:30:48.659987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:32:53.672 [2024-10-01 20:30:48.659997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:32:53.672 [2024-10-01 20:30:48.660003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:32:53.672 [2024-10-01 20:30:48.660011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:32:53.672 [2024-10-01 20:30:48.660016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:32:53.672 [2024-10-01 20:30:48.660024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:32:53.672 [2024-10-01 20:30:48.660030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:32:53.672 [2024-10-01 20:30:48.660037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:32:53.672 [2024-10-01 20:30:48.660043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:32:53.672 [2024-10-01 20:30:48.660050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:32:53.672 [2024-10-01 20:30:48.660056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:32:53.672 [2024-10-01 20:30:48.660063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:32:53.672 [2024-10-01 20:30:48.660069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:32:53.672 [2024-10-01 20:30:48.660076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:32:53.672 [2024-10-01 20:30:48.660081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:32:53.672 [2024-10-01 20:30:48.660089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:32:53.672 [2024-10-01 20:30:48.660094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 
0 / 261120 wr_cnt: 0 state: free 00:32:53.672 [2024-10-01 20:30:48.660103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:32:53.673 [2024-10-01 20:30:48.660109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:32:53.673 [2024-10-01 20:30:48.660116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:32:53.673 [2024-10-01 20:30:48.660121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:32:53.673 [2024-10-01 20:30:48.660129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:32:53.673 [2024-10-01 20:30:48.660135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:32:53.673 [2024-10-01 20:30:48.660142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:32:53.673 [2024-10-01 20:30:48.660148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:32:53.673 [2024-10-01 20:30:48.660154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:32:53.673 [2024-10-01 20:30:48.660160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:32:53.673 [2024-10-01 20:30:48.660168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:32:53.673 [2024-10-01 20:30:48.660174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:32:53.673 [2024-10-01 20:30:48.660181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:32:53.673 [2024-10-01 20:30:48.660187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:32:53.673 [2024-10-01 20:30:48.660195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:32:53.673 [2024-10-01 20:30:48.660207] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:32:53.673 [2024-10-01 20:30:48.660216] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: bc42671f-8b04-4386-a6b4-5ad0168aad47 00:32:53.673 [2024-10-01 20:30:48.660222] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:32:53.673 [2024-10-01 20:30:48.660229] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:32:53.673 [2024-10-01 20:30:48.660235] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:32:53.673 [2024-10-01 20:30:48.660242] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:32:53.673 [2024-10-01 20:30:48.660255] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:32:53.673 [2024-10-01 20:30:48.660263] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:32:53.673 [2024-10-01 20:30:48.660268] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:32:53.673 [2024-10-01 20:30:48.660275] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:32:53.673 [2024-10-01 20:30:48.660280] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:32:53.673 [2024-10-01 20:30:48.660287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
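(Editorial note, not part of the captured log: two cross-checks on the shutdown dump above, written as shell arithmetic.)
# 1) The two "bdev_ftl_unmap ... --num_blocks 1024" RPCs earlier in this run trim
#    the first and the last 1024-block range of the device: the second range starts
#    at lba 23591936, and 23591936 + 1024 lands exactly on the L2P entry count
#    logged at startup.
echo $(( 23591936 + 1024 ))          # -> 23592960, the logged "L2P entries"
# 2) Every band reads "0 / 261120 wr_cnt: 0 state: free", and "WAF: inf" is
#    consistent with "user writes: 0": with zero user writes, the ratio of total
#    writes (960, all metadata) to user writes is undefined and prints as inf.
#    Assuming the FTL's 4 KiB block size, one 261120-block band is exactly 1020 MiB:
echo $(( 261120 * 4096 / 1048576 ))  # -> 1020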
00:32:53.673 [2024-10-01 20:30:48.660292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:32:53.673 [2024-10-01 20:30:48.660300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.785 ms 00:32:53.673 [2024-10-01 20:30:48.660306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:53.673 [2024-10-01 20:30:48.670254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:53.673 [2024-10-01 20:30:48.670282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:32:53.673 [2024-10-01 20:30:48.670296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.930 ms 00:32:53.673 [2024-10-01 20:30:48.670302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:53.673 [2024-10-01 20:30:48.670599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:53.673 [2024-10-01 20:30:48.670614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:32:53.673 [2024-10-01 20:30:48.670622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.260 ms 00:32:53.673 [2024-10-01 20:30:48.670628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:53.673 [2024-10-01 20:30:48.702075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:53.673 [2024-10-01 20:30:48.702117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:53.673 [2024-10-01 20:30:48.702129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:53.673 [2024-10-01 20:30:48.702136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:53.673 [2024-10-01 20:30:48.703197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:53.673 [2024-10-01 20:30:48.703222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:53.673 [2024-10-01 20:30:48.703231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:53.673 [2024-10-01 20:30:48.703237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:53.673 [2024-10-01 20:30:48.703280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:53.673 [2024-10-01 20:30:48.703287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:53.673 [2024-10-01 20:30:48.703297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:53.673 [2024-10-01 20:30:48.703305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:53.673 [2024-10-01 20:30:48.703320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:53.673 [2024-10-01 20:30:48.703327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:53.673 [2024-10-01 20:30:48.703335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:53.673 [2024-10-01 20:30:48.703340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:53.673 [2024-10-01 20:30:48.763941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:53.673 [2024-10-01 20:30:48.763993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:53.673 [2024-10-01 20:30:48.764007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:53.673 [2024-10-01 20:30:48.764014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:53.673 [2024-10-01 
20:30:48.814968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:53.673 [2024-10-01 20:30:48.815016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:53.673 [2024-10-01 20:30:48.815026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:53.673 [2024-10-01 20:30:48.815033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:53.673 [2024-10-01 20:30:48.815104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:53.673 [2024-10-01 20:30:48.815112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:53.673 [2024-10-01 20:30:48.815122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:53.673 [2024-10-01 20:30:48.815128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:53.673 [2024-10-01 20:30:48.815154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:53.673 [2024-10-01 20:30:48.815160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:53.673 [2024-10-01 20:30:48.815168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:53.673 [2024-10-01 20:30:48.815174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:53.673 [2024-10-01 20:30:48.815250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:53.673 [2024-10-01 20:30:48.815258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:53.673 [2024-10-01 20:30:48.815265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:53.673 [2024-10-01 20:30:48.815271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:53.673 [2024-10-01 20:30:48.815298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:53.673 [2024-10-01 20:30:48.815305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:32:53.673 [2024-10-01 20:30:48.815313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:53.673 [2024-10-01 20:30:48.815320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:53.673 [2024-10-01 20:30:48.815351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:53.673 [2024-10-01 20:30:48.815358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:53.673 [2024-10-01 20:30:48.815367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:53.673 [2024-10-01 20:30:48.815373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:53.673 [2024-10-01 20:30:48.815410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:53.673 [2024-10-01 20:30:48.815419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:53.673 [2024-10-01 20:30:48.815426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:53.673 [2024-10-01 20:30:48.815432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:53.673 [2024-10-01 20:30:48.815549] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 211.468 ms, result 0 00:32:54.606 20:30:49 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 
--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:32:54.864 [2024-10-01 20:30:49.829625] Starting SPDK v25.01-pre git sha1 0c2005fb5 / DPDK 24.03.0 initialization... 00:32:54.864 [2024-10-01 20:30:49.829755] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75013 ] 00:32:54.864 [2024-10-01 20:30:49.975873] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:55.122 [2024-10-01 20:30:50.135033] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:32:55.380 [2024-10-01 20:30:50.507976] bdev.c:8310:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:32:55.380 [2024-10-01 20:30:50.508036] bdev.c:8310:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:32:55.639 [2024-10-01 20:30:50.659472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:55.639 [2024-10-01 20:30:50.659520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:32:55.639 [2024-10-01 20:30:50.659533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:32:55.639 [2024-10-01 20:30:50.659540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:55.639 [2024-10-01 20:30:50.661748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:55.639 [2024-10-01 20:30:50.661780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:55.639 [2024-10-01 20:30:50.661788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.193 ms 00:32:55.639 [2024-10-01 20:30:50.661796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:55.639 [2024-10-01 20:30:50.661855] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:32:55.639 [2024-10-01 20:30:50.662418] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:32:55.639 [2024-10-01 20:30:50.662436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:55.639 [2024-10-01 20:30:50.662444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:55.639 [2024-10-01 20:30:50.662451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.586 ms 00:32:55.639 [2024-10-01 20:30:50.662457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:55.639 [2024-10-01 20:30:50.663572] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:32:55.639 [2024-10-01 20:30:50.673258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:55.639 [2024-10-01 20:30:50.673285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:32:55.639 [2024-10-01 20:30:50.673294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.688 ms 00:32:55.639 [2024-10-01 20:30:50.673300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:55.639 [2024-10-01 20:30:50.673373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:55.639 [2024-10-01 20:30:50.673382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:32:55.639 [2024-10-01 20:30:50.673392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:32:55.639 [2024-10-01 
20:30:50.673397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:55.639 [2024-10-01 20:30:50.678171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:55.639 [2024-10-01 20:30:50.678194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:55.639 [2024-10-01 20:30:50.678202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.742 ms 00:32:55.639 [2024-10-01 20:30:50.678208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:55.639 [2024-10-01 20:30:50.678279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:55.639 [2024-10-01 20:30:50.678289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:55.639 [2024-10-01 20:30:50.678296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:32:55.639 [2024-10-01 20:30:50.678302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:55.639 [2024-10-01 20:30:50.678321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:55.639 [2024-10-01 20:30:50.678327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:32:55.639 [2024-10-01 20:30:50.678334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:32:55.639 [2024-10-01 20:30:50.678340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:55.639 [2024-10-01 20:30:50.678358] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:32:55.639 [2024-10-01 20:30:50.681360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:55.639 [2024-10-01 20:30:50.681381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:55.639 [2024-10-01 20:30:50.681393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.006 ms 00:32:55.639 [2024-10-01 20:30:50.681399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:55.639 [2024-10-01 20:30:50.681428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:55.639 [2024-10-01 20:30:50.681438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:32:55.639 [2024-10-01 20:30:50.681444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:32:55.639 [2024-10-01 20:30:50.681450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:55.639 [2024-10-01 20:30:50.681464] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:32:55.639 [2024-10-01 20:30:50.681479] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:32:55.639 [2024-10-01 20:30:50.681506] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:32:55.639 [2024-10-01 20:30:50.681518] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:32:55.639 [2024-10-01 20:30:50.681601] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:32:55.639 [2024-10-01 20:30:50.681614] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:32:55.639 [2024-10-01 20:30:50.681622] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 
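(Editorial note, not part of the captured log: the spdk_dd invocation that kicked off this second FTL attach, restated one argument per line for readability -- every value is taken verbatim from the trim.sh trace above. The --json config re-creates the ftl0 bdev inside the spdk_dd app, which is why a fresh "FTL startup" sequence is being logged here. Assuming the FTL bdev's 4 KiB block size, --count=65536 reads 256 MiB out of ftl0 into a flat file.)
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
    --ib=ftl0 \
    --of=/home/vagrant/spdk_repo/spdk/test/ftl/data \
    --count=65536 \
    --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json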
00:32:55.639 [2024-10-01 20:30:50.681630] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:32:55.639 [2024-10-01 20:30:50.681638] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:32:55.639 [2024-10-01 20:30:50.681644] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:32:55.639 [2024-10-01 20:30:50.681650] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:32:55.639 [2024-10-01 20:30:50.681657] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:32:55.639 [2024-10-01 20:30:50.681662] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:32:55.639 [2024-10-01 20:30:50.681668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:55.639 [2024-10-01 20:30:50.681676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:32:55.639 [2024-10-01 20:30:50.681682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.206 ms 00:32:55.639 [2024-10-01 20:30:50.681688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:55.639 [2024-10-01 20:30:50.681768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:55.639 [2024-10-01 20:30:50.681775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:32:55.639 [2024-10-01 20:30:50.681781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:32:55.639 [2024-10-01 20:30:50.681787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:55.639 [2024-10-01 20:30:50.681870] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:32:55.639 [2024-10-01 20:30:50.681878] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:32:55.639 [2024-10-01 20:30:50.681886] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:32:55.639 [2024-10-01 20:30:50.681893] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:55.639 [2024-10-01 20:30:50.681899] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:32:55.639 [2024-10-01 20:30:50.681904] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:32:55.639 [2024-10-01 20:30:50.681910] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:32:55.639 [2024-10-01 20:30:50.681915] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:32:55.639 [2024-10-01 20:30:50.681920] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:32:55.639 [2024-10-01 20:30:50.681925] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:32:55.639 [2024-10-01 20:30:50.681932] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:32:55.639 [2024-10-01 20:30:50.681943] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:32:55.639 [2024-10-01 20:30:50.681948] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:32:55.639 [2024-10-01 20:30:50.681953] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:32:55.639 [2024-10-01 20:30:50.681958] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:32:55.639 [2024-10-01 20:30:50.681963] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:55.639 [2024-10-01 20:30:50.681969] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
nvc_md_mirror 00:32:55.639 [2024-10-01 20:30:50.681974] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:32:55.640 [2024-10-01 20:30:50.681980] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:55.640 [2024-10-01 20:30:50.681986] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:32:55.640 [2024-10-01 20:30:50.681991] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:32:55.640 [2024-10-01 20:30:50.681996] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:55.640 [2024-10-01 20:30:50.682001] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:32:55.640 [2024-10-01 20:30:50.682006] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:32:55.640 [2024-10-01 20:30:50.682011] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:55.640 [2024-10-01 20:30:50.682016] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:32:55.640 [2024-10-01 20:30:50.682021] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:32:55.640 [2024-10-01 20:30:50.682026] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:55.640 [2024-10-01 20:30:50.682031] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:32:55.640 [2024-10-01 20:30:50.682036] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:32:55.640 [2024-10-01 20:30:50.682041] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:55.640 [2024-10-01 20:30:50.682046] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:32:55.640 [2024-10-01 20:30:50.682052] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:32:55.640 [2024-10-01 20:30:50.682057] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:32:55.640 [2024-10-01 20:30:50.682062] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:32:55.640 [2024-10-01 20:30:50.682067] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:32:55.640 [2024-10-01 20:30:50.682072] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:32:55.640 [2024-10-01 20:30:50.682077] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:32:55.640 [2024-10-01 20:30:50.682082] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:32:55.640 [2024-10-01 20:30:50.682087] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:55.640 [2024-10-01 20:30:50.682092] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:32:55.640 [2024-10-01 20:30:50.682097] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:32:55.640 [2024-10-01 20:30:50.682104] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:55.640 [2024-10-01 20:30:50.682109] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:32:55.640 [2024-10-01 20:30:50.682116] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:32:55.640 [2024-10-01 20:30:50.682121] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:32:55.640 [2024-10-01 20:30:50.682127] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:55.640 [2024-10-01 20:30:50.682133] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:32:55.640 [2024-10-01 20:30:50.682139] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:32:55.640 [2024-10-01 20:30:50.682144] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:32:55.640 [2024-10-01 20:30:50.682149] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:32:55.640 [2024-10-01 20:30:50.682154] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:32:55.640 [2024-10-01 20:30:50.682160] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:32:55.640 [2024-10-01 20:30:50.682167] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:32:55.640 [2024-10-01 20:30:50.682176] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:55.640 [2024-10-01 20:30:50.682183] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:32:55.640 [2024-10-01 20:30:50.682188] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:32:55.640 [2024-10-01 20:30:50.682194] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:32:55.640 [2024-10-01 20:30:50.682200] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:32:55.640 [2024-10-01 20:30:50.682205] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:32:55.640 [2024-10-01 20:30:50.682211] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:32:55.640 [2024-10-01 20:30:50.682216] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:32:55.640 [2024-10-01 20:30:50.682222] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:32:55.640 [2024-10-01 20:30:50.682227] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:32:55.640 [2024-10-01 20:30:50.682232] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:32:55.640 [2024-10-01 20:30:50.682238] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:32:55.640 [2024-10-01 20:30:50.682243] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:32:55.640 [2024-10-01 20:30:50.682249] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:32:55.640 [2024-10-01 20:30:50.682255] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:32:55.640 [2024-10-01 20:30:50.682261] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:32:55.640 [2024-10-01 20:30:50.682267] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:55.640 [2024-10-01 20:30:50.682273] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:32:55.640 [2024-10-01 20:30:50.682278] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:32:55.640 [2024-10-01 20:30:50.682284] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:32:55.640 [2024-10-01 20:30:50.682291] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:32:55.640 [2024-10-01 20:30:50.682297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:55.640 [2024-10-01 20:30:50.682305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:32:55.640 [2024-10-01 20:30:50.682310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.481 ms 00:32:55.640 [2024-10-01 20:30:50.682316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:55.640 [2024-10-01 20:30:50.705857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:55.640 [2024-10-01 20:30:50.705891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:55.640 [2024-10-01 20:30:50.705900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.489 ms 00:32:55.640 [2024-10-01 20:30:50.705908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:55.640 [2024-10-01 20:30:50.706011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:55.640 [2024-10-01 20:30:50.706019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:32:55.640 [2024-10-01 20:30:50.706026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:32:55.640 [2024-10-01 20:30:50.706032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:55.640 [2024-10-01 20:30:50.730752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:55.640 [2024-10-01 20:30:50.730783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:55.640 [2024-10-01 20:30:50.730791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.703 ms 00:32:55.640 [2024-10-01 20:30:50.730798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:55.640 [2024-10-01 20:30:50.730857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:55.640 [2024-10-01 20:30:50.730865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:55.640 [2024-10-01 20:30:50.730872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:32:55.640 [2024-10-01 20:30:50.730878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:55.640 [2024-10-01 20:30:50.731176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:55.640 [2024-10-01 20:30:50.731194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:55.640 [2024-10-01 20:30:50.731202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.281 ms 00:32:55.640 [2024-10-01 20:30:50.731208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:55.640 [2024-10-01 20:30:50.731319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:32:55.640 [2024-10-01 20:30:50.731326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:55.640 [2024-10-01 20:30:50.731333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:32:55.640 [2024-10-01 20:30:50.731338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:55.640 [2024-10-01 20:30:50.742105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:55.640 [2024-10-01 20:30:50.742131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:55.640 [2024-10-01 20:30:50.742139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.750 ms 00:32:55.640 [2024-10-01 20:30:50.742146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:55.640 [2024-10-01 20:30:50.752036] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:32:55.640 [2024-10-01 20:30:50.752065] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:32:55.640 [2024-10-01 20:30:50.752074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:55.640 [2024-10-01 20:30:50.752080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:32:55.640 [2024-10-01 20:30:50.752087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.850 ms 00:32:55.640 [2024-10-01 20:30:50.752093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:55.640 [2024-10-01 20:30:50.771106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:55.640 [2024-10-01 20:30:50.771136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:32:55.640 [2024-10-01 20:30:50.771152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.943 ms 00:32:55.640 [2024-10-01 20:30:50.771159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:55.640 [2024-10-01 20:30:50.780234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:55.641 [2024-10-01 20:30:50.780260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:32:55.641 [2024-10-01 20:30:50.780268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.012 ms 00:32:55.641 [2024-10-01 20:30:50.780274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:55.641 [2024-10-01 20:30:50.788920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:55.641 [2024-10-01 20:30:50.788944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:32:55.641 [2024-10-01 20:30:50.788952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.601 ms 00:32:55.641 [2024-10-01 20:30:50.788959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:55.641 [2024-10-01 20:30:50.789439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:55.641 [2024-10-01 20:30:50.789456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:32:55.641 [2024-10-01 20:30:50.789464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.416 ms 00:32:55.641 [2024-10-01 20:30:50.789470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:55.641 [2024-10-01 20:30:50.834492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:55.641 [2024-10-01 20:30:50.834534] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:32:55.641 [2024-10-01 20:30:50.834545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.003 ms 00:32:55.641 [2024-10-01 20:30:50.834553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:55.641 [2024-10-01 20:30:50.842843] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:32:55.899 [2024-10-01 20:30:50.855506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:55.899 [2024-10-01 20:30:50.855545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:32:55.899 [2024-10-01 20:30:50.855555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.868 ms 00:32:55.899 [2024-10-01 20:30:50.855562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:55.899 [2024-10-01 20:30:50.855655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:55.899 [2024-10-01 20:30:50.855663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:32:55.899 [2024-10-01 20:30:50.855670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:32:55.899 [2024-10-01 20:30:50.855677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:55.899 [2024-10-01 20:30:50.855735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:55.899 [2024-10-01 20:30:50.855745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:32:55.899 [2024-10-01 20:30:50.855752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:32:55.899 [2024-10-01 20:30:50.855758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:55.899 [2024-10-01 20:30:50.855775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:55.900 [2024-10-01 20:30:50.855782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:32:55.900 [2024-10-01 20:30:50.855788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:32:55.900 [2024-10-01 20:30:50.855793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:55.900 [2024-10-01 20:30:50.855819] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:32:55.900 [2024-10-01 20:30:50.855827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:55.900 [2024-10-01 20:30:50.855835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:32:55.900 [2024-10-01 20:30:50.855841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:32:55.900 [2024-10-01 20:30:50.855847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:55.900 [2024-10-01 20:30:50.874422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:55.900 [2024-10-01 20:30:50.874453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:32:55.900 [2024-10-01 20:30:50.874462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.559 ms 00:32:55.900 [2024-10-01 20:30:50.874469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:55.900 [2024-10-01 20:30:50.874549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:55.900 [2024-10-01 20:30:50.874557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:32:55.900 [2024-10-01 20:30:50.874564] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:32:55.900 [2024-10-01 20:30:50.874570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:55.900 [2024-10-01 20:30:50.875259] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:32:55.900 [2024-10-01 20:30:50.877576] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 215.569 ms, result 0 00:32:55.900 [2024-10-01 20:30:50.878072] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:32:55.900 [2024-10-01 20:30:50.893026] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:33:02.296  Copying: 45/256 [MB] (45 MBps) Copying: 85/256 [MB] (39 MBps) Copying: 128/256 [MB] (43 MBps) Copying: 170/256 [MB] (41 MBps) Copying: 213/256 [MB] (43 MBps) Copying: 256/256 [MB] (average 42 MBps)[2024-10-01 20:30:57.333384] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:33:02.296 [2024-10-01 20:30:57.343182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:02.296 [2024-10-01 20:30:57.343219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:33:02.296 [2024-10-01 20:30:57.343230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:33:02.296 [2024-10-01 20:30:57.343237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:02.296 [2024-10-01 20:30:57.343257] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:33:02.296 [2024-10-01 20:30:57.345447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:02.296 [2024-10-01 20:30:57.345476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:33:02.296 [2024-10-01 20:30:57.345484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.179 ms 00:33:02.296 [2024-10-01 20:30:57.345491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:02.296 [2024-10-01 20:30:57.345722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:02.296 [2024-10-01 20:30:57.345741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:33:02.296 [2024-10-01 20:30:57.345749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.211 ms 00:33:02.296 [2024-10-01 20:30:57.345756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:02.296 [2024-10-01 20:30:57.348634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:02.296 [2024-10-01 20:30:57.348653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:33:02.296 [2024-10-01 20:30:57.348661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.865 ms 00:33:02.296 [2024-10-01 20:30:57.348667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:02.296 [2024-10-01 20:30:57.354911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:02.296 [2024-10-01 20:30:57.354951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:33:02.296 [2024-10-01 20:30:57.354966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.227 ms 00:33:02.296 [2024-10-01 20:30:57.354974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
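Annotation: the superblock dump above (upgrade/ftl_sb_v5.c) lists each region in raw hex block offsets and sizes, while the preceding ftl_layout dump prints the same regions in MiB; with FTL's 4 KiB block the two agree. A quick spot check for the L2P region (type:0x2, blk_offs:0x20, blk_sz:0x5a00), using nothing beyond shell arithmetic:

  # 0x5a00 blocks x 4096 B = 90 MiB -- matches "Region l2p ... blocks: 90.00 MiB"
  echo $(( 0x5a00 * 4096 / 1024 / 1024 ))   # -> 90
  # 0x20 blocks x 4096 B = 131072 B = 0.125 MiB -- printed above as "offset: 0.12 MiB"
  echo $(( 0x20 * 4096 ))                   # -> 131072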
00:33:02.296 [2024-10-01 20:30:57.373708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:02.296 [2024-10-01 20:30:57.373742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:33:02.296 [2024-10-01 20:30:57.373752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.676 ms 00:33:02.296 [2024-10-01 20:30:57.373758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:02.296 [2024-10-01 20:30:57.384650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:02.296 [2024-10-01 20:30:57.384680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:33:02.296 [2024-10-01 20:30:57.384696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.872 ms 00:33:02.296 [2024-10-01 20:30:57.384703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:02.296 [2024-10-01 20:30:57.384806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:02.296 [2024-10-01 20:30:57.384813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:33:02.296 [2024-10-01 20:30:57.384821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:33:02.296 [2024-10-01 20:30:57.384827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:02.296 [2024-10-01 20:30:57.402843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:02.296 [2024-10-01 20:30:57.402872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:33:02.296 [2024-10-01 20:30:57.402881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.999 ms 00:33:02.296 [2024-10-01 20:30:57.402887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:02.296 [2024-10-01 20:30:57.420685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:02.296 [2024-10-01 20:30:57.420720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:33:02.296 [2024-10-01 20:30:57.420727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.770 ms 00:33:02.296 [2024-10-01 20:30:57.420733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:02.296 [2024-10-01 20:30:57.438094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:02.296 [2024-10-01 20:30:57.438123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:33:02.296 [2024-10-01 20:30:57.438131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.342 ms 00:33:02.296 [2024-10-01 20:30:57.438137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:02.296 [2024-10-01 20:30:57.455985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:02.296 [2024-10-01 20:30:57.456020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:33:02.296 [2024-10-01 20:30:57.456028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.791 ms 00:33:02.296 [2024-10-01 20:30:57.456034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:02.296 [2024-10-01 20:30:57.456054] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:33:02.296 [2024-10-01 20:30:57.456065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:33:02.296 [2024-10-01 20:30:57.456075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 
261120 wr_cnt: 0 state: free 00:33:02.296 [2024-10-01 20:30:57.456081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:33:02.296 [2024-10-01 20:30:57.456087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:33:02.296 [2024-10-01 20:30:57.456093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:33:02.296 [2024-10-01 20:30:57.456100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:33:02.296 [2024-10-01 20:30:57.456106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:33:02.296 [2024-10-01 20:30:57.456112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:33:02.297 [2024-10-01 20:30:57.456118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:33:02.297 [2024-10-01 20:30:57.456124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:33:02.297 [2024-10-01 20:30:57.456130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:33:02.297 [2024-10-01 20:30:57.456136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:33:02.297 [2024-10-01 20:30:57.456142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:33:02.297 [2024-10-01 20:30:57.456148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:33:02.297 [2024-10-01 20:30:57.456154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:33:02.297 [2024-10-01 20:30:57.456160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:33:02.297 [2024-10-01 20:30:57.456166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:33:02.297 [2024-10-01 20:30:57.456172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:33:02.297 [2024-10-01 20:30:57.456178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:33:02.297 [2024-10-01 20:30:57.456184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:33:02.297 [2024-10-01 20:30:57.456190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:33:02.297 [2024-10-01 20:30:57.456195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:33:02.297 [2024-10-01 20:30:57.456201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:33:02.297 [2024-10-01 20:30:57.456207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:33:02.297 [2024-10-01 20:30:57.456212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:33:02.297 [2024-10-01 20:30:57.456218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:33:02.297 [2024-10-01 20:30:57.456223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 27: 0 / 261120 wr_cnt: 0 state: free 00:33:02.297 [2024-10-01 20:30:57.456229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:33:02.297 [2024-10-01 20:30:57.456235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:33:02.297 [2024-10-01 20:30:57.456244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:33:02.297 [2024-10-01 20:30:57.456250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:33:02.297 [2024-10-01 20:30:57.456255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:33:02.297 [2024-10-01 20:30:57.456261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:33:02.297 [2024-10-01 20:30:57.456267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:33:02.297 [2024-10-01 20:30:57.456273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:33:02.297 [2024-10-01 20:30:57.456278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:33:02.297 [2024-10-01 20:30:57.456284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:33:02.297 [2024-10-01 20:30:57.456290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:33:02.297 [2024-10-01 20:30:57.456295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:33:02.297 [2024-10-01 20:30:57.456301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:33:02.297 [2024-10-01 20:30:57.456306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:33:02.297 [2024-10-01 20:30:57.456312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:33:02.297 [2024-10-01 20:30:57.456318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:33:02.297 [2024-10-01 20:30:57.456323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:33:02.297 [2024-10-01 20:30:57.456329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:33:02.297 [2024-10-01 20:30:57.456335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:33:02.297 [2024-10-01 20:30:57.456341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:33:02.297 [2024-10-01 20:30:57.456347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:33:02.297 [2024-10-01 20:30:57.456353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:33:02.297 [2024-10-01 20:30:57.456359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:33:02.297 [2024-10-01 20:30:57.456364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:33:02.297 [2024-10-01 20:30:57.456370] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:33:02.297 [2024-10-01 20:30:57.456376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:33:02.297 [2024-10-01 20:30:57.456381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:33:02.297 [2024-10-01 20:30:57.456387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:33:02.297 [2024-10-01 20:30:57.456392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:33:02.297 [2024-10-01 20:30:57.456398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:33:02.297 [2024-10-01 20:30:57.456404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:33:02.297 [2024-10-01 20:30:57.456410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:33:02.297 [2024-10-01 20:30:57.456415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:33:02.297 [2024-10-01 20:30:57.456421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:33:02.297 [2024-10-01 20:30:57.456429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:33:02.297 [2024-10-01 20:30:57.456435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:33:02.297 [2024-10-01 20:30:57.456440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:33:02.297 [2024-10-01 20:30:57.456446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:33:02.297 [2024-10-01 20:30:57.456451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:33:02.297 [2024-10-01 20:30:57.456457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:33:02.297 [2024-10-01 20:30:57.456463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:33:02.297 [2024-10-01 20:30:57.456469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:33:02.297 [2024-10-01 20:30:57.456474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:33:02.297 [2024-10-01 20:30:57.456480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:33:02.297 [2024-10-01 20:30:57.456485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:33:02.297 [2024-10-01 20:30:57.456491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:33:02.297 [2024-10-01 20:30:57.456497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:33:02.297 [2024-10-01 20:30:57.456503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:33:02.297 [2024-10-01 20:30:57.456508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:33:02.297 [2024-10-01 20:30:57.456514] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:33:02.297 [2024-10-01 20:30:57.456520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:33:02.297 [2024-10-01 20:30:57.456526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:33:02.297 [2024-10-01 20:30:57.456532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:33:02.297 [2024-10-01 20:30:57.456551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:33:02.297 [2024-10-01 20:30:57.456558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:33:02.297 [2024-10-01 20:30:57.456564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:33:02.297 [2024-10-01 20:30:57.456569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:33:02.297 [2024-10-01 20:30:57.456575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:33:02.297 [2024-10-01 20:30:57.456581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:33:02.297 [2024-10-01 20:30:57.456587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:33:02.297 [2024-10-01 20:30:57.456593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:33:02.297 [2024-10-01 20:30:57.456599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:33:02.297 [2024-10-01 20:30:57.456605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:33:02.297 [2024-10-01 20:30:57.456611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:33:02.297 [2024-10-01 20:30:57.456617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:33:02.297 [2024-10-01 20:30:57.456623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:33:02.297 [2024-10-01 20:30:57.456632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:33:02.297 [2024-10-01 20:30:57.456638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:33:02.297 [2024-10-01 20:30:57.456644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:33:02.297 [2024-10-01 20:30:57.456650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:33:02.297 [2024-10-01 20:30:57.456656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:33:02.297 [2024-10-01 20:30:57.456662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:33:02.298 [2024-10-01 20:30:57.456675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:33:02.298 [2024-10-01 20:30:57.456688] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:33:02.298 [2024-10-01 20:30:57.457106] ftl_debug.c: 
212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: bc42671f-8b04-4386-a6b4-5ad0168aad47 00:33:02.298 [2024-10-01 20:30:57.457116] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:33:02.298 [2024-10-01 20:30:57.457124] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:33:02.298 [2024-10-01 20:30:57.457130] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:33:02.298 [2024-10-01 20:30:57.457142] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:33:02.298 [2024-10-01 20:30:57.457148] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:33:02.298 [2024-10-01 20:30:57.457155] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:33:02.298 [2024-10-01 20:30:57.457161] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:33:02.298 [2024-10-01 20:30:57.457167] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:33:02.298 [2024-10-01 20:30:57.457172] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:33:02.298 [2024-10-01 20:30:57.457180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:02.298 [2024-10-01 20:30:57.457187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:33:02.298 [2024-10-01 20:30:57.457194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.126 ms 00:33:02.298 [2024-10-01 20:30:57.457201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:02.298 [2024-10-01 20:30:57.467395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:02.298 [2024-10-01 20:30:57.467435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:33:02.298 [2024-10-01 20:30:57.467446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.169 ms 00:33:02.298 [2024-10-01 20:30:57.467453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:02.298 [2024-10-01 20:30:57.467772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:02.298 [2024-10-01 20:30:57.467787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:33:02.298 [2024-10-01 20:30:57.467794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.273 ms 00:33:02.298 [2024-10-01 20:30:57.467800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:02.298 [2024-10-01 20:30:57.494437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:02.298 [2024-10-01 20:30:57.494483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:33:02.298 [2024-10-01 20:30:57.494492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:02.298 [2024-10-01 20:30:57.494498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:02.298 [2024-10-01 20:30:57.494574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:02.298 [2024-10-01 20:30:57.494581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:33:02.298 [2024-10-01 20:30:57.494589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:02.298 [2024-10-01 20:30:57.494596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:02.298 [2024-10-01 20:30:57.494630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:02.298 [2024-10-01 20:30:57.494640] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize trim map 00:33:02.298 [2024-10-01 20:30:57.494647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:02.298 [2024-10-01 20:30:57.494653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:02.298 [2024-10-01 20:30:57.494668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:02.298 [2024-10-01 20:30:57.494674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:33:02.298 [2024-10-01 20:30:57.494680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:02.298 [2024-10-01 20:30:57.494686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:02.556 [2024-10-01 20:30:57.560934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:02.556 [2024-10-01 20:30:57.560985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:33:02.556 [2024-10-01 20:30:57.560995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:02.556 [2024-10-01 20:30:57.561002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:02.556 [2024-10-01 20:30:57.613226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:02.556 [2024-10-01 20:30:57.613272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:33:02.556 [2024-10-01 20:30:57.613280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:02.556 [2024-10-01 20:30:57.613287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:02.556 [2024-10-01 20:30:57.613334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:02.556 [2024-10-01 20:30:57.613341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:33:02.556 [2024-10-01 20:30:57.613352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:02.556 [2024-10-01 20:30:57.613358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:02.556 [2024-10-01 20:30:57.613383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:02.556 [2024-10-01 20:30:57.613390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:33:02.556 [2024-10-01 20:30:57.613396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:02.556 [2024-10-01 20:30:57.613402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:02.556 [2024-10-01 20:30:57.613476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:02.556 [2024-10-01 20:30:57.613484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:33:02.556 [2024-10-01 20:30:57.613491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:02.556 [2024-10-01 20:30:57.613499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:02.556 [2024-10-01 20:30:57.613523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:02.556 [2024-10-01 20:30:57.613530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:33:02.556 [2024-10-01 20:30:57.613537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:02.556 [2024-10-01 20:30:57.613543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:02.556 [2024-10-01 20:30:57.613573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
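Annotation: by this point every 'Action' traced during 'FTL startup' has a matching 'Rollback' record in the shutdown path, each reported at 0.000 ms. If the console output were captured one record per line (ftl0.log is a hypothetical capture file, not something the test produces), the name/duration pairs can be tabulated with a throwaway awk one-liner:

  awk '/trace_step/ && /name:/     { sub(/.*name: /, "");     step = $0 }
       /trace_step/ && /duration:/ { sub(/.*duration: /, ""); print step ": " $0 }' ftl0.log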
00:33:02.556 [2024-10-01 20:30:57.613580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:33:02.556 [2024-10-01 20:30:57.613586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:02.556 [2024-10-01 20:30:57.613594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:02.556 [2024-10-01 20:30:57.613629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:02.556 [2024-10-01 20:30:57.613637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:33:02.556 [2024-10-01 20:30:57.613643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:02.556 [2024-10-01 20:30:57.613649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:02.556 [2024-10-01 20:30:57.613775] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 270.591 ms, result 0 00:33:03.491 00:33:03.491 00:33:03.491 20:30:58 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:33:04.055 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:33:04.055 20:30:59 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:33:04.055 20:30:59 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:33:04.055 20:30:59 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:33:04.055 20:30:59 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:33:04.055 20:30:59 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:33:04.055 20:30:59 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:33:04.055 20:30:59 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 74955 00:33:04.055 20:30:59 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 74955 ']' 00:33:04.055 20:30:59 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 74955 00:33:04.055 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (74955) - No such process 00:33:04.055 Process with pid 74955 is not found 00:33:04.055 20:30:59 ftl.ftl_trim -- common/autotest_common.sh@977 -- # echo 'Process with pid 74955 is not found' 00:33:04.055 ************************************ 00:33:04.055 END TEST ftl_trim 00:33:04.055 ************************************ 00:33:04.055 00:33:04.055 real 0m59.200s 00:33:04.055 user 1m33.843s 00:33:04.055 sys 0m5.641s 00:33:04.055 20:30:59 ftl.ftl_trim -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:04.055 20:30:59 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:33:04.055 20:30:59 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:33:04.055 20:30:59 ftl -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:33:04.055 20:30:59 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:04.055 20:30:59 ftl -- common/autotest_common.sh@10 -- # set +x 00:33:04.055 ************************************ 00:33:04.055 START TEST ftl_restore 00:33:04.056 ************************************ 00:33:04.056 20:30:59 ftl.ftl_restore -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:33:04.313 * Looking for test storage... 
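Annotation: the 'md5sum -c' at trim.sh@106 is the pass/fail gate for the whole trim test, and the files fio_kill removes right after it are the pieces of that round trip. Condensed to its shape (the fio jobs that write and re-read the pattern are elided, and the before/after ordering is inferred from the file names):

  md5sum test/ftl/data > test/ftl/testfile.md5    # record checksum of written data
  # ... FTL shutdown / startup and trim exercise happen in between ...
  md5sum -c test/ftl/testfile.md5                 # expects "test/ftl/data: OK"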
00:33:04.313 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:33:04.313 20:30:59 ftl.ftl_restore -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:33:04.313 20:30:59 ftl.ftl_restore -- common/autotest_common.sh@1681 -- # lcov --version 00:33:04.313 20:30:59 ftl.ftl_restore -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:33:04.313 20:30:59 ftl.ftl_restore -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:33:04.313 20:30:59 ftl.ftl_restore -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:04.313 20:30:59 ftl.ftl_restore -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:04.313 20:30:59 ftl.ftl_restore -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:04.313 20:30:59 ftl.ftl_restore -- scripts/common.sh@336 -- # IFS=.-: 00:33:04.313 20:30:59 ftl.ftl_restore -- scripts/common.sh@336 -- # read -ra ver1 00:33:04.313 20:30:59 ftl.ftl_restore -- scripts/common.sh@337 -- # IFS=.-: 00:33:04.313 20:30:59 ftl.ftl_restore -- scripts/common.sh@337 -- # read -ra ver2 00:33:04.313 20:30:59 ftl.ftl_restore -- scripts/common.sh@338 -- # local 'op=<' 00:33:04.313 20:30:59 ftl.ftl_restore -- scripts/common.sh@340 -- # ver1_l=2 00:33:04.313 20:30:59 ftl.ftl_restore -- scripts/common.sh@341 -- # ver2_l=1 00:33:04.313 20:30:59 ftl.ftl_restore -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:04.313 20:30:59 ftl.ftl_restore -- scripts/common.sh@344 -- # case "$op" in 00:33:04.313 20:30:59 ftl.ftl_restore -- scripts/common.sh@345 -- # : 1 00:33:04.313 20:30:59 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:04.313 20:30:59 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:04.313 20:30:59 ftl.ftl_restore -- scripts/common.sh@365 -- # decimal 1 00:33:04.313 20:30:59 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=1 00:33:04.313 20:30:59 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:04.313 20:30:59 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 1 00:33:04.313 20:30:59 ftl.ftl_restore -- scripts/common.sh@365 -- # ver1[v]=1 00:33:04.313 20:30:59 ftl.ftl_restore -- scripts/common.sh@366 -- # decimal 2 00:33:04.313 20:30:59 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=2 00:33:04.313 20:30:59 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:04.313 20:30:59 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 2 00:33:04.313 20:30:59 ftl.ftl_restore -- scripts/common.sh@366 -- # ver2[v]=2 00:33:04.313 20:30:59 ftl.ftl_restore -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:04.313 20:30:59 ftl.ftl_restore -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:04.313 20:30:59 ftl.ftl_restore -- scripts/common.sh@368 -- # return 0 00:33:04.313 20:30:59 ftl.ftl_restore -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:04.313 20:30:59 ftl.ftl_restore -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:33:04.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:04.313 --rc genhtml_branch_coverage=1 00:33:04.313 --rc genhtml_function_coverage=1 00:33:04.313 --rc genhtml_legend=1 00:33:04.313 --rc geninfo_all_blocks=1 00:33:04.313 --rc geninfo_unexecuted_blocks=1 00:33:04.313 00:33:04.313 ' 00:33:04.313 20:30:59 ftl.ftl_restore -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:33:04.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:04.313 --rc genhtml_branch_coverage=1 00:33:04.313 --rc genhtml_function_coverage=1 
00:33:04.313 --rc genhtml_legend=1 00:33:04.313 --rc geninfo_all_blocks=1 00:33:04.313 --rc geninfo_unexecuted_blocks=1 00:33:04.313 00:33:04.313 ' 00:33:04.313 20:30:59 ftl.ftl_restore -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:33:04.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:04.313 --rc genhtml_branch_coverage=1 00:33:04.313 --rc genhtml_function_coverage=1 00:33:04.313 --rc genhtml_legend=1 00:33:04.313 --rc geninfo_all_blocks=1 00:33:04.314 --rc geninfo_unexecuted_blocks=1 00:33:04.314 00:33:04.314 ' 00:33:04.314 20:30:59 ftl.ftl_restore -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:33:04.314 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:04.314 --rc genhtml_branch_coverage=1 00:33:04.314 --rc genhtml_function_coverage=1 00:33:04.314 --rc genhtml_legend=1 00:33:04.314 --rc geninfo_all_blocks=1 00:33:04.314 --rc geninfo_unexecuted_blocks=1 00:33:04.314 00:33:04.314 ' 00:33:04.314 20:30:59 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:33:04.314 20:30:59 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:33:04.314 20:30:59 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:33:04.314 20:30:59 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:33:04.314 20:30:59 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:33:04.314 20:30:59 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:33:04.314 20:30:59 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:33:04.314 20:30:59 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:33:04.314 20:30:59 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:33:04.314 20:30:59 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:33:04.314 20:30:59 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:33:04.314 20:30:59 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:33:04.314 20:30:59 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:33:04.314 20:30:59 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:33:04.314 20:30:59 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:33:04.314 20:30:59 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:33:04.314 20:30:59 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:33:04.314 20:30:59 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:33:04.314 20:30:59 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:33:04.314 20:30:59 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:33:04.314 20:30:59 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:33:04.314 20:30:59 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:33:04.314 20:30:59 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:33:04.314 20:30:59 ftl.ftl_restore -- ftl/common.sh@22 -- # export 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:33:04.314 20:30:59 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:33:04.314 20:30:59 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:33:04.314 20:30:59 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:33:04.314 20:30:59 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:33:04.314 20:30:59 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:33:04.314 20:30:59 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:33:04.314 20:30:59 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:33:04.314 20:30:59 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.4B1sgTKXBJ 00:33:04.314 20:30:59 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:33:04.314 20:30:59 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:33:04.314 20:30:59 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:33:04.314 20:30:59 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:33:04.314 20:30:59 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:33:04.314 20:30:59 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:33:04.314 20:30:59 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:33:04.314 20:30:59 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:33:04.314 20:30:59 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=75184 00:33:04.314 20:30:59 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 75184 00:33:04.314 20:30:59 ftl.ftl_restore -- common/autotest_common.sh@831 -- # '[' -z 75184 ']' 00:33:04.314 20:30:59 ftl.ftl_restore -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:04.314 20:30:59 ftl.ftl_restore -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:04.314 20:30:59 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:33:04.314 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:04.314 20:30:59 ftl.ftl_restore -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:04.314 20:30:59 ftl.ftl_restore -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:04.314 20:30:59 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:33:04.314 [2024-10-01 20:30:59.492854] Starting SPDK v25.01-pre git sha1 0c2005fb5 / DPDK 24.03.0 initialization... 
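Annotation: before the target comes up, the xtrace above shows restore.sh mapping its command line: -c 0000:00:10.0 selects the NV-cache controller, and the remaining positional argument is the base device. Paraphrasing just those traced lines (the -u and -f branches are elided here):

  while getopts ':u:c:f' opt; do
    case $opt in
      c) nv_cache=$OPTARG ;;   # -c 0000:00:10.0
    esac
  done
  shift 2                      # drop '-c <bdf>', leaving the base device
  device=$1                    # 0000:00:11.0
  timeout=240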
00:33:04.314 [2024-10-01 20:30:59.492977] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75184 ] 00:33:04.572 [2024-10-01 20:30:59.635144] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:04.830 [2024-10-01 20:30:59.830288] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:33:05.765 20:31:00 ftl.ftl_restore -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:05.765 20:31:00 ftl.ftl_restore -- common/autotest_common.sh@864 -- # return 0 00:33:05.765 20:31:00 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:33:05.765 20:31:00 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:33:05.765 20:31:00 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:33:05.765 20:31:00 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:33:05.765 20:31:00 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:33:05.765 20:31:00 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:33:05.765 20:31:00 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:33:05.765 20:31:00 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:33:05.765 20:31:00 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:33:05.765 20:31:00 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:33:05.765 20:31:00 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:33:05.765 20:31:00 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:33:05.765 20:31:00 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:33:05.765 20:31:00 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:33:06.022 20:31:01 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:33:06.022 { 00:33:06.022 "name": "nvme0n1", 00:33:06.022 "aliases": [ 00:33:06.022 "df0fbdee-bbcb-4331-ae8c-93c7a5cbcdbc" 00:33:06.022 ], 00:33:06.022 "product_name": "NVMe disk", 00:33:06.022 "block_size": 4096, 00:33:06.022 "num_blocks": 1310720, 00:33:06.022 "uuid": "df0fbdee-bbcb-4331-ae8c-93c7a5cbcdbc", 00:33:06.022 "numa_id": -1, 00:33:06.022 "assigned_rate_limits": { 00:33:06.022 "rw_ios_per_sec": 0, 00:33:06.022 "rw_mbytes_per_sec": 0, 00:33:06.022 "r_mbytes_per_sec": 0, 00:33:06.022 "w_mbytes_per_sec": 0 00:33:06.022 }, 00:33:06.022 "claimed": true, 00:33:06.022 "claim_type": "read_many_write_one", 00:33:06.022 "zoned": false, 00:33:06.022 "supported_io_types": { 00:33:06.022 "read": true, 00:33:06.022 "write": true, 00:33:06.022 "unmap": true, 00:33:06.022 "flush": true, 00:33:06.022 "reset": true, 00:33:06.022 "nvme_admin": true, 00:33:06.022 "nvme_io": true, 00:33:06.022 "nvme_io_md": false, 00:33:06.022 "write_zeroes": true, 00:33:06.022 "zcopy": false, 00:33:06.022 "get_zone_info": false, 00:33:06.022 "zone_management": false, 00:33:06.022 "zone_append": false, 00:33:06.022 "compare": true, 00:33:06.022 "compare_and_write": false, 00:33:06.022 "abort": true, 00:33:06.022 "seek_hole": false, 00:33:06.022 "seek_data": false, 00:33:06.022 "copy": true, 00:33:06.022 "nvme_iov_md": false 00:33:06.022 }, 00:33:06.022 "driver_specific": { 00:33:06.022 "nvme": [ 
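Annotation: the get_bdev_size call traced below resolves to one bdev_get_bdevs RPC whose JSON is saved into bdev_info and then filtered with jq, and the resulting bdev_size=5120 implies size in MiB = block_size * num_blocks / 2^20. Re-deriving it by hand (rpc.py abbreviates scripts/rpc.py; unlike the helper, this sketch queries twice instead of filtering the saved JSON):

  bs=$(rpc.py bdev_get_bdevs -b nvme0n1 | jq '.[] .block_size')   # 4096
  nb=$(rpc.py bdev_get_bdevs -b nvme0n1 | jq '.[] .num_blocks')   # 1310720
  echo $(( bs * nb / 1024 / 1024 ))                               # 5120 (MiB)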
00:33:06.022 { 00:33:06.022 "pci_address": "0000:00:11.0", 00:33:06.022 "trid": { 00:33:06.022 "trtype": "PCIe", 00:33:06.022 "traddr": "0000:00:11.0" 00:33:06.022 }, 00:33:06.022 "ctrlr_data": { 00:33:06.022 "cntlid": 0, 00:33:06.022 "vendor_id": "0x1b36", 00:33:06.022 "model_number": "QEMU NVMe Ctrl", 00:33:06.022 "serial_number": "12341", 00:33:06.022 "firmware_revision": "8.0.0", 00:33:06.022 "subnqn": "nqn.2019-08.org.qemu:12341", 00:33:06.022 "oacs": { 00:33:06.022 "security": 0, 00:33:06.022 "format": 1, 00:33:06.022 "firmware": 0, 00:33:06.022 "ns_manage": 1 00:33:06.022 }, 00:33:06.022 "multi_ctrlr": false, 00:33:06.022 "ana_reporting": false 00:33:06.022 }, 00:33:06.022 "vs": { 00:33:06.022 "nvme_version": "1.4" 00:33:06.022 }, 00:33:06.022 "ns_data": { 00:33:06.022 "id": 1, 00:33:06.022 "can_share": false 00:33:06.022 } 00:33:06.022 } 00:33:06.022 ], 00:33:06.022 "mp_policy": "active_passive" 00:33:06.022 } 00:33:06.022 } 00:33:06.022 ]' 00:33:06.022 20:31:01 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:33:06.022 20:31:01 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:33:06.023 20:31:01 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:33:06.023 20:31:01 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=1310720 00:33:06.023 20:31:01 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:33:06.023 20:31:01 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 5120 00:33:06.023 20:31:01 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:33:06.023 20:31:01 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:33:06.023 20:31:01 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:33:06.023 20:31:01 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:33:06.023 20:31:01 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:33:06.279 20:31:01 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=23cec6d0-098c-4ef7-a292-af48d46ea47a 00:33:06.279 20:31:01 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:33:06.279 20:31:01 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 23cec6d0-098c-4ef7-a292-af48d46ea47a 00:33:06.537 20:31:01 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:33:06.795 20:31:01 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=ac5cf75c-ce54-470b-b047-701a850752e2 00:33:06.795 20:31:01 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u ac5cf75c-ce54-470b-b047-701a850752e2 00:33:07.053 20:31:02 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=ca030ff5-6c43-44b3-95c3-bea746296f48 00:33:07.053 20:31:02 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:33:07.053 20:31:02 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 ca030ff5-6c43-44b3-95c3-bea746296f48 00:33:07.053 20:31:02 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:33:07.053 20:31:02 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:33:07.053 20:31:02 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=ca030ff5-6c43-44b3-95c3-bea746296f48 00:33:07.053 20:31:02 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:33:07.053 20:31:02 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size 
ca030ff5-6c43-44b3-95c3-bea746296f48 00:33:07.053 20:31:02 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=ca030ff5-6c43-44b3-95c3-bea746296f48 00:33:07.053 20:31:02 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:33:07.053 20:31:02 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:33:07.053 20:31:02 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:33:07.053 20:31:02 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ca030ff5-6c43-44b3-95c3-bea746296f48 00:33:07.311 20:31:02 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:33:07.311 { 00:33:07.311 "name": "ca030ff5-6c43-44b3-95c3-bea746296f48", 00:33:07.311 "aliases": [ 00:33:07.312 "lvs/nvme0n1p0" 00:33:07.312 ], 00:33:07.312 "product_name": "Logical Volume", 00:33:07.312 "block_size": 4096, 00:33:07.312 "num_blocks": 26476544, 00:33:07.312 "uuid": "ca030ff5-6c43-44b3-95c3-bea746296f48", 00:33:07.312 "assigned_rate_limits": { 00:33:07.312 "rw_ios_per_sec": 0, 00:33:07.312 "rw_mbytes_per_sec": 0, 00:33:07.312 "r_mbytes_per_sec": 0, 00:33:07.312 "w_mbytes_per_sec": 0 00:33:07.312 }, 00:33:07.312 "claimed": false, 00:33:07.312 "zoned": false, 00:33:07.312 "supported_io_types": { 00:33:07.312 "read": true, 00:33:07.312 "write": true, 00:33:07.312 "unmap": true, 00:33:07.312 "flush": false, 00:33:07.312 "reset": true, 00:33:07.312 "nvme_admin": false, 00:33:07.312 "nvme_io": false, 00:33:07.312 "nvme_io_md": false, 00:33:07.312 "write_zeroes": true, 00:33:07.312 "zcopy": false, 00:33:07.312 "get_zone_info": false, 00:33:07.312 "zone_management": false, 00:33:07.312 "zone_append": false, 00:33:07.312 "compare": false, 00:33:07.312 "compare_and_write": false, 00:33:07.312 "abort": false, 00:33:07.312 "seek_hole": true, 00:33:07.312 "seek_data": true, 00:33:07.312 "copy": false, 00:33:07.312 "nvme_iov_md": false 00:33:07.312 }, 00:33:07.312 "driver_specific": { 00:33:07.312 "lvol": { 00:33:07.312 "lvol_store_uuid": "ac5cf75c-ce54-470b-b047-701a850752e2", 00:33:07.312 "base_bdev": "nvme0n1", 00:33:07.312 "thin_provision": true, 00:33:07.312 "num_allocated_clusters": 0, 00:33:07.312 "snapshot": false, 00:33:07.312 "clone": false, 00:33:07.312 "esnap_clone": false 00:33:07.312 } 00:33:07.312 } 00:33:07.312 } 00:33:07.312 ]' 00:33:07.312 20:31:02 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:33:07.312 20:31:02 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:33:07.312 20:31:02 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:33:07.312 20:31:02 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=26476544 00:33:07.312 20:31:02 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:33:07.312 20:31:02 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 103424 00:33:07.312 20:31:02 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:33:07.312 20:31:02 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:33:07.312 20:31:02 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:33:07.569 20:31:02 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:33:07.569 20:31:02 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:33:07.569 20:31:02 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size ca030ff5-6c43-44b3-95c3-bea746296f48 00:33:07.569 20:31:02 
ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=ca030ff5-6c43-44b3-95c3-bea746296f48 00:33:07.569 20:31:02 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:33:07.569 20:31:02 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:33:07.569 20:31:02 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:33:07.569 20:31:02 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ca030ff5-6c43-44b3-95c3-bea746296f48 00:33:07.827 20:31:02 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:33:07.827 { 00:33:07.827 "name": "ca030ff5-6c43-44b3-95c3-bea746296f48", 00:33:07.827 "aliases": [ 00:33:07.827 "lvs/nvme0n1p0" 00:33:07.827 ], 00:33:07.827 "product_name": "Logical Volume", 00:33:07.827 "block_size": 4096, 00:33:07.827 "num_blocks": 26476544, 00:33:07.827 "uuid": "ca030ff5-6c43-44b3-95c3-bea746296f48", 00:33:07.827 "assigned_rate_limits": { 00:33:07.827 "rw_ios_per_sec": 0, 00:33:07.827 "rw_mbytes_per_sec": 0, 00:33:07.827 "r_mbytes_per_sec": 0, 00:33:07.827 "w_mbytes_per_sec": 0 00:33:07.827 }, 00:33:07.827 "claimed": false, 00:33:07.827 "zoned": false, 00:33:07.827 "supported_io_types": { 00:33:07.827 "read": true, 00:33:07.827 "write": true, 00:33:07.827 "unmap": true, 00:33:07.827 "flush": false, 00:33:07.827 "reset": true, 00:33:07.827 "nvme_admin": false, 00:33:07.827 "nvme_io": false, 00:33:07.827 "nvme_io_md": false, 00:33:07.827 "write_zeroes": true, 00:33:07.827 "zcopy": false, 00:33:07.827 "get_zone_info": false, 00:33:07.827 "zone_management": false, 00:33:07.827 "zone_append": false, 00:33:07.827 "compare": false, 00:33:07.827 "compare_and_write": false, 00:33:07.827 "abort": false, 00:33:07.827 "seek_hole": true, 00:33:07.827 "seek_data": true, 00:33:07.827 "copy": false, 00:33:07.827 "nvme_iov_md": false 00:33:07.827 }, 00:33:07.827 "driver_specific": { 00:33:07.827 "lvol": { 00:33:07.827 "lvol_store_uuid": "ac5cf75c-ce54-470b-b047-701a850752e2", 00:33:07.827 "base_bdev": "nvme0n1", 00:33:07.827 "thin_provision": true, 00:33:07.827 "num_allocated_clusters": 0, 00:33:07.827 "snapshot": false, 00:33:07.827 "clone": false, 00:33:07.827 "esnap_clone": false 00:33:07.827 } 00:33:07.827 } 00:33:07.827 } 00:33:07.827 ]' 00:33:07.827 20:31:02 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:33:07.827 20:31:02 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:33:07.827 20:31:02 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:33:07.827 20:31:02 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=26476544 00:33:07.827 20:31:02 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:33:07.827 20:31:02 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 103424 00:33:07.827 20:31:02 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:33:07.827 20:31:02 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:33:08.086 20:31:03 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:33:08.086 20:31:03 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size ca030ff5-6c43-44b3-95c3-bea746296f48 00:33:08.086 20:31:03 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=ca030ff5-6c43-44b3-95c3-bea746296f48 00:33:08.086 20:31:03 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:33:08.086 20:31:03 ftl.ftl_restore -- 
common/autotest_common.sh@1380 -- # local bs 00:33:08.086 20:31:03 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:33:08.086 20:31:03 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ca030ff5-6c43-44b3-95c3-bea746296f48 00:33:08.344 20:31:03 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:33:08.344 { 00:33:08.344 "name": "ca030ff5-6c43-44b3-95c3-bea746296f48", 00:33:08.344 "aliases": [ 00:33:08.344 "lvs/nvme0n1p0" 00:33:08.344 ], 00:33:08.344 "product_name": "Logical Volume", 00:33:08.344 "block_size": 4096, 00:33:08.344 "num_blocks": 26476544, 00:33:08.344 "uuid": "ca030ff5-6c43-44b3-95c3-bea746296f48", 00:33:08.344 "assigned_rate_limits": { 00:33:08.344 "rw_ios_per_sec": 0, 00:33:08.344 "rw_mbytes_per_sec": 0, 00:33:08.344 "r_mbytes_per_sec": 0, 00:33:08.344 "w_mbytes_per_sec": 0 00:33:08.344 }, 00:33:08.344 "claimed": false, 00:33:08.344 "zoned": false, 00:33:08.344 "supported_io_types": { 00:33:08.344 "read": true, 00:33:08.344 "write": true, 00:33:08.344 "unmap": true, 00:33:08.344 "flush": false, 00:33:08.344 "reset": true, 00:33:08.344 "nvme_admin": false, 00:33:08.344 "nvme_io": false, 00:33:08.344 "nvme_io_md": false, 00:33:08.344 "write_zeroes": true, 00:33:08.344 "zcopy": false, 00:33:08.344 "get_zone_info": false, 00:33:08.344 "zone_management": false, 00:33:08.344 "zone_append": false, 00:33:08.344 "compare": false, 00:33:08.344 "compare_and_write": false, 00:33:08.344 "abort": false, 00:33:08.344 "seek_hole": true, 00:33:08.344 "seek_data": true, 00:33:08.344 "copy": false, 00:33:08.344 "nvme_iov_md": false 00:33:08.344 }, 00:33:08.344 "driver_specific": { 00:33:08.344 "lvol": { 00:33:08.344 "lvol_store_uuid": "ac5cf75c-ce54-470b-b047-701a850752e2", 00:33:08.344 "base_bdev": "nvme0n1", 00:33:08.344 "thin_provision": true, 00:33:08.344 "num_allocated_clusters": 0, 00:33:08.344 "snapshot": false, 00:33:08.344 "clone": false, 00:33:08.344 "esnap_clone": false 00:33:08.344 } 00:33:08.344 } 00:33:08.344 } 00:33:08.344 ]' 00:33:08.344 20:31:03 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:33:08.344 20:31:03 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:33:08.344 20:31:03 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:33:08.344 20:31:03 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=26476544 00:33:08.344 20:31:03 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:33:08.344 20:31:03 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 103424 00:33:08.344 20:31:03 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:33:08.345 20:31:03 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d ca030ff5-6c43-44b3-95c3-bea746296f48 --l2p_dram_limit 10' 00:33:08.345 20:31:03 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:33:08.345 20:31:03 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:33:08.345 20:31:03 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:33:08.345 20:31:03 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:33:08.345 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:33:08.345 20:31:03 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d ca030ff5-6c43-44b3-95c3-bea746296f48 --l2p_dram_limit 10 -c nvc0n1p0 00:33:08.604 
[2024-10-01 20:31:03.729660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:08.604 [2024-10-01 20:31:03.729726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:33:08.604 [2024-10-01 20:31:03.729741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:33:08.604 [2024-10-01 20:31:03.729748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:08.604 [2024-10-01 20:31:03.729797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:08.604 [2024-10-01 20:31:03.729805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:33:08.604 [2024-10-01 20:31:03.729813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:33:08.604 [2024-10-01 20:31:03.729819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:08.604 [2024-10-01 20:31:03.729844] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:33:08.604 [2024-10-01 20:31:03.731894] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:33:08.604 [2024-10-01 20:31:03.731930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:08.604 [2024-10-01 20:31:03.731940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:33:08.604 [2024-10-01 20:31:03.731951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.093 ms 00:33:08.604 [2024-10-01 20:31:03.731957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:08.604 [2024-10-01 20:31:03.732022] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 73992bbb-6107-4f6f-8507-ce356e0255e3 00:33:08.604 [2024-10-01 20:31:03.733123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:08.604 [2024-10-01 20:31:03.733158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:33:08.604 [2024-10-01 20:31:03.733167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:33:08.604 [2024-10-01 20:31:03.733177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:08.604 [2024-10-01 20:31:03.738441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:08.604 [2024-10-01 20:31:03.738475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:33:08.604 [2024-10-01 20:31:03.738483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.228 ms 00:33:08.604 [2024-10-01 20:31:03.738490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:08.604 [2024-10-01 20:31:03.738566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:08.604 [2024-10-01 20:31:03.738576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:33:08.604 [2024-10-01 20:31:03.738585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:33:08.604 [2024-10-01 20:31:03.738595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:08.604 [2024-10-01 20:31:03.738642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:08.604 [2024-10-01 20:31:03.738651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:33:08.604 [2024-10-01 20:31:03.738658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:33:08.604 [2024-10-01 20:31:03.738665] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:08.604 [2024-10-01 20:31:03.738683] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:33:08.604 [2024-10-01 20:31:03.741769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:08.604 [2024-10-01 20:31:03.741798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:33:08.604 [2024-10-01 20:31:03.741808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.089 ms 00:33:08.604 [2024-10-01 20:31:03.741814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:08.604 [2024-10-01 20:31:03.741844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:08.604 [2024-10-01 20:31:03.741851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:33:08.604 [2024-10-01 20:31:03.741861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:33:08.604 [2024-10-01 20:31:03.741867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:08.604 [2024-10-01 20:31:03.741892] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:33:08.604 [2024-10-01 20:31:03.742000] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:33:08.604 [2024-10-01 20:31:03.742012] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:33:08.604 [2024-10-01 20:31:03.742023] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:33:08.604 [2024-10-01 20:31:03.742032] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:33:08.604 [2024-10-01 20:31:03.742039] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:33:08.604 [2024-10-01 20:31:03.742046] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:33:08.604 [2024-10-01 20:31:03.742052] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:33:08.604 [2024-10-01 20:31:03.742059] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:33:08.604 [2024-10-01 20:31:03.742065] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:33:08.604 [2024-10-01 20:31:03.742072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:08.604 [2024-10-01 20:31:03.742084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:33:08.604 [2024-10-01 20:31:03.742092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.182 ms 00:33:08.604 [2024-10-01 20:31:03.742100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:08.604 [2024-10-01 20:31:03.742168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:08.604 [2024-10-01 20:31:03.742174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:33:08.604 [2024-10-01 20:31:03.742182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:33:08.604 [2024-10-01 20:31:03.742187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:08.604 [2024-10-01 20:31:03.742266] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:33:08.604 [2024-10-01 20:31:03.742273] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region sb 00:33:08.604 [2024-10-01 20:31:03.742280] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:33:08.604 [2024-10-01 20:31:03.742287] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:08.604 [2024-10-01 20:31:03.742295] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:33:08.604 [2024-10-01 20:31:03.742300] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:33:08.604 [2024-10-01 20:31:03.742307] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:33:08.604 [2024-10-01 20:31:03.742312] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:33:08.604 [2024-10-01 20:31:03.742319] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:33:08.605 [2024-10-01 20:31:03.742324] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:33:08.605 [2024-10-01 20:31:03.742331] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:33:08.605 [2024-10-01 20:31:03.742336] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:33:08.605 [2024-10-01 20:31:03.742343] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:33:08.605 [2024-10-01 20:31:03.742348] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:33:08.605 [2024-10-01 20:31:03.742354] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:33:08.605 [2024-10-01 20:31:03.742359] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:08.605 [2024-10-01 20:31:03.742367] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:33:08.605 [2024-10-01 20:31:03.742373] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:33:08.605 [2024-10-01 20:31:03.742379] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:08.605 [2024-10-01 20:31:03.742384] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:33:08.605 [2024-10-01 20:31:03.742392] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:33:08.605 [2024-10-01 20:31:03.742397] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:08.605 [2024-10-01 20:31:03.742404] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:33:08.605 [2024-10-01 20:31:03.742410] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:33:08.605 [2024-10-01 20:31:03.742417] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:08.605 [2024-10-01 20:31:03.742422] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:33:08.605 [2024-10-01 20:31:03.742429] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:33:08.605 [2024-10-01 20:31:03.742434] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:08.605 [2024-10-01 20:31:03.742441] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:33:08.605 [2024-10-01 20:31:03.742446] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:33:08.605 [2024-10-01 20:31:03.742453] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:08.605 [2024-10-01 20:31:03.742458] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:33:08.605 [2024-10-01 20:31:03.742466] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:33:08.605 [2024-10-01 20:31:03.742471] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:33:08.605 [2024-10-01 20:31:03.742478] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:33:08.605 [2024-10-01 20:31:03.742483] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:33:08.605 [2024-10-01 20:31:03.742489] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:33:08.605 [2024-10-01 20:31:03.742495] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:33:08.605 [2024-10-01 20:31:03.742501] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:33:08.605 [2024-10-01 20:31:03.742506] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:08.605 [2024-10-01 20:31:03.742512] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:33:08.605 [2024-10-01 20:31:03.742517] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:33:08.605 [2024-10-01 20:31:03.742524] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:08.605 [2024-10-01 20:31:03.742528] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:33:08.605 [2024-10-01 20:31:03.742538] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:33:08.605 [2024-10-01 20:31:03.742543] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:33:08.605 [2024-10-01 20:31:03.742551] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:08.605 [2024-10-01 20:31:03.742557] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:33:08.605 [2024-10-01 20:31:03.742565] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:33:08.605 [2024-10-01 20:31:03.742570] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:33:08.605 [2024-10-01 20:31:03.742576] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:33:08.605 [2024-10-01 20:31:03.742581] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:33:08.605 [2024-10-01 20:31:03.742588] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:33:08.605 [2024-10-01 20:31:03.742595] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:33:08.605 [2024-10-01 20:31:03.742604] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:08.605 [2024-10-01 20:31:03.742611] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:33:08.605 [2024-10-01 20:31:03.742619] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:33:08.605 [2024-10-01 20:31:03.742624] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:33:08.605 [2024-10-01 20:31:03.742631] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:33:08.605 [2024-10-01 20:31:03.742637] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:33:08.605 [2024-10-01 20:31:03.742644] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 
blk_offs:0x6120 blk_sz:0x800 00:33:08.605 [2024-10-01 20:31:03.742649] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:33:08.605 [2024-10-01 20:31:03.742656] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:33:08.605 [2024-10-01 20:31:03.742662] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:33:08.605 [2024-10-01 20:31:03.742670] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:33:08.605 [2024-10-01 20:31:03.742675] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:33:08.605 [2024-10-01 20:31:03.742682] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:33:08.605 [2024-10-01 20:31:03.742687] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:33:08.605 [2024-10-01 20:31:03.742706] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:33:08.605 [2024-10-01 20:31:03.742712] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:33:08.605 [2024-10-01 20:31:03.742721] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:08.605 [2024-10-01 20:31:03.742728] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:33:08.605 [2024-10-01 20:31:03.742735] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:33:08.605 [2024-10-01 20:31:03.742741] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:33:08.605 [2024-10-01 20:31:03.742748] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:33:08.605 [2024-10-01 20:31:03.742754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:08.605 [2024-10-01 20:31:03.742761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:33:08.605 [2024-10-01 20:31:03.742767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.543 ms 00:33:08.605 [2024-10-01 20:31:03.742776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:08.605 [2024-10-01 20:31:03.742824] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
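
Everything from the "Create new FTL" notice down to the scrub warning above is a single bdev_ftl_create call assembling ftl0 from the two bdevs prepared earlier: the thin-provisioned lvol ca030ff5-6c43-44b3-95c3-bea746296f48 on nvme0n1 (0000:00:11.0) as the base device, and nvc0n1p0, the 5171 MiB split carved from the cache controller's namespace (0000:00:10.0), as the NV cache write buffer. Condensed from the xtrace, the RPC sequence the script issued is:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0    # cache controller -> nvc0n1
  $rpc bdev_split_create nvc0n1 -s 5171 1                             # carve nvc0n1p0 for the NV cache
  $rpc -t 240 bdev_ftl_create -b ftl0 -d ca030ff5-6c43-44b3-95c3-bea746296f48 \
          --l2p_dram_limit 10 -c nvc0n1p0                             # long -t covers the NV cache scrub

The create is slow mainly because the NV cache data region is scrubbed chunk by chunk before first use (1926 ms for the 5 chunks in the trace below), which is why the script passes the long RPC timeout.
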
00:33:08.605 [2024-10-01 20:31:03.742835] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:33:10.505 [2024-10-01 20:31:05.669139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:10.505 [2024-10-01 20:31:05.669203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:33:10.505 [2024-10-01 20:31:05.669218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1926.320 ms 00:33:10.505 [2024-10-01 20:31:05.669229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:10.505 [2024-10-01 20:31:05.694859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:10.505 [2024-10-01 20:31:05.694913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:33:10.505 [2024-10-01 20:31:05.694928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.420 ms 00:33:10.505 [2024-10-01 20:31:05.694937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:10.505 [2024-10-01 20:31:05.695071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:10.505 [2024-10-01 20:31:05.695083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:33:10.505 [2024-10-01 20:31:05.695095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:33:10.505 [2024-10-01 20:31:05.695106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:10.763 [2024-10-01 20:31:05.726147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:10.763 [2024-10-01 20:31:05.726196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:33:10.763 [2024-10-01 20:31:05.726207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.989 ms 00:33:10.763 [2024-10-01 20:31:05.726218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:10.763 [2024-10-01 20:31:05.726256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:10.763 [2024-10-01 20:31:05.726266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:33:10.763 [2024-10-01 20:31:05.726274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:33:10.763 [2024-10-01 20:31:05.726289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:10.763 [2024-10-01 20:31:05.726747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:10.763 [2024-10-01 20:31:05.726777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:33:10.763 [2024-10-01 20:31:05.726787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.403 ms 00:33:10.763 [2024-10-01 20:31:05.726797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:10.763 [2024-10-01 20:31:05.726912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:10.763 [2024-10-01 20:31:05.726923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:33:10.763 [2024-10-01 20:31:05.726930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:33:10.763 [2024-10-01 20:31:05.726941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:10.763 [2024-10-01 20:31:05.741018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:10.763 [2024-10-01 20:31:05.741055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:33:10.763 [2024-10-01 
20:31:05.741066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.059 ms 00:33:10.763 [2024-10-01 20:31:05.741076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:10.763 [2024-10-01 20:31:05.752630] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:33:10.763 [2024-10-01 20:31:05.755646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:10.763 [2024-10-01 20:31:05.755679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:33:10.763 [2024-10-01 20:31:05.755703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.484 ms 00:33:10.763 [2024-10-01 20:31:05.755712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:10.763 [2024-10-01 20:31:05.814401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:10.763 [2024-10-01 20:31:05.814458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:33:10.763 [2024-10-01 20:31:05.814471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 58.652 ms 00:33:10.763 [2024-10-01 20:31:05.814479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:10.763 [2024-10-01 20:31:05.814666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:10.764 [2024-10-01 20:31:05.814677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:33:10.764 [2024-10-01 20:31:05.814699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.140 ms 00:33:10.764 [2024-10-01 20:31:05.814708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:10.764 [2024-10-01 20:31:05.838113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:10.764 [2024-10-01 20:31:05.838160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:33:10.764 [2024-10-01 20:31:05.838176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.354 ms 00:33:10.764 [2024-10-01 20:31:05.838185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:10.764 [2024-10-01 20:31:05.861357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:10.764 [2024-10-01 20:31:05.861401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:33:10.764 [2024-10-01 20:31:05.861415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.129 ms 00:33:10.764 [2024-10-01 20:31:05.861423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:10.764 [2024-10-01 20:31:05.862017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:10.764 [2024-10-01 20:31:05.862040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:33:10.764 [2024-10-01 20:31:05.862051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.543 ms 00:33:10.764 [2024-10-01 20:31:05.862059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:10.764 [2024-10-01 20:31:05.929472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:10.764 [2024-10-01 20:31:05.929523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:33:10.764 [2024-10-01 20:31:05.929544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 67.366 ms 00:33:10.764 [2024-10-01 20:31:05.929552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:10.764 [2024-10-01 
20:31:05.954329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:10.764 [2024-10-01 20:31:05.954376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:33:10.764 [2024-10-01 20:31:05.954390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.688 ms 00:33:10.764 [2024-10-01 20:31:05.954398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.023 [2024-10-01 20:31:05.978319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:11.023 [2024-10-01 20:31:05.978363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:33:11.023 [2024-10-01 20:31:05.978377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.874 ms 00:33:11.023 [2024-10-01 20:31:05.978385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.023 [2024-10-01 20:31:06.001359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:11.023 [2024-10-01 20:31:06.001397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:33:11.023 [2024-10-01 20:31:06.001410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.932 ms 00:33:11.023 [2024-10-01 20:31:06.001418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.023 [2024-10-01 20:31:06.001460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:11.023 [2024-10-01 20:31:06.001471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:33:11.023 [2024-10-01 20:31:06.001484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:33:11.023 [2024-10-01 20:31:06.001491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.023 [2024-10-01 20:31:06.001579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:11.023 [2024-10-01 20:31:06.001590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:33:11.023 [2024-10-01 20:31:06.001599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:33:11.023 [2024-10-01 20:31:06.001606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.023 [2024-10-01 20:31:06.002660] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2272.587 ms, result 0 00:33:11.023 { 00:33:11.023 "name": "ftl0", 00:33:11.023 "uuid": "73992bbb-6107-4f6f-8507-ce356e0255e3" 00:33:11.023 } 00:33:11.023 20:31:06 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:33:11.023 20:31:06 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:33:11.023 20:31:06 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:33:11.023 20:31:06 ftl.ftl_restore -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:33:11.282 [2024-10-01 20:31:06.414079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:11.282 [2024-10-01 20:31:06.414129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:33:11.282 [2024-10-01 20:31:06.414140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:33:11.282 [2024-10-01 20:31:06.414150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.282 [2024-10-01 20:31:06.414173] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 
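
The startup above finished in 2272.587 ms and returned the new device as JSON (name ftl0, UUID 73992bbb-6107-4f6f-8507-ce356e0255e3); restore.sh then wraps a save_subsystem_config dump in a {"subsystems": [...]} envelope and calls bdev_ftl_unload, whose trace continues below: each "Persist ..." step is the unload flushing the L2P, NV cache metadata, valid map, and band and trim state so the device can be restored later. A sketch of that save-and-unload step; the commands are the ones in the xtrace, but the file the JSON envelope is redirected to is not visible in this part of the log, so the destination here is hypothetical:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  {
          echo '{"subsystems": ['
          $rpc save_subsystem_config -n bdev    # bdev subsystem config, including the ftl0 definition
          echo ']}'
  } > "$mount_dir"/ftl.json                     # hypothetical path; restore.sh chooses its own
  $rpc bdev_ftl_unload -b ftl0                  # persists FTL metadata, then detaches ftl0
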
00:33:11.282 [2024-10-01 20:31:06.416743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:11.282 [2024-10-01 20:31:06.416778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:33:11.282 [2024-10-01 20:31:06.416798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.554 ms 00:33:11.282 [2024-10-01 20:31:06.416806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.282 [2024-10-01 20:31:06.417066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:11.282 [2024-10-01 20:31:06.417085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:33:11.282 [2024-10-01 20:31:06.417097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.235 ms 00:33:11.282 [2024-10-01 20:31:06.417104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.282 [2024-10-01 20:31:06.420334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:11.282 [2024-10-01 20:31:06.420356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:33:11.282 [2024-10-01 20:31:06.420370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.212 ms 00:33:11.282 [2024-10-01 20:31:06.420378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.282 [2024-10-01 20:31:06.426678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:11.282 [2024-10-01 20:31:06.426714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:33:11.282 [2024-10-01 20:31:06.426727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.277 ms 00:33:11.282 [2024-10-01 20:31:06.426736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.282 [2024-10-01 20:31:06.450387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:11.282 [2024-10-01 20:31:06.450423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:33:11.282 [2024-10-01 20:31:06.450436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.580 ms 00:33:11.282 [2024-10-01 20:31:06.450444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.282 [2024-10-01 20:31:06.464810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:11.282 [2024-10-01 20:31:06.464859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:33:11.282 [2024-10-01 20:31:06.464874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.320 ms 00:33:11.282 [2024-10-01 20:31:06.464881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.282 [2024-10-01 20:31:06.465053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:11.282 [2024-10-01 20:31:06.465064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:33:11.282 [2024-10-01 20:31:06.465075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.120 ms 00:33:11.282 [2024-10-01 20:31:06.465082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.282 [2024-10-01 20:31:06.488449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:11.282 [2024-10-01 20:31:06.488500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:33:11.282 [2024-10-01 20:31:06.488514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.341 ms 00:33:11.282 [2024-10-01 20:31:06.488522] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.542 [2024-10-01 20:31:06.510881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:11.542 [2024-10-01 20:31:06.510927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:33:11.542 [2024-10-01 20:31:06.510941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.304 ms 00:33:11.542 [2024-10-01 20:31:06.510948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.542 [2024-10-01 20:31:06.534250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:11.542 [2024-10-01 20:31:06.534312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:33:11.542 [2024-10-01 20:31:06.534325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.251 ms 00:33:11.542 [2024-10-01 20:31:06.534333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.542 [2024-10-01 20:31:06.556616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:11.542 [2024-10-01 20:31:06.556665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:33:11.542 [2024-10-01 20:31:06.556679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.189 ms 00:33:11.542 [2024-10-01 20:31:06.556686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.542 [2024-10-01 20:31:06.556739] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:33:11.542 [2024-10-01 20:31:06.556753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:33:11.542 [2024-10-01 20:31:06.556769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:33:11.542 [2024-10-01 20:31:06.556777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:33:11.542 [2024-10-01 20:31:06.556786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:33:11.542 [2024-10-01 20:31:06.556794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:33:11.542 [2024-10-01 20:31:06.556803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:33:11.542 [2024-10-01 20:31:06.556811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:33:11.542 [2024-10-01 20:31:06.556823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:33:11.542 [2024-10-01 20:31:06.556831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:33:11.542 [2024-10-01 20:31:06.556840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:33:11.542 [2024-10-01 20:31:06.556847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:33:11.542 [2024-10-01 20:31:06.556856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:33:11.542 [2024-10-01 20:31:06.556864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:33:11.542 [2024-10-01 20:31:06.556874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:33:11.542 [2024-10-01 
20:31:06.556881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:33:11.542 [2024-10-01 20:31:06.556890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:33:11.542 [2024-10-01 20:31:06.556897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:33:11.542 [2024-10-01 20:31:06.556909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:33:11.542 [2024-10-01 20:31:06.556916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:33:11.542 [2024-10-01 20:31:06.556925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:33:11.542 [2024-10-01 20:31:06.556932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:33:11.542 [2024-10-01 20:31:06.556941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:33:11.542 [2024-10-01 20:31:06.556948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:33:11.542 [2024-10-01 20:31:06.556958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:33:11.542 [2024-10-01 20:31:06.556965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:33:11.542 [2024-10-01 20:31:06.556974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:33:11.542 [2024-10-01 20:31:06.556981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:33:11.542 [2024-10-01 20:31:06.556991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:33:11.542 [2024-10-01 20:31:06.556998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:33:11.542 [2024-10-01 20:31:06.557006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:33:11.542 [2024-10-01 20:31:06.557013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:33:11.542 [2024-10-01 20:31:06.557022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:33:11.542 [2024-10-01 20:31:06.557029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:33:11.542 [2024-10-01 20:31:06.557039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:33:11.542 [2024-10-01 20:31:06.557047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:33:11.542 [2024-10-01 20:31:06.557056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:33:11.542 [2024-10-01 20:31:06.557063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:33:11.542 [2024-10-01 20:31:06.557073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:33:11.542 [2024-10-01 20:31:06.557080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 
00:33:11.542 [2024-10-01 20:31:06.557091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:33:11.542 [2024-10-01 20:31:06.557107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:33:11.542 [2024-10-01 20:31:06.557116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:33:11.542 [2024-10-01 20:31:06.557123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:33:11.542 [2024-10-01 20:31:06.557134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:33:11.542 [2024-10-01 20:31:06.557141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:33:11.542 [2024-10-01 20:31:06.557151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:33:11.542 [2024-10-01 20:31:06.557158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:33:11.542 [2024-10-01 20:31:06.557167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:33:11.542 [2024-10-01 20:31:06.557175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:33:11.542 [2024-10-01 20:31:06.557184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:33:11.542 [2024-10-01 20:31:06.557191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:33:11.542 [2024-10-01 20:31:06.557200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:33:11.542 [2024-10-01 20:31:06.557207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:33:11.542 [2024-10-01 20:31:06.557216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:33:11.542 [2024-10-01 20:31:06.557223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:33:11.542 [2024-10-01 20:31:06.557233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:33:11.542 [2024-10-01 20:31:06.557241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:33:11.542 [2024-10-01 20:31:06.557250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:33:11.542 [2024-10-01 20:31:06.557257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:33:11.542 [2024-10-01 20:31:06.557266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:33:11.542 [2024-10-01 20:31:06.557273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:33:11.542 [2024-10-01 20:31:06.557289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:33:11.542 [2024-10-01 20:31:06.557296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:33:11.542 [2024-10-01 20:31:06.557306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 
wr_cnt: 0 state: free 00:33:11.542 [2024-10-01 20:31:06.557313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:33:11.542 [2024-10-01 20:31:06.557323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:33:11.542 [2024-10-01 20:31:06.557334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:33:11.542 [2024-10-01 20:31:06.557343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:33:11.542 [2024-10-01 20:31:06.557351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:33:11.542 [2024-10-01 20:31:06.557360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:33:11.542 [2024-10-01 20:31:06.557368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:33:11.542 [2024-10-01 20:31:06.557379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:33:11.542 [2024-10-01 20:31:06.557386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:33:11.542 [2024-10-01 20:31:06.557395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:33:11.542 [2024-10-01 20:31:06.557402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:33:11.542 [2024-10-01 20:31:06.557411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:33:11.542 [2024-10-01 20:31:06.557419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:33:11.542 [2024-10-01 20:31:06.557427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:33:11.542 [2024-10-01 20:31:06.557435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:33:11.542 [2024-10-01 20:31:06.557443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:33:11.542 [2024-10-01 20:31:06.557450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:33:11.542 [2024-10-01 20:31:06.557459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:33:11.543 [2024-10-01 20:31:06.557466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:33:11.543 [2024-10-01 20:31:06.557475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:33:11.543 [2024-10-01 20:31:06.557483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:33:11.543 [2024-10-01 20:31:06.557492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:33:11.543 [2024-10-01 20:31:06.557499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:33:11.543 [2024-10-01 20:31:06.557509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:33:11.543 [2024-10-01 20:31:06.557516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 89: 0 / 261120 wr_cnt: 0 state: free 00:33:11.543 [2024-10-01 20:31:06.557525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:33:11.543 [2024-10-01 20:31:06.557532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:33:11.543 [2024-10-01 20:31:06.557540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:33:11.543 [2024-10-01 20:31:06.557548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:33:11.543 [2024-10-01 20:31:06.557556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:33:11.543 [2024-10-01 20:31:06.557564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:33:11.543 [2024-10-01 20:31:06.557574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:33:11.543 [2024-10-01 20:31:06.557581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:33:11.543 [2024-10-01 20:31:06.557591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:33:11.543 [2024-10-01 20:31:06.557601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:33:11.543 [2024-10-01 20:31:06.557610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:33:11.543 [2024-10-01 20:31:06.557626] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:33:11.543 [2024-10-01 20:31:06.557635] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 73992bbb-6107-4f6f-8507-ce356e0255e3 00:33:11.543 [2024-10-01 20:31:06.557643] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:33:11.543 [2024-10-01 20:31:06.557652] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:33:11.543 [2024-10-01 20:31:06.557659] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:33:11.543 [2024-10-01 20:31:06.557668] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:33:11.543 [2024-10-01 20:31:06.557675] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:33:11.543 [2024-10-01 20:31:06.557686] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:33:11.543 [2024-10-01 20:31:06.557703] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:33:11.543 [2024-10-01 20:31:06.557711] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:33:11.543 [2024-10-01 20:31:06.557717] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:33:11.543 [2024-10-01 20:31:06.557726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:11.543 [2024-10-01 20:31:06.557733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:33:11.543 [2024-10-01 20:31:06.557743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.989 ms 00:33:11.543 [2024-10-01 20:31:06.557750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.543 [2024-10-01 20:31:06.570307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:11.543 [2024-10-01 20:31:06.570345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 
00:33:11.543 [2024-10-01 20:31:06.570358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.515 ms 00:33:11.543 [2024-10-01 20:31:06.570368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.543 [2024-10-01 20:31:06.570745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:11.543 [2024-10-01 20:31:06.570764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:33:11.543 [2024-10-01 20:31:06.570774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.350 ms 00:33:11.543 [2024-10-01 20:31:06.570782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.543 [2024-10-01 20:31:06.608378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:11.543 [2024-10-01 20:31:06.608420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:33:11.543 [2024-10-01 20:31:06.608436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:11.543 [2024-10-01 20:31:06.608444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.543 [2024-10-01 20:31:06.608523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:11.543 [2024-10-01 20:31:06.608532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:33:11.543 [2024-10-01 20:31:06.608541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:11.543 [2024-10-01 20:31:06.608548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.543 [2024-10-01 20:31:06.608639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:11.543 [2024-10-01 20:31:06.608650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:33:11.543 [2024-10-01 20:31:06.608659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:11.543 [2024-10-01 20:31:06.608669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.543 [2024-10-01 20:31:06.608707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:11.543 [2024-10-01 20:31:06.608716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:33:11.543 [2024-10-01 20:31:06.608725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:11.543 [2024-10-01 20:31:06.608733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.543 [2024-10-01 20:31:06.685917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:11.543 [2024-10-01 20:31:06.685974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:33:11.543 [2024-10-01 20:31:06.685987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:11.543 [2024-10-01 20:31:06.685998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.543 [2024-10-01 20:31:06.750043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:11.543 [2024-10-01 20:31:06.750096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:33:11.543 [2024-10-01 20:31:06.750108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:11.543 [2024-10-01 20:31:06.750117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.543 [2024-10-01 20:31:06.750195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:11.543 [2024-10-01 20:31:06.750205] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:33:11.543 [2024-10-01 20:31:06.750215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:11.543 [2024-10-01 20:31:06.750223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.543 [2024-10-01 20:31:06.750287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:11.543 [2024-10-01 20:31:06.750296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:33:11.543 [2024-10-01 20:31:06.750306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:11.543 [2024-10-01 20:31:06.750313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.543 [2024-10-01 20:31:06.750409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:11.543 [2024-10-01 20:31:06.750419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:33:11.543 [2024-10-01 20:31:06.750428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:11.543 [2024-10-01 20:31:06.750436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.543 [2024-10-01 20:31:06.750467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:11.543 [2024-10-01 20:31:06.750477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:33:11.543 [2024-10-01 20:31:06.750486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:11.543 [2024-10-01 20:31:06.750493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.543 [2024-10-01 20:31:06.750529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:11.543 [2024-10-01 20:31:06.750537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:33:11.543 [2024-10-01 20:31:06.750546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:11.543 [2024-10-01 20:31:06.750553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.543 [2024-10-01 20:31:06.750601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:11.543 [2024-10-01 20:31:06.750610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:33:11.543 [2024-10-01 20:31:06.750619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:11.543 [2024-10-01 20:31:06.750627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.543 [2024-10-01 20:31:06.750766] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 336.641 ms, result 0 00:33:11.802 true 00:33:11.802 20:31:06 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 75184 00:33:11.802 20:31:06 ftl.ftl_restore -- common/autotest_common.sh@950 -- # '[' -z 75184 ']' 00:33:11.802 20:31:06 ftl.ftl_restore -- common/autotest_common.sh@954 -- # kill -0 75184 00:33:11.802 20:31:06 ftl.ftl_restore -- common/autotest_common.sh@955 -- # uname 00:33:11.802 20:31:06 ftl.ftl_restore -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:11.802 20:31:06 ftl.ftl_restore -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75184 00:33:11.802 20:31:06 ftl.ftl_restore -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:33:11.802 20:31:06 ftl.ftl_restore -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:33:11.802 killing process with pid 
75184 00:33:11.802 20:31:06 ftl.ftl_restore -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75184' 00:33:11.802 20:31:06 ftl.ftl_restore -- common/autotest_common.sh@969 -- # kill 75184 00:33:11.802 20:31:06 ftl.ftl_restore -- common/autotest_common.sh@974 -- # wait 75184 00:33:23.997 20:31:17 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:33:26.530 262144+0 records in 00:33:26.530 262144+0 records out 00:33:26.530 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 3.89679 s, 276 MB/s 00:33:26.530 20:31:21 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:33:28.425 20:31:23 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:33:28.425 [2024-10-01 20:31:23.462796] Starting SPDK v25.01-pre git sha1 0c2005fb5 / DPDK 24.03.0 initialization... 00:33:28.425 [2024-10-01 20:31:23.463077] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75410 ] 00:33:28.425 [2024-10-01 20:31:23.609016] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:28.682 [2024-10-01 20:31:23.798533] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:33:29.246 [2024-10-01 20:31:24.243737] bdev.c:8310:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:33:29.246 [2024-10-01 20:31:24.243807] bdev.c:8310:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:33:29.246 [2024-10-01 20:31:24.397141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:29.246 [2024-10-01 20:31:24.397199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:33:29.246 [2024-10-01 20:31:24.397212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:33:29.246 [2024-10-01 20:31:24.397225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:29.246 [2024-10-01 20:31:24.397273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:29.246 [2024-10-01 20:31:24.397284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:33:29.246 [2024-10-01 20:31:24.397292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:33:29.246 [2024-10-01 20:31:24.397299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:29.246 [2024-10-01 20:31:24.397318] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:33:29.246 [2024-10-01 20:31:24.398025] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:33:29.246 [2024-10-01 20:31:24.398048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:29.246 [2024-10-01 20:31:24.398055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:33:29.246 [2024-10-01 20:31:24.398064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.734 ms 00:33:29.246 [2024-10-01 20:31:24.398071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:29.246 [2024-10-01 20:31:24.399379] mngt/ftl_mngt_md.c: 
455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:33:29.246 [2024-10-01 20:31:24.412341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:29.246 [2024-10-01 20:31:24.412392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:33:29.246 [2024-10-01 20:31:24.412405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.947 ms 00:33:29.246 [2024-10-01 20:31:24.412413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:29.246 [2024-10-01 20:31:24.412486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:29.246 [2024-10-01 20:31:24.412496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:33:29.246 [2024-10-01 20:31:24.412504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:33:29.246 [2024-10-01 20:31:24.412511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:29.246 [2024-10-01 20:31:24.419179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:29.246 [2024-10-01 20:31:24.419220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:33:29.246 [2024-10-01 20:31:24.419230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.594 ms 00:33:29.246 [2024-10-01 20:31:24.419238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:29.246 [2024-10-01 20:31:24.419311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:29.246 [2024-10-01 20:31:24.419320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:33:29.246 [2024-10-01 20:31:24.419328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:33:29.246 [2024-10-01 20:31:24.419335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:29.246 [2024-10-01 20:31:24.419385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:29.246 [2024-10-01 20:31:24.419394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:33:29.246 [2024-10-01 20:31:24.419402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:33:29.246 [2024-10-01 20:31:24.419409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:29.246 [2024-10-01 20:31:24.419434] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:33:29.246 [2024-10-01 20:31:24.422901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:29.246 [2024-10-01 20:31:24.422944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:33:29.246 [2024-10-01 20:31:24.422953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.474 ms 00:33:29.246 [2024-10-01 20:31:24.422961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:29.246 [2024-10-01 20:31:24.422990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:29.246 [2024-10-01 20:31:24.422998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:33:29.246 [2024-10-01 20:31:24.423006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:33:29.246 [2024-10-01 20:31:24.423014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:29.246 [2024-10-01 20:31:24.423043] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:33:29.246 [2024-10-01 20:31:24.423062] 
upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:33:29.246 [2024-10-01 20:31:24.423096] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:33:29.246 [2024-10-01 20:31:24.423110] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:33:29.246 [2024-10-01 20:31:24.423212] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:33:29.246 [2024-10-01 20:31:24.423229] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:33:29.246 [2024-10-01 20:31:24.423240] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:33:29.246 [2024-10-01 20:31:24.423252] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:33:29.246 [2024-10-01 20:31:24.423261] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:33:29.246 [2024-10-01 20:31:24.423268] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:33:29.246 [2024-10-01 20:31:24.423276] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:33:29.246 [2024-10-01 20:31:24.423283] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:33:29.246 [2024-10-01 20:31:24.423290] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:33:29.246 [2024-10-01 20:31:24.423298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:29.246 [2024-10-01 20:31:24.423305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:33:29.247 [2024-10-01 20:31:24.423313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.256 ms 00:33:29.247 [2024-10-01 20:31:24.423319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:29.247 [2024-10-01 20:31:24.423401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:29.247 [2024-10-01 20:31:24.423411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:33:29.247 [2024-10-01 20:31:24.423418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:33:29.247 [2024-10-01 20:31:24.423425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:29.247 [2024-10-01 20:31:24.423536] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:33:29.247 [2024-10-01 20:31:24.423553] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:33:29.247 [2024-10-01 20:31:24.423561] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:33:29.247 [2024-10-01 20:31:24.423569] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:29.247 [2024-10-01 20:31:24.423576] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:33:29.247 [2024-10-01 20:31:24.423584] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:33:29.247 [2024-10-01 20:31:24.423591] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:33:29.247 [2024-10-01 20:31:24.423598] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:33:29.247 [2024-10-01 20:31:24.423605] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 
00:33:29.247 [2024-10-01 20:31:24.423611] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:33:29.247 [2024-10-01 20:31:24.423618] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:33:29.247 [2024-10-01 20:31:24.423624] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:33:29.247 [2024-10-01 20:31:24.423632] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:33:29.247 [2024-10-01 20:31:24.423645] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:33:29.247 [2024-10-01 20:31:24.423652] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:33:29.247 [2024-10-01 20:31:24.423658] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:29.247 [2024-10-01 20:31:24.423665] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:33:29.247 [2024-10-01 20:31:24.423671] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:33:29.247 [2024-10-01 20:31:24.423678] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:29.247 [2024-10-01 20:31:24.423684] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:33:29.247 [2024-10-01 20:31:24.423703] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:33:29.247 [2024-10-01 20:31:24.423711] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:29.247 [2024-10-01 20:31:24.423717] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:33:29.247 [2024-10-01 20:31:24.423724] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:33:29.247 [2024-10-01 20:31:24.423730] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:29.247 [2024-10-01 20:31:24.423737] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:33:29.247 [2024-10-01 20:31:24.423743] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:33:29.247 [2024-10-01 20:31:24.423749] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:29.247 [2024-10-01 20:31:24.423756] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:33:29.247 [2024-10-01 20:31:24.423762] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:33:29.247 [2024-10-01 20:31:24.423769] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:29.247 [2024-10-01 20:31:24.423775] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:33:29.247 [2024-10-01 20:31:24.423782] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:33:29.247 [2024-10-01 20:31:24.423788] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:33:29.247 [2024-10-01 20:31:24.423795] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:33:29.247 [2024-10-01 20:31:24.423801] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:33:29.247 [2024-10-01 20:31:24.423807] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:33:29.247 [2024-10-01 20:31:24.423814] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:33:29.247 [2024-10-01 20:31:24.423820] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:33:29.247 [2024-10-01 20:31:24.423826] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:29.247 [2024-10-01 20:31:24.423833] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:33:29.247 [2024-10-01 20:31:24.423839] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:33:29.247 [2024-10-01 20:31:24.423845] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:29.247 [2024-10-01 20:31:24.423851] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:33:29.247 [2024-10-01 20:31:24.423860] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:33:29.247 [2024-10-01 20:31:24.423869] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:33:29.247 [2024-10-01 20:31:24.423877] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:29.247 [2024-10-01 20:31:24.423884] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:33:29.247 [2024-10-01 20:31:24.423891] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:33:29.247 [2024-10-01 20:31:24.423897] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:33:29.247 [2024-10-01 20:31:24.423904] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:33:29.247 [2024-10-01 20:31:24.423910] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:33:29.247 [2024-10-01 20:31:24.423917] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:33:29.247 [2024-10-01 20:31:24.423925] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:33:29.247 [2024-10-01 20:31:24.423934] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:29.247 [2024-10-01 20:31:24.423942] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:33:29.247 [2024-10-01 20:31:24.423949] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:33:29.247 [2024-10-01 20:31:24.423956] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:33:29.247 [2024-10-01 20:31:24.423962] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:33:29.247 [2024-10-01 20:31:24.423969] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:33:29.247 [2024-10-01 20:31:24.423976] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:33:29.247 [2024-10-01 20:31:24.423982] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:33:29.247 [2024-10-01 20:31:24.423989] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:33:29.247 [2024-10-01 20:31:24.423996] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:33:29.247 [2024-10-01 20:31:24.424002] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:33:29.247 [2024-10-01 20:31:24.424009] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:33:29.247 [2024-10-01 20:31:24.424016] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:33:29.247 [2024-10-01 20:31:24.424024] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:33:29.247 [2024-10-01 20:31:24.424031] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:33:29.247 [2024-10-01 20:31:24.424038] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:33:29.247 [2024-10-01 20:31:24.424045] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:29.247 [2024-10-01 20:31:24.424053] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:33:29.247 [2024-10-01 20:31:24.424061] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:33:29.247 [2024-10-01 20:31:24.424068] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:33:29.247 [2024-10-01 20:31:24.424075] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:33:29.247 [2024-10-01 20:31:24.424082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:29.247 [2024-10-01 20:31:24.424090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:33:29.247 [2024-10-01 20:31:24.424097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.614 ms 00:33:29.247 [2024-10-01 20:31:24.424104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:29.247 [2024-10-01 20:31:24.450658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:29.247 [2024-10-01 20:31:24.450715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:33:29.247 [2024-10-01 20:31:24.450726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.511 ms 00:33:29.247 [2024-10-01 20:31:24.450735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:29.247 [2024-10-01 20:31:24.450827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:29.247 [2024-10-01 20:31:24.450836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:33:29.247 [2024-10-01 20:31:24.450844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:33:29.247 [2024-10-01 20:31:24.450851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:29.505 [2024-10-01 20:31:24.483001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:29.505 [2024-10-01 20:31:24.483047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:33:29.505 [2024-10-01 20:31:24.483062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.090 ms 00:33:29.505 [2024-10-01 20:31:24.483069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:29.505 [2024-10-01 20:31:24.483109] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:29.505 [2024-10-01 20:31:24.483118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:33:29.505 [2024-10-01 20:31:24.483126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:33:29.505 [2024-10-01 20:31:24.483133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:29.505 [2024-10-01 20:31:24.483565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:29.505 [2024-10-01 20:31:24.483590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:33:29.505 [2024-10-01 20:31:24.483599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.379 ms 00:33:29.505 [2024-10-01 20:31:24.483610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:29.505 [2024-10-01 20:31:24.483748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:29.505 [2024-10-01 20:31:24.483763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:33:29.505 [2024-10-01 20:31:24.483771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.119 ms 00:33:29.505 [2024-10-01 20:31:24.483778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:29.505 [2024-10-01 20:31:24.496811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:29.505 [2024-10-01 20:31:24.496846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:33:29.505 [2024-10-01 20:31:24.496856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.014 ms 00:33:29.505 [2024-10-01 20:31:24.496864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:29.505 [2024-10-01 20:31:24.509392] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:33:29.505 [2024-10-01 20:31:24.509435] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:33:29.505 [2024-10-01 20:31:24.509445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:29.505 [2024-10-01 20:31:24.509453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:33:29.505 [2024-10-01 20:31:24.509462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.461 ms 00:33:29.505 [2024-10-01 20:31:24.509469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:29.505 [2024-10-01 20:31:24.533784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:29.505 [2024-10-01 20:31:24.533852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:33:29.505 [2024-10-01 20:31:24.533864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.271 ms 00:33:29.505 [2024-10-01 20:31:24.533871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:29.505 [2024-10-01 20:31:24.545858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:29.505 [2024-10-01 20:31:24.545899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:33:29.505 [2024-10-01 20:31:24.545910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.923 ms 00:33:29.505 [2024-10-01 20:31:24.545917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:29.505 [2024-10-01 20:31:24.557223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:29.505 [2024-10-01 
20:31:24.557270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:33:29.505 [2024-10-01 20:31:24.557281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.269 ms 00:33:29.505 [2024-10-01 20:31:24.557289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:29.505 [2024-10-01 20:31:24.557955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:29.505 [2024-10-01 20:31:24.557979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:33:29.505 [2024-10-01 20:31:24.557989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.564 ms 00:33:29.505 [2024-10-01 20:31:24.557997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:29.505 [2024-10-01 20:31:24.613626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:29.505 [2024-10-01 20:31:24.613677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:33:29.505 [2024-10-01 20:31:24.613705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 55.612 ms 00:33:29.505 [2024-10-01 20:31:24.613713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:29.505 [2024-10-01 20:31:24.624393] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:33:29.505 [2024-10-01 20:31:24.627465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:29.505 [2024-10-01 20:31:24.627496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:33:29.505 [2024-10-01 20:31:24.627507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.696 ms 00:33:29.505 [2024-10-01 20:31:24.627517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:29.505 [2024-10-01 20:31:24.627614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:29.505 [2024-10-01 20:31:24.627625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:33:29.505 [2024-10-01 20:31:24.627633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:33:29.505 [2024-10-01 20:31:24.627641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:29.505 [2024-10-01 20:31:24.627714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:29.505 [2024-10-01 20:31:24.627730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:33:29.505 [2024-10-01 20:31:24.627739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:33:29.505 [2024-10-01 20:31:24.627746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:29.505 [2024-10-01 20:31:24.627764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:29.505 [2024-10-01 20:31:24.627775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:33:29.505 [2024-10-01 20:31:24.627783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:33:29.505 [2024-10-01 20:31:24.627790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:29.505 [2024-10-01 20:31:24.627817] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:33:29.505 [2024-10-01 20:31:24.627826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:29.506 [2024-10-01 20:31:24.627833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:33:29.506 
[2024-10-01 20:31:24.627841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:33:29.506 [2024-10-01 20:31:24.627851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:29.506 [2024-10-01 20:31:24.650988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:29.506 [2024-10-01 20:31:24.651027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:33:29.506 [2024-10-01 20:31:24.651039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.121 ms 00:33:29.506 [2024-10-01 20:31:24.651048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:29.506 [2024-10-01 20:31:24.651124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:29.506 [2024-10-01 20:31:24.651134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:33:29.506 [2024-10-01 20:31:24.651143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:33:29.506 [2024-10-01 20:31:24.651150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:29.506 [2024-10-01 20:31:24.652182] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 254.650 ms, result 0 00:33:53.111  Copying: 42/1024 [MB] (42 MBps) Copying: 87/1024 [MB] (45 MBps) Copying: 130/1024 [MB] (43 MBps) Copying: 173/1024 [MB] (43 MBps) Copying: 215/1024 [MB] (41 MBps) Copying: 258/1024 [MB] (42 MBps) Copying: 301/1024 [MB] (42 MBps) Copying: 345/1024 [MB] (44 MBps) Copying: 388/1024 [MB] (43 MBps) Copying: 431/1024 [MB] (43 MBps) Copying: 473/1024 [MB] (41 MBps) Copying: 504/1024 [MB] (30 MBps) Copying: 548/1024 [MB] (44 MBps) Copying: 593/1024 [MB] (44 MBps) Copying: 636/1024 [MB] (42 MBps) Copying: 680/1024 [MB] (44 MBps) Copying: 724/1024 [MB] (43 MBps) Copying: 769/1024 [MB] (45 MBps) Copying: 814/1024 [MB] (45 MBps) Copying: 859/1024 [MB] (44 MBps) Copying: 903/1024 [MB] (44 MBps) Copying: 955/1024 [MB] (51 MBps) Copying: 1000/1024 [MB] (45 MBps) Copying: 1024/1024 [MB] (average 43 MBps)[2024-10-01 20:31:48.186431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:53.111 [2024-10-01 20:31:48.186484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:33:53.111 [2024-10-01 20:31:48.186498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:33:53.111 [2024-10-01 20:31:48.186507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:53.111 [2024-10-01 20:31:48.186532] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:33:53.111 [2024-10-01 20:31:48.189211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:53.111 [2024-10-01 20:31:48.189252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:33:53.111 [2024-10-01 20:31:48.189266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.665 ms 00:33:53.111 [2024-10-01 20:31:48.189276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:53.111 [2024-10-01 20:31:48.190788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:53.111 [2024-10-01 20:31:48.190818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:33:53.111 [2024-10-01 20:31:48.190828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.487 ms 00:33:53.111 [2024-10-01 20:31:48.190836] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:53.111 [2024-10-01 20:31:48.203682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:53.111 [2024-10-01 20:31:48.203741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:33:53.111 [2024-10-01 20:31:48.203753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.828 ms 00:33:53.111 [2024-10-01 20:31:48.203761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:53.111 [2024-10-01 20:31:48.209922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:53.111 [2024-10-01 20:31:48.209950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:33:53.111 [2024-10-01 20:31:48.209961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.124 ms 00:33:53.112 [2024-10-01 20:31:48.209969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:53.112 [2024-10-01 20:31:48.234392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:53.112 [2024-10-01 20:31:48.234436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:33:53.112 [2024-10-01 20:31:48.234448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.357 ms 00:33:53.112 [2024-10-01 20:31:48.234455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:53.112 [2024-10-01 20:31:48.249374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:53.112 [2024-10-01 20:31:48.249436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:33:53.112 [2024-10-01 20:31:48.249457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.865 ms 00:33:53.112 [2024-10-01 20:31:48.249465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:53.112 [2024-10-01 20:31:48.249616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:53.112 [2024-10-01 20:31:48.249625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:33:53.112 [2024-10-01 20:31:48.249634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:33:53.112 [2024-10-01 20:31:48.249642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:53.112 [2024-10-01 20:31:48.274057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:53.112 [2024-10-01 20:31:48.274101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:33:53.112 [2024-10-01 20:31:48.274112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.401 ms 00:33:53.112 [2024-10-01 20:31:48.274121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:53.112 [2024-10-01 20:31:48.296494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:53.112 [2024-10-01 20:31:48.296535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:33:53.112 [2024-10-01 20:31:48.296547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.333 ms 00:33:53.112 [2024-10-01 20:31:48.296555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:53.112 [2024-10-01 20:31:48.318564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:53.112 [2024-10-01 20:31:48.318606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:33:53.112 [2024-10-01 20:31:48.318617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.970 
ms 00:33:53.112 [2024-10-01 20:31:48.318625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:53.371 [2024-10-01 20:31:48.348838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:53.371 [2024-10-01 20:31:48.348923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:33:53.371 [2024-10-01 20:31:48.348943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.122 ms 00:33:53.371 [2024-10-01 20:31:48.348957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:53.371 [2024-10-01 20:31:48.349077] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:33:53.371 [2024-10-01 20:31:48.349112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:33:53.371 [2024-10-01 20:31:48.349130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:33:53.371 [2024-10-01 20:31:48.349144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:33:53.371 [2024-10-01 20:31:48.349158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:33:53.371 [2024-10-01 20:31:48.349173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:33:53.371 [2024-10-01 20:31:48.349187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:33:53.371 [2024-10-01 20:31:48.349201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:33:53.371 [2024-10-01 20:31:48.349214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:33:53.371 [2024-10-01 20:31:48.349228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:33:53.371 [2024-10-01 20:31:48.349242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:33:53.371 [2024-10-01 20:31:48.349256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:33:53.371 [2024-10-01 20:31:48.349269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:33:53.371 [2024-10-01 20:31:48.349284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:33:53.371 [2024-10-01 20:31:48.349298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:33:53.371 [2024-10-01 20:31:48.349311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:33:53.371 [2024-10-01 20:31:48.349325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:33:53.371 [2024-10-01 20:31:48.349339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:33:53.371 [2024-10-01 20:31:48.349354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:33:53.371 [2024-10-01 20:31:48.349368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:33:53.371 [2024-10-01 20:31:48.349381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:33:53.371 [2024-10-01 
20:31:48.349394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:33:53.371 [2024-10-01 20:31:48.349408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:33:53.371 [2024-10-01 20:31:48.349422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:33:53.371 [2024-10-01 20:31:48.349435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:33:53.371 [2024-10-01 20:31:48.349448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:33:53.371 [2024-10-01 20:31:48.349462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:33:53.371 [2024-10-01 20:31:48.349476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:33:53.371 [2024-10-01 20:31:48.349489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:33:53.371 [2024-10-01 20:31:48.349503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:33:53.371 [2024-10-01 20:31:48.349519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:33:53.371 [2024-10-01 20:31:48.349533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:33:53.371 [2024-10-01 20:31:48.349546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:33:53.371 [2024-10-01 20:31:48.349559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:33:53.371 [2024-10-01 20:31:48.349573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:33:53.371 [2024-10-01 20:31:48.349585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:33:53.371 [2024-10-01 20:31:48.349599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:33:53.371 [2024-10-01 20:31:48.349612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:33:53.371 [2024-10-01 20:31:48.349629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:33:53.371 [2024-10-01 20:31:48.349643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:33:53.371 [2024-10-01 20:31:48.349656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:33:53.371 [2024-10-01 20:31:48.349670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:33:53.371 [2024-10-01 20:31:48.349683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:33:53.371 [2024-10-01 20:31:48.349714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:33:53.371 [2024-10-01 20:31:48.349728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:33:53.371 [2024-10-01 20:31:48.349741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 
00:33:53.371 [2024-10-01 20:31:48.349755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:33:53.371 [2024-10-01 20:31:48.349768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:33:53.371 [2024-10-01 20:31:48.349781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:33:53.371 [2024-10-01 20:31:48.349794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:33:53.371 [2024-10-01 20:31:48.349807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:33:53.371 [2024-10-01 20:31:48.349820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:33:53.371 [2024-10-01 20:31:48.349833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:33:53.371 [2024-10-01 20:31:48.349847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:33:53.371 [2024-10-01 20:31:48.349861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:33:53.371 [2024-10-01 20:31:48.349875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:33:53.371 [2024-10-01 20:31:48.349887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:33:53.371 [2024-10-01 20:31:48.349901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:33:53.371 [2024-10-01 20:31:48.349914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:33:53.371 [2024-10-01 20:31:48.349929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:33:53.371 [2024-10-01 20:31:48.349942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:33:53.371 [2024-10-01 20:31:48.349956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:33:53.371 [2024-10-01 20:31:48.349970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:33:53.371 [2024-10-01 20:31:48.349984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:33:53.371 [2024-10-01 20:31:48.349997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:33:53.371 [2024-10-01 20:31:48.350011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:33:53.371 [2024-10-01 20:31:48.350025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:33:53.371 [2024-10-01 20:31:48.350038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:33:53.371 [2024-10-01 20:31:48.350051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:33:53.371 [2024-10-01 20:31:48.350064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:33:53.371 [2024-10-01 20:31:48.350079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 
wr_cnt: 0 state: free 00:33:53.371 [2024-10-01 20:31:48.350092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:33:53.371 [2024-10-01 20:31:48.350105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:33:53.372 [2024-10-01 20:31:48.350117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:33:53.372 [2024-10-01 20:31:48.350131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:33:53.372 [2024-10-01 20:31:48.350144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:33:53.372 [2024-10-01 20:31:48.350157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:33:53.372 [2024-10-01 20:31:48.350172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:33:53.372 [2024-10-01 20:31:48.350185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:33:53.372 [2024-10-01 20:31:48.350199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:33:53.372 [2024-10-01 20:31:48.350213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:33:53.372 [2024-10-01 20:31:48.350227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:33:53.372 [2024-10-01 20:31:48.350241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:33:53.372 [2024-10-01 20:31:48.350254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:33:53.372 [2024-10-01 20:31:48.350268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:33:53.372 [2024-10-01 20:31:48.350283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:33:53.372 [2024-10-01 20:31:48.350297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:33:53.372 [2024-10-01 20:31:48.350311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:33:53.372 [2024-10-01 20:31:48.350326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:33:53.372 [2024-10-01 20:31:48.350340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:33:53.372 [2024-10-01 20:31:48.350355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:33:53.372 [2024-10-01 20:31:48.350369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:33:53.372 [2024-10-01 20:31:48.350382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:33:53.372 [2024-10-01 20:31:48.350396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:33:53.372 [2024-10-01 20:31:48.350411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:33:53.372 [2024-10-01 20:31:48.350425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 95: 0 / 261120 wr_cnt: 0 state: free 00:33:53.372 [2024-10-01 20:31:48.350439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:33:53.372 [2024-10-01 20:31:48.350453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:33:53.372 [2024-10-01 20:31:48.350467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:33:53.372 [2024-10-01 20:31:48.350481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:33:53.372 [2024-10-01 20:31:48.350496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:33:53.372 [2024-10-01 20:31:48.350521] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:33:53.372 [2024-10-01 20:31:48.350536] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 73992bbb-6107-4f6f-8507-ce356e0255e3 00:33:53.372 [2024-10-01 20:31:48.350550] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:33:53.372 [2024-10-01 20:31:48.350563] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:33:53.372 [2024-10-01 20:31:48.350575] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:33:53.372 [2024-10-01 20:31:48.350586] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:33:53.372 [2024-10-01 20:31:48.350598] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:33:53.372 [2024-10-01 20:31:48.350611] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:33:53.372 [2024-10-01 20:31:48.350632] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:33:53.372 [2024-10-01 20:31:48.350644] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:33:53.372 [2024-10-01 20:31:48.350656] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:33:53.372 [2024-10-01 20:31:48.350669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:53.372 [2024-10-01 20:31:48.350683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:33:53.372 [2024-10-01 20:31:48.350725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.594 ms 00:33:53.372 [2024-10-01 20:31:48.350736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:53.372 [2024-10-01 20:31:48.370941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:53.372 [2024-10-01 20:31:48.371006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:33:53.372 [2024-10-01 20:31:48.371026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.144 ms 00:33:53.372 [2024-10-01 20:31:48.371041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:53.372 [2024-10-01 20:31:48.371597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:53.372 [2024-10-01 20:31:48.371624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:33:53.372 [2024-10-01 20:31:48.371639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.501 ms 00:33:53.372 [2024-10-01 20:31:48.371651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:53.372 [2024-10-01 20:31:48.416872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:53.372 [2024-10-01 20:31:48.416947] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:33:53.372 [2024-10-01 20:31:48.416965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:53.372 [2024-10-01 20:31:48.416983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:53.372 [2024-10-01 20:31:48.417078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:53.372 [2024-10-01 20:31:48.417092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:33:53.372 [2024-10-01 20:31:48.417105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:53.372 [2024-10-01 20:31:48.417116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:53.372 [2024-10-01 20:31:48.417208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:53.372 [2024-10-01 20:31:48.417222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:33:53.372 [2024-10-01 20:31:48.417236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:53.372 [2024-10-01 20:31:48.417247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:53.372 [2024-10-01 20:31:48.417275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:53.372 [2024-10-01 20:31:48.417287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:33:53.372 [2024-10-01 20:31:48.417299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:53.372 [2024-10-01 20:31:48.417311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:53.372 [2024-10-01 20:31:48.528328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:53.372 [2024-10-01 20:31:48.528386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:33:53.372 [2024-10-01 20:31:48.528401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:53.372 [2024-10-01 20:31:48.528410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:53.631 [2024-10-01 20:31:48.591591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:53.631 [2024-10-01 20:31:48.591641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:33:53.631 [2024-10-01 20:31:48.591653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:53.631 [2024-10-01 20:31:48.591661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:53.631 [2024-10-01 20:31:48.591751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:53.631 [2024-10-01 20:31:48.591763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:33:53.631 [2024-10-01 20:31:48.591779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:53.631 [2024-10-01 20:31:48.591787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:53.631 [2024-10-01 20:31:48.591820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:53.631 [2024-10-01 20:31:48.591832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:33:53.631 [2024-10-01 20:31:48.591840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:53.631 [2024-10-01 20:31:48.591847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:53.631 [2024-10-01 20:31:48.591932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Rollback 00:33:53.631 [2024-10-01 20:31:48.591941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:33:53.631 [2024-10-01 20:31:48.591949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:53.631 [2024-10-01 20:31:48.591957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:53.631 [2024-10-01 20:31:48.591986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:53.631 [2024-10-01 20:31:48.591995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:33:53.631 [2024-10-01 20:31:48.592006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:53.631 [2024-10-01 20:31:48.592013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:53.631 [2024-10-01 20:31:48.592047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:53.631 [2024-10-01 20:31:48.592055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:33:53.631 [2024-10-01 20:31:48.592063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:53.631 [2024-10-01 20:31:48.592069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:53.631 [2024-10-01 20:31:48.592108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:53.631 [2024-10-01 20:31:48.592119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:33:53.631 [2024-10-01 20:31:48.592127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:53.631 [2024-10-01 20:31:48.592135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:53.631 [2024-10-01 20:31:48.592239] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 405.793 ms, result 0 00:33:56.154 00:33:56.154 00:33:56.154 20:31:50 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:33:56.154 [2024-10-01 20:31:50.881725] Starting SPDK v25.01-pre git sha1 0c2005fb5 / DPDK 24.03.0 initialization... 
00:33:56.154 [2024-10-01 20:31:50.881902] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75690 ] 00:33:56.154 [2024-10-01 20:31:51.034933] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:56.154 [2024-10-01 20:31:51.273706] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:33:56.722 [2024-10-01 20:31:51.715884] bdev.c:8310:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:33:56.722 [2024-10-01 20:31:51.715955] bdev.c:8310:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:33:56.722 [2024-10-01 20:31:51.869174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:56.722 [2024-10-01 20:31:51.869223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:33:56.722 [2024-10-01 20:31:51.869236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:33:56.722 [2024-10-01 20:31:51.869247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:56.722 [2024-10-01 20:31:51.869294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:56.722 [2024-10-01 20:31:51.869304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:33:56.722 [2024-10-01 20:31:51.869312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:33:56.722 [2024-10-01 20:31:51.869320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:56.722 [2024-10-01 20:31:51.869339] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:33:56.722 [2024-10-01 20:31:51.870036] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:33:56.722 [2024-10-01 20:31:51.870052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:56.722 [2024-10-01 20:31:51.870059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:33:56.722 [2024-10-01 20:31:51.870068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.717 ms 00:33:56.722 [2024-10-01 20:31:51.870075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:56.722 [2024-10-01 20:31:51.871330] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:33:56.722 [2024-10-01 20:31:51.883725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:56.722 [2024-10-01 20:31:51.883762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:33:56.722 [2024-10-01 20:31:51.883775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.397 ms 00:33:56.722 [2024-10-01 20:31:51.883783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:56.722 [2024-10-01 20:31:51.883847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:56.722 [2024-10-01 20:31:51.883856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:33:56.722 [2024-10-01 20:31:51.883864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:33:56.722 [2024-10-01 20:31:51.883873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:56.722 [2024-10-01 20:31:51.890140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:33:56.722 [2024-10-01 20:31:51.890171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:33:56.722 [2024-10-01 20:31:51.890181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.208 ms 00:33:56.722 [2024-10-01 20:31:51.890188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:56.722 [2024-10-01 20:31:51.890256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:56.722 [2024-10-01 20:31:51.890265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:33:56.722 [2024-10-01 20:31:51.890273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:33:56.722 [2024-10-01 20:31:51.890280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:56.722 [2024-10-01 20:31:51.890324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:56.722 [2024-10-01 20:31:51.890333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:33:56.722 [2024-10-01 20:31:51.890341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:33:56.722 [2024-10-01 20:31:51.890349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:56.722 [2024-10-01 20:31:51.890370] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:33:56.722 [2024-10-01 20:31:51.893802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:56.722 [2024-10-01 20:31:51.893831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:33:56.722 [2024-10-01 20:31:51.893840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.437 ms 00:33:56.722 [2024-10-01 20:31:51.893848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:56.722 [2024-10-01 20:31:51.893884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:56.722 [2024-10-01 20:31:51.893891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:33:56.722 [2024-10-01 20:31:51.893899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:33:56.722 [2024-10-01 20:31:51.893906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:56.722 [2024-10-01 20:31:51.893934] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:33:56.722 [2024-10-01 20:31:51.893952] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:33:56.722 [2024-10-01 20:31:51.893986] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:33:56.722 [2024-10-01 20:31:51.894000] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:33:56.722 [2024-10-01 20:31:51.894101] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:33:56.722 [2024-10-01 20:31:51.894111] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:33:56.722 [2024-10-01 20:31:51.894122] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:33:56.722 [2024-10-01 20:31:51.894134] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:33:56.722 [2024-10-01 20:31:51.894142] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:33:56.722 [2024-10-01 20:31:51.894150] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:33:56.722 [2024-10-01 20:31:51.894156] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:33:56.722 [2024-10-01 20:31:51.894164] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:33:56.722 [2024-10-01 20:31:51.894171] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:33:56.722 [2024-10-01 20:31:51.894178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:56.722 [2024-10-01 20:31:51.894185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:33:56.722 [2024-10-01 20:31:51.894193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.246 ms 00:33:56.722 [2024-10-01 20:31:51.894200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:56.722 [2024-10-01 20:31:51.894281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:56.722 [2024-10-01 20:31:51.894292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:33:56.722 [2024-10-01 20:31:51.894299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:33:56.722 [2024-10-01 20:31:51.894305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:56.722 [2024-10-01 20:31:51.894416] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:33:56.722 [2024-10-01 20:31:51.894427] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:33:56.722 [2024-10-01 20:31:51.894435] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:33:56.722 [2024-10-01 20:31:51.894442] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:56.722 [2024-10-01 20:31:51.894450] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:33:56.722 [2024-10-01 20:31:51.894457] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:33:56.722 [2024-10-01 20:31:51.894463] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:33:56.722 [2024-10-01 20:31:51.894470] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:33:56.722 [2024-10-01 20:31:51.894476] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:33:56.722 [2024-10-01 20:31:51.894482] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:33:56.722 [2024-10-01 20:31:51.894489] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:33:56.722 [2024-10-01 20:31:51.894496] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:33:56.722 [2024-10-01 20:31:51.894502] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:33:56.722 [2024-10-01 20:31:51.894514] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:33:56.722 [2024-10-01 20:31:51.894521] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:33:56.722 [2024-10-01 20:31:51.894527] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:56.722 [2024-10-01 20:31:51.894533] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:33:56.722 [2024-10-01 20:31:51.894541] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:33:56.722 [2024-10-01 20:31:51.894547] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:56.722 [2024-10-01 20:31:51.894554] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:33:56.722 [2024-10-01 20:31:51.894560] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:33:56.722 [2024-10-01 20:31:51.894566] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:56.722 [2024-10-01 20:31:51.894573] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:33:56.722 [2024-10-01 20:31:51.894579] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:33:56.722 [2024-10-01 20:31:51.894585] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:56.722 [2024-10-01 20:31:51.894591] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:33:56.722 [2024-10-01 20:31:51.894597] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:33:56.722 [2024-10-01 20:31:51.894603] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:56.722 [2024-10-01 20:31:51.894610] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:33:56.722 [2024-10-01 20:31:51.894616] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:33:56.722 [2024-10-01 20:31:51.894622] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:56.722 [2024-10-01 20:31:51.894629] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:33:56.722 [2024-10-01 20:31:51.894635] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:33:56.722 [2024-10-01 20:31:51.894641] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:33:56.723 [2024-10-01 20:31:51.894647] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:33:56.723 [2024-10-01 20:31:51.894653] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:33:56.723 [2024-10-01 20:31:51.894660] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:33:56.723 [2024-10-01 20:31:51.894668] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:33:56.723 [2024-10-01 20:31:51.894674] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:33:56.723 [2024-10-01 20:31:51.894680] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:56.723 [2024-10-01 20:31:51.894686] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:33:56.723 [2024-10-01 20:31:51.894705] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:33:56.723 [2024-10-01 20:31:51.894711] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:56.723 [2024-10-01 20:31:51.894718] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:33:56.723 [2024-10-01 20:31:51.894731] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:33:56.723 [2024-10-01 20:31:51.894740] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:33:56.723 [2024-10-01 20:31:51.894748] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:56.723 [2024-10-01 20:31:51.894755] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:33:56.723 [2024-10-01 20:31:51.894762] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:33:56.723 [2024-10-01 20:31:51.894770] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:33:56.723 
[2024-10-01 20:31:51.894777] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:33:56.723 [2024-10-01 20:31:51.894783] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:33:56.723 [2024-10-01 20:31:51.894790] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:33:56.723 [2024-10-01 20:31:51.894798] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:33:56.723 [2024-10-01 20:31:51.894807] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:56.723 [2024-10-01 20:31:51.894815] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:33:56.723 [2024-10-01 20:31:51.894822] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:33:56.723 [2024-10-01 20:31:51.894830] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:33:56.723 [2024-10-01 20:31:51.894837] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:33:56.723 [2024-10-01 20:31:51.894844] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:33:56.723 [2024-10-01 20:31:51.894851] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:33:56.723 [2024-10-01 20:31:51.894858] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:33:56.723 [2024-10-01 20:31:51.894865] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:33:56.723 [2024-10-01 20:31:51.894872] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:33:56.723 [2024-10-01 20:31:51.894879] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:33:56.723 [2024-10-01 20:31:51.894886] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:33:56.723 [2024-10-01 20:31:51.894893] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:33:56.723 [2024-10-01 20:31:51.894900] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:33:56.723 [2024-10-01 20:31:51.894908] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:33:56.723 [2024-10-01 20:31:51.894915] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:33:56.723 [2024-10-01 20:31:51.894923] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:56.723 [2024-10-01 20:31:51.894931] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:33:56.723 [2024-10-01 20:31:51.894938] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:33:56.723 [2024-10-01 20:31:51.894945] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:33:56.723 [2024-10-01 20:31:51.894952] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:33:56.723 [2024-10-01 20:31:51.894958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:56.723 [2024-10-01 20:31:51.894965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:33:56.723 [2024-10-01 20:31:51.894973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.611 ms 00:33:56.723 [2024-10-01 20:31:51.894979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:56.723 [2024-10-01 20:31:51.921585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:56.723 [2024-10-01 20:31:51.921626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:33:56.723 [2024-10-01 20:31:51.921638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.562 ms 00:33:56.723 [2024-10-01 20:31:51.921646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:56.723 [2024-10-01 20:31:51.921749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:56.723 [2024-10-01 20:31:51.921759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:33:56.723 [2024-10-01 20:31:51.921767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:33:56.723 [2024-10-01 20:31:51.921774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:56.982 [2024-10-01 20:31:51.953250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:56.982 [2024-10-01 20:31:51.953288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:33:56.982 [2024-10-01 20:31:51.953301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.421 ms 00:33:56.982 [2024-10-01 20:31:51.953308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:56.982 [2024-10-01 20:31:51.953347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:56.982 [2024-10-01 20:31:51.953355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:33:56.982 [2024-10-01 20:31:51.953363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:33:56.982 [2024-10-01 20:31:51.953370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:56.982 [2024-10-01 20:31:51.953797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:56.982 [2024-10-01 20:31:51.953821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:33:56.982 [2024-10-01 20:31:51.953831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.379 ms 00:33:56.982 [2024-10-01 20:31:51.953842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:56.982 [2024-10-01 20:31:51.953970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:56.982 [2024-10-01 20:31:51.953979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:33:56.982 [2024-10-01 20:31:51.953987] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.111 ms 00:33:56.982 [2024-10-01 20:31:51.953995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:56.982 [2024-10-01 20:31:51.966950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:56.982 [2024-10-01 20:31:51.966984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:33:56.982 [2024-10-01 20:31:51.966994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.936 ms 00:33:56.982 [2024-10-01 20:31:51.967001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:56.982 [2024-10-01 20:31:51.979028] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:33:56.982 [2024-10-01 20:31:51.979063] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:33:56.982 [2024-10-01 20:31:51.979075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:56.982 [2024-10-01 20:31:51.979083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:33:56.982 [2024-10-01 20:31:51.979091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.964 ms 00:33:56.982 [2024-10-01 20:31:51.979098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:56.982 [2024-10-01 20:31:52.003402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:56.982 [2024-10-01 20:31:52.003453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:33:56.982 [2024-10-01 20:31:52.003465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.266 ms 00:33:56.982 [2024-10-01 20:31:52.003473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:56.982 [2024-10-01 20:31:52.014833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:56.982 [2024-10-01 20:31:52.014872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:33:56.982 [2024-10-01 20:31:52.014884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.313 ms 00:33:56.982 [2024-10-01 20:31:52.014891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:56.982 [2024-10-01 20:31:52.026069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:56.982 [2024-10-01 20:31:52.026109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:33:56.983 [2024-10-01 20:31:52.026119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.133 ms 00:33:56.983 [2024-10-01 20:31:52.026127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:56.983 [2024-10-01 20:31:52.026774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:56.983 [2024-10-01 20:31:52.026798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:33:56.983 [2024-10-01 20:31:52.026807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.556 ms 00:33:56.983 [2024-10-01 20:31:52.026815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:56.983 [2024-10-01 20:31:52.082525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:56.983 [2024-10-01 20:31:52.082578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:33:56.983 [2024-10-01 20:31:52.082590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 55.693 ms 00:33:56.983 [2024-10-01 20:31:52.082598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:56.983 [2024-10-01 20:31:52.093409] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:33:56.983 [2024-10-01 20:31:52.096060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:56.983 [2024-10-01 20:31:52.096092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:33:56.983 [2024-10-01 20:31:52.096104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.415 ms 00:33:56.983 [2024-10-01 20:31:52.096116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:56.983 [2024-10-01 20:31:52.096209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:56.983 [2024-10-01 20:31:52.096220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:33:56.983 [2024-10-01 20:31:52.096228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:33:56.983 [2024-10-01 20:31:52.096235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:56.983 [2024-10-01 20:31:52.096298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:56.983 [2024-10-01 20:31:52.096307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:33:56.983 [2024-10-01 20:31:52.096316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:33:56.983 [2024-10-01 20:31:52.096322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:56.983 [2024-10-01 20:31:52.096342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:56.983 [2024-10-01 20:31:52.096350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:33:56.983 [2024-10-01 20:31:52.096358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:33:56.983 [2024-10-01 20:31:52.096365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:56.983 [2024-10-01 20:31:52.096393] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:33:56.983 [2024-10-01 20:31:52.096402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:56.983 [2024-10-01 20:31:52.096410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:33:56.983 [2024-10-01 20:31:52.096420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:33:56.983 [2024-10-01 20:31:52.096427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:56.983 [2024-10-01 20:31:52.119248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:56.983 [2024-10-01 20:31:52.119290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:33:56.983 [2024-10-01 20:31:52.119302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.804 ms 00:33:56.983 [2024-10-01 20:31:52.119310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:56.983 [2024-10-01 20:31:52.119385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:56.983 [2024-10-01 20:31:52.119395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:33:56.983 [2024-10-01 20:31:52.119404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:33:56.983 [2024-10-01 20:31:52.119411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:33:56.983 [2024-10-01 20:31:52.120290] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 250.729 ms, result 0 00:34:19.550  Copying: 43/1024 [MB] (43 MBps) Copying: 87/1024 [MB] (43 MBps) Copying: 133/1024 [MB] (46 MBps) Copying: 180/1024 [MB] (46 MBps) Copying: 227/1024 [MB] (47 MBps) Copying: 273/1024 [MB] (46 MBps) Copying: 320/1024 [MB] (46 MBps) Copying: 366/1024 [MB] (46 MBps) Copying: 412/1024 [MB] (45 MBps) Copying: 458/1024 [MB] (45 MBps) Copying: 501/1024 [MB] (43 MBps) Copying: 546/1024 [MB] (45 MBps) Copying: 590/1024 [MB] (43 MBps) Copying: 637/1024 [MB] (47 MBps) Copying: 680/1024 [MB] (42 MBps) Copying: 725/1024 [MB] (44 MBps) Copying: 771/1024 [MB] (45 MBps) Copying: 818/1024 [MB] (47 MBps) Copying: 866/1024 [MB] (47 MBps) Copying: 912/1024 [MB] (46 MBps) Copying: 960/1024 [MB] (47 MBps) Copying: 1008/1024 [MB] (47 MBps) Copying: 1024/1024 [MB] (average 45 MBps)[2024-10-01 20:32:14.643636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:19.550 [2024-10-01 20:32:14.643714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:34:19.550 [2024-10-01 20:32:14.643732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:34:19.550 [2024-10-01 20:32:14.643750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:19.550 [2024-10-01 20:32:14.643772] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:34:19.550 [2024-10-01 20:32:14.646475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:19.550 [2024-10-01 20:32:14.646515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:34:19.550 [2024-10-01 20:32:14.646532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.687 ms 00:34:19.550 [2024-10-01 20:32:14.646541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:19.550 [2024-10-01 20:32:14.646773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:19.550 [2024-10-01 20:32:14.646784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:34:19.550 [2024-10-01 20:32:14.646792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.207 ms 00:34:19.550 [2024-10-01 20:32:14.646799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:19.550 [2024-10-01 20:32:14.651294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:19.550 [2024-10-01 20:32:14.651326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:34:19.550 [2024-10-01 20:32:14.651337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.477 ms 00:34:19.550 [2024-10-01 20:32:14.651346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:19.550 [2024-10-01 20:32:14.657707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:19.550 [2024-10-01 20:32:14.657740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:34:19.550 [2024-10-01 20:32:14.657750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.342 ms 00:34:19.550 [2024-10-01 20:32:14.657759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:19.550 [2024-10-01 20:32:14.682337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:19.550 [2024-10-01 20:32:14.682387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV 
cache metadata 00:34:19.550 [2024-10-01 20:32:14.682398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.517 ms 00:34:19.550 [2024-10-01 20:32:14.682406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:19.550 [2024-10-01 20:32:14.698452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:19.550 [2024-10-01 20:32:14.698503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:34:19.550 [2024-10-01 20:32:14.698516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.020 ms 00:34:19.550 [2024-10-01 20:32:14.698524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:19.550 [2024-10-01 20:32:14.698660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:19.550 [2024-10-01 20:32:14.698670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:34:19.550 [2024-10-01 20:32:14.698678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.097 ms 00:34:19.550 [2024-10-01 20:32:14.698686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:19.550 [2024-10-01 20:32:14.721464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:19.550 [2024-10-01 20:32:14.721504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:34:19.550 [2024-10-01 20:32:14.721517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.746 ms 00:34:19.550 [2024-10-01 20:32:14.721525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:19.550 [2024-10-01 20:32:14.743955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:19.550 [2024-10-01 20:32:14.743992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:34:19.550 [2024-10-01 20:32:14.744002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.409 ms 00:34:19.550 [2024-10-01 20:32:14.744010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:19.809 [2024-10-01 20:32:14.765925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:19.809 [2024-10-01 20:32:14.765979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:34:19.809 [2024-10-01 20:32:14.765989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.896 ms 00:34:19.809 [2024-10-01 20:32:14.765997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:19.809 [2024-10-01 20:32:14.787909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:19.809 [2024-10-01 20:32:14.787961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:34:19.809 [2024-10-01 20:32:14.787972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.867 ms 00:34:19.809 [2024-10-01 20:32:14.787980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:19.809 [2024-10-01 20:32:14.788007] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:34:19.809 [2024-10-01 20:32:14.788022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:34:19.809 [2024-10-01 20:32:14.788032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:34:19.809 [2024-10-01 20:32:14.788040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:34:19.809 [2024-10-01 20:32:14.788049] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:34:19.809 [2024-10-01 20:32:14.788057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:34:19.809 [2024-10-01 20:32:14.788065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:34:19.809 [2024-10-01 20:32:14.788074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:34:19.809 [2024-10-01 20:32:14.788081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:34:19.809 [2024-10-01 20:32:14.788089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:34:19.809 [2024-10-01 20:32:14.788097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:34:19.809 [2024-10-01 20:32:14.788105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:34:19.809 [2024-10-01 20:32:14.788112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:34:19.809 [2024-10-01 20:32:14.788120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:34:19.809 [2024-10-01 20:32:14.788128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:34:19.809 [2024-10-01 20:32:14.788136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:34:19.809 [2024-10-01 20:32:14.788143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:34:19.809 [2024-10-01 20:32:14.788151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:34:19.809 [2024-10-01 20:32:14.788158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:34:19.809 [2024-10-01 20:32:14.788165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:34:19.809 [2024-10-01 20:32:14.788173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:34:19.809 [2024-10-01 20:32:14.788180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:34:19.809 [2024-10-01 20:32:14.788188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:34:19.809 [2024-10-01 20:32:14.788195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:34:19.809 [2024-10-01 20:32:14.788202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:34:19.809 [2024-10-01 20:32:14.788210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:34:19.809 [2024-10-01 20:32:14.788217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:34:19.809 [2024-10-01 20:32:14.788225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:34:19.809 [2024-10-01 20:32:14.788232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:34:19.809 [2024-10-01 
20:32:14.788239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:34:19.809 [2024-10-01 20:32:14.788246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:34:19.809 [2024-10-01 20:32:14.788254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:34:19.809 [2024-10-01 20:32:14.788262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:34:19.809 [2024-10-01 20:32:14.788270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:34:19.809 [2024-10-01 20:32:14.788277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:34:19.809 [2024-10-01 20:32:14.788285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:34:19.809 [2024-10-01 20:32:14.788293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:34:19.809 [2024-10-01 20:32:14.788300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:34:19.809 [2024-10-01 20:32:14.788307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:34:19.809 [2024-10-01 20:32:14.788314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:34:19.809 [2024-10-01 20:32:14.788322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:34:19.809 [2024-10-01 20:32:14.788329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:34:19.809 [2024-10-01 20:32:14.788336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:34:19.809 [2024-10-01 20:32:14.788343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:34:19.809 [2024-10-01 20:32:14.788350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:34:19.809 [2024-10-01 20:32:14.788357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:34:19.809 [2024-10-01 20:32:14.788365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:34:19.809 [2024-10-01 20:32:14.788372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:34:19.809 [2024-10-01 20:32:14.788379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:34:19.809 [2024-10-01 20:32:14.788387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:34:19.809 [2024-10-01 20:32:14.788394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:34:19.809 [2024-10-01 20:32:14.788401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:34:19.809 [2024-10-01 20:32:14.788409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:34:19.809 [2024-10-01 20:32:14.788418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 
00:34:19.809 [2024-10-01 20:32:14.788425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:34:19.810 [2024-10-01 20:32:14.788433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:34:19.810 [2024-10-01 20:32:14.788441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:34:19.810 [2024-10-01 20:32:14.788448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:34:19.810 [2024-10-01 20:32:14.788455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:34:19.810 [2024-10-01 20:32:14.788462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:34:19.810 [2024-10-01 20:32:14.788469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:34:19.810 [2024-10-01 20:32:14.788478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:34:19.810 [2024-10-01 20:32:14.788485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:34:19.810 [2024-10-01 20:32:14.788492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:34:19.810 [2024-10-01 20:32:14.788500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:34:19.810 [2024-10-01 20:32:14.788508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:34:19.810 [2024-10-01 20:32:14.788515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:34:19.810 [2024-10-01 20:32:14.788523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:34:19.810 [2024-10-01 20:32:14.788530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:34:19.810 [2024-10-01 20:32:14.788537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:34:19.810 [2024-10-01 20:32:14.788544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:34:19.810 [2024-10-01 20:32:14.788551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:34:19.810 [2024-10-01 20:32:14.788559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:34:19.810 [2024-10-01 20:32:14.788566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:34:19.810 [2024-10-01 20:32:14.788574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:34:19.810 [2024-10-01 20:32:14.788581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:34:19.810 [2024-10-01 20:32:14.788589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:34:19.810 [2024-10-01 20:32:14.788596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:34:19.810 [2024-10-01 20:32:14.788604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 
wr_cnt: 0 state: free 00:34:19.810 [2024-10-01 20:32:14.788611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:34:19.810 [2024-10-01 20:32:14.788618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:34:19.810 [2024-10-01 20:32:14.788626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:34:19.810 [2024-10-01 20:32:14.788633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:34:19.810 [2024-10-01 20:32:14.788640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:34:19.810 [2024-10-01 20:32:14.788648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:34:19.810 [2024-10-01 20:32:14.788655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:34:19.810 [2024-10-01 20:32:14.788662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:34:19.810 [2024-10-01 20:32:14.788669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:34:19.810 [2024-10-01 20:32:14.788676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:34:19.810 [2024-10-01 20:32:14.788684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:34:19.810 [2024-10-01 20:32:14.788700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:34:19.810 [2024-10-01 20:32:14.788708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:34:19.810 [2024-10-01 20:32:14.788715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:34:19.810 [2024-10-01 20:32:14.788723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:34:19.810 [2024-10-01 20:32:14.788730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:34:19.810 [2024-10-01 20:32:14.788737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:34:19.810 [2024-10-01 20:32:14.788746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:34:19.810 [2024-10-01 20:32:14.788754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:34:19.810 [2024-10-01 20:32:14.788761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:34:19.810 [2024-10-01 20:32:14.788769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:34:19.810 [2024-10-01 20:32:14.788777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:34:19.810 [2024-10-01 20:32:14.788793] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:34:19.810 [2024-10-01 20:32:14.788800] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 73992bbb-6107-4f6f-8507-ce356e0255e3 00:34:19.810 [2024-10-01 20:32:14.788809] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:34:19.810 
[2024-10-01 20:32:14.788816] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:34:19.810 [2024-10-01 20:32:14.788823] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:34:19.810 [2024-10-01 20:32:14.788833] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:34:19.810 [2024-10-01 20:32:14.788843] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:34:19.810 [2024-10-01 20:32:14.788860] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:34:19.810 [2024-10-01 20:32:14.788872] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:34:19.810 [2024-10-01 20:32:14.788883] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:34:19.810 [2024-10-01 20:32:14.788890] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:34:19.810 [2024-10-01 20:32:14.788897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:19.810 [2024-10-01 20:32:14.788911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:34:19.810 [2024-10-01 20:32:14.788919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.891 ms 00:34:19.810 [2024-10-01 20:32:14.788927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:19.810 [2024-10-01 20:32:14.801515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:19.810 [2024-10-01 20:32:14.801561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:34:19.810 [2024-10-01 20:32:14.801573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.569 ms 00:34:19.810 [2024-10-01 20:32:14.801587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:19.810 [2024-10-01 20:32:14.801960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:19.810 [2024-10-01 20:32:14.801987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:34:19.810 [2024-10-01 20:32:14.801997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.348 ms 00:34:19.810 [2024-10-01 20:32:14.802004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:19.810 [2024-10-01 20:32:14.831369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:19.810 [2024-10-01 20:32:14.831422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:34:19.810 [2024-10-01 20:32:14.831435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:19.810 [2024-10-01 20:32:14.831449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:19.810 [2024-10-01 20:32:14.831517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:19.810 [2024-10-01 20:32:14.831527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:34:19.810 [2024-10-01 20:32:14.831534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:19.810 [2024-10-01 20:32:14.831542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:19.810 [2024-10-01 20:32:14.831610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:19.810 [2024-10-01 20:32:14.831621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:34:19.810 [2024-10-01 20:32:14.831628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:19.810 [2024-10-01 20:32:14.831636] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:19.810 [2024-10-01 20:32:14.831654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:19.810 [2024-10-01 20:32:14.831662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:34:19.810 [2024-10-01 20:32:14.831669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:19.810 [2024-10-01 20:32:14.831677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:19.810 [2024-10-01 20:32:14.911182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:19.810 [2024-10-01 20:32:14.911236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:34:19.810 [2024-10-01 20:32:14.911249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:19.810 [2024-10-01 20:32:14.911264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:19.810 [2024-10-01 20:32:14.974427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:19.810 [2024-10-01 20:32:14.974480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:34:19.810 [2024-10-01 20:32:14.974493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:19.810 [2024-10-01 20:32:14.974501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:19.810 [2024-10-01 20:32:14.974570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:19.810 [2024-10-01 20:32:14.974579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:34:19.810 [2024-10-01 20:32:14.974587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:19.810 [2024-10-01 20:32:14.974595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:19.810 [2024-10-01 20:32:14.974632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:19.811 [2024-10-01 20:32:14.974640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:34:19.811 [2024-10-01 20:32:14.974648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:19.811 [2024-10-01 20:32:14.974656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:19.811 [2024-10-01 20:32:14.974755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:19.811 [2024-10-01 20:32:14.974766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:34:19.811 [2024-10-01 20:32:14.974774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:19.811 [2024-10-01 20:32:14.974781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:19.811 [2024-10-01 20:32:14.974810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:19.811 [2024-10-01 20:32:14.974823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:34:19.811 [2024-10-01 20:32:14.974831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:19.811 [2024-10-01 20:32:14.974839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:19.811 [2024-10-01 20:32:14.974869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:19.811 [2024-10-01 20:32:14.974887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:34:19.811 [2024-10-01 20:32:14.974895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.000 ms 00:34:19.811 [2024-10-01 20:32:14.974902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:19.811 [2024-10-01 20:32:14.974941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:19.811 [2024-10-01 20:32:14.974963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:34:19.811 [2024-10-01 20:32:14.974971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:19.811 [2024-10-01 20:32:14.974979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:19.811 [2024-10-01 20:32:14.975093] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 331.432 ms, result 0 00:34:21.183 00:34:21.183 00:34:21.183 20:32:16 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:34:23.710 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:34:23.710 20:32:18 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:34:23.710 [2024-10-01 20:32:18.399461] Starting SPDK v25.01-pre git sha1 0c2005fb5 / DPDK 24.03.0 initialization... 00:34:23.710 [2024-10-01 20:32:18.399596] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75977 ] 00:34:23.710 [2024-10-01 20:32:18.551020] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:23.710 [2024-10-01 20:32:18.747197] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:34:24.277 [2024-10-01 20:32:19.192027] bdev.c:8310:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:34:24.277 [2024-10-01 20:32:19.192095] bdev.c:8310:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:34:24.277 [2024-10-01 20:32:19.344503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:24.277 [2024-10-01 20:32:19.344566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:34:24.277 [2024-10-01 20:32:19.344580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:34:24.277 [2024-10-01 20:32:19.344593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:24.277 [2024-10-01 20:32:19.344646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:24.277 [2024-10-01 20:32:19.344657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:34:24.277 [2024-10-01 20:32:19.344665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:34:24.277 [2024-10-01 20:32:19.344673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:24.277 [2024-10-01 20:32:19.344707] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:34:24.277 [2024-10-01 20:32:19.345386] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:34:24.277 [2024-10-01 20:32:19.345416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:24.277 [2024-10-01 20:32:19.345424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:34:24.277 [2024-10-01 
20:32:19.345433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.728 ms 00:34:24.277 [2024-10-01 20:32:19.345441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:24.277 [2024-10-01 20:32:19.346800] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:34:24.277 [2024-10-01 20:32:19.359775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:24.277 [2024-10-01 20:32:19.359816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:34:24.277 [2024-10-01 20:32:19.359829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.976 ms 00:34:24.277 [2024-10-01 20:32:19.359837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:24.277 [2024-10-01 20:32:19.359896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:24.277 [2024-10-01 20:32:19.359905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:34:24.277 [2024-10-01 20:32:19.359913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:34:24.277 [2024-10-01 20:32:19.359921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:24.277 [2024-10-01 20:32:19.366820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:24.277 [2024-10-01 20:32:19.366858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:34:24.277 [2024-10-01 20:32:19.366869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.841 ms 00:34:24.277 [2024-10-01 20:32:19.366876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:24.277 [2024-10-01 20:32:19.366963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:24.277 [2024-10-01 20:32:19.366973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:34:24.277 [2024-10-01 20:32:19.366981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:34:24.277 [2024-10-01 20:32:19.366989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:24.277 [2024-10-01 20:32:19.367036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:24.277 [2024-10-01 20:32:19.367046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:34:24.277 [2024-10-01 20:32:19.367054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:34:24.277 [2024-10-01 20:32:19.367061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:24.277 [2024-10-01 20:32:19.367083] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:34:24.277 [2024-10-01 20:32:19.370640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:24.277 [2024-10-01 20:32:19.370671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:34:24.277 [2024-10-01 20:32:19.370680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.563 ms 00:34:24.277 [2024-10-01 20:32:19.370688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:24.277 [2024-10-01 20:32:19.370728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:24.277 [2024-10-01 20:32:19.370737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:34:24.277 [2024-10-01 20:32:19.370745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:34:24.277 [2024-10-01 
20:32:19.370752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:24.277 [2024-10-01 20:32:19.370782] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:34:24.277 [2024-10-01 20:32:19.370801] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:34:24.277 [2024-10-01 20:32:19.370836] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:34:24.277 [2024-10-01 20:32:19.370850] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:34:24.277 [2024-10-01 20:32:19.370953] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:34:24.277 [2024-10-01 20:32:19.370980] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:34:24.277 [2024-10-01 20:32:19.370991] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:34:24.277 [2024-10-01 20:32:19.371004] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:34:24.277 [2024-10-01 20:32:19.371013] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:34:24.277 [2024-10-01 20:32:19.371023] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:34:24.278 [2024-10-01 20:32:19.371030] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:34:24.278 [2024-10-01 20:32:19.371037] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:34:24.278 [2024-10-01 20:32:19.371045] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:34:24.278 [2024-10-01 20:32:19.371052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:24.278 [2024-10-01 20:32:19.371060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:34:24.278 [2024-10-01 20:32:19.371067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.273 ms 00:34:24.278 [2024-10-01 20:32:19.371074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:24.278 [2024-10-01 20:32:19.371157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:24.278 [2024-10-01 20:32:19.371173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:34:24.278 [2024-10-01 20:32:19.371181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:34:24.278 [2024-10-01 20:32:19.371188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:24.278 [2024-10-01 20:32:19.371302] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:34:24.278 [2024-10-01 20:32:19.371319] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:34:24.278 [2024-10-01 20:32:19.371328] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:34:24.278 [2024-10-01 20:32:19.371336] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:24.278 [2024-10-01 20:32:19.371344] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:34:24.278 [2024-10-01 20:32:19.371351] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:34:24.278 [2024-10-01 20:32:19.371358] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 
blocks: 80.00 MiB 00:34:24.278 [2024-10-01 20:32:19.371365] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:34:24.278 [2024-10-01 20:32:19.371372] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:34:24.278 [2024-10-01 20:32:19.371378] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:34:24.278 [2024-10-01 20:32:19.371385] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:34:24.278 [2024-10-01 20:32:19.371391] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:34:24.278 [2024-10-01 20:32:19.371398] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:34:24.278 [2024-10-01 20:32:19.371410] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:34:24.278 [2024-10-01 20:32:19.371419] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:34:24.278 [2024-10-01 20:32:19.371426] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:24.278 [2024-10-01 20:32:19.371432] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:34:24.278 [2024-10-01 20:32:19.371439] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:34:24.278 [2024-10-01 20:32:19.371445] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:24.278 [2024-10-01 20:32:19.371452] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:34:24.278 [2024-10-01 20:32:19.371459] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:34:24.278 [2024-10-01 20:32:19.371465] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:34:24.278 [2024-10-01 20:32:19.371471] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:34:24.278 [2024-10-01 20:32:19.371477] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:34:24.278 [2024-10-01 20:32:19.371484] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:34:24.278 [2024-10-01 20:32:19.371490] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:34:24.278 [2024-10-01 20:32:19.371496] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:34:24.278 [2024-10-01 20:32:19.371503] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:34:24.278 [2024-10-01 20:32:19.371509] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:34:24.278 [2024-10-01 20:32:19.371516] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:34:24.278 [2024-10-01 20:32:19.371522] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:34:24.278 [2024-10-01 20:32:19.371529] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:34:24.278 [2024-10-01 20:32:19.371535] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:34:24.278 [2024-10-01 20:32:19.371541] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:34:24.278 [2024-10-01 20:32:19.371547] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:34:24.278 [2024-10-01 20:32:19.371553] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:34:24.278 [2024-10-01 20:32:19.371559] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:34:24.278 [2024-10-01 20:32:19.371566] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:34:24.278 [2024-10-01 20:32:19.371572] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:34:24.278 [2024-10-01 20:32:19.371578] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:24.278 [2024-10-01 20:32:19.371585] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:34:24.278 [2024-10-01 20:32:19.371591] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:34:24.278 [2024-10-01 20:32:19.371597] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:24.278 [2024-10-01 20:32:19.371603] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:34:24.278 [2024-10-01 20:32:19.371610] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:34:24.278 [2024-10-01 20:32:19.371620] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:34:24.278 [2024-10-01 20:32:19.371628] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:24.278 [2024-10-01 20:32:19.371635] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:34:24.278 [2024-10-01 20:32:19.371642] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:34:24.278 [2024-10-01 20:32:19.371648] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:34:24.278 [2024-10-01 20:32:19.371655] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:34:24.278 [2024-10-01 20:32:19.371661] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:34:24.278 [2024-10-01 20:32:19.371668] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:34:24.278 [2024-10-01 20:32:19.371676] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:34:24.278 [2024-10-01 20:32:19.371685] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:34:24.278 [2024-10-01 20:32:19.371721] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:34:24.278 [2024-10-01 20:32:19.371729] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:34:24.278 [2024-10-01 20:32:19.371736] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:34:24.278 [2024-10-01 20:32:19.371743] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:34:24.278 [2024-10-01 20:32:19.371750] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:34:24.278 [2024-10-01 20:32:19.371757] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:34:24.278 [2024-10-01 20:32:19.371764] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:34:24.278 [2024-10-01 20:32:19.371771] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:34:24.278 [2024-10-01 20:32:19.371778] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:34:24.278 [2024-10-01 
20:32:19.371785] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:34:24.278 [2024-10-01 20:32:19.371792] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:34:24.278 [2024-10-01 20:32:19.371799] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:34:24.278 [2024-10-01 20:32:19.371806] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:34:24.278 [2024-10-01 20:32:19.371813] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:34:24.278 [2024-10-01 20:32:19.371820] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:34:24.278 [2024-10-01 20:32:19.371827] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:34:24.278 [2024-10-01 20:32:19.371835] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:34:24.279 [2024-10-01 20:32:19.371843] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:34:24.279 [2024-10-01 20:32:19.371851] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:34:24.279 [2024-10-01 20:32:19.371859] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:34:24.279 [2024-10-01 20:32:19.371866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:24.279 [2024-10-01 20:32:19.371874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:34:24.279 [2024-10-01 20:32:19.371881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.632 ms 00:34:24.279 [2024-10-01 20:32:19.371888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:24.279 [2024-10-01 20:32:19.399102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:24.279 [2024-10-01 20:32:19.399154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:34:24.279 [2024-10-01 20:32:19.399167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.167 ms 00:34:24.279 [2024-10-01 20:32:19.399175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:24.279 [2024-10-01 20:32:19.399270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:24.279 [2024-10-01 20:32:19.399279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:34:24.279 [2024-10-01 20:32:19.399288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:34:24.279 [2024-10-01 20:32:19.399295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:24.279 [2024-10-01 20:32:19.431866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:24.279 [2024-10-01 20:32:19.431913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:34:24.279 [2024-10-01 20:32:19.431926] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 32.508 ms 00:34:24.279 [2024-10-01 20:32:19.431934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:24.279 [2024-10-01 20:32:19.431976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:24.279 [2024-10-01 20:32:19.431985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:34:24.279 [2024-10-01 20:32:19.431993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:34:24.279 [2024-10-01 20:32:19.432001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:24.279 [2024-10-01 20:32:19.432436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:24.279 [2024-10-01 20:32:19.432461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:34:24.279 [2024-10-01 20:32:19.432470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.382 ms 00:34:24.279 [2024-10-01 20:32:19.432481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:24.279 [2024-10-01 20:32:19.432601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:24.279 [2024-10-01 20:32:19.432610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:34:24.279 [2024-10-01 20:32:19.432618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.103 ms 00:34:24.279 [2024-10-01 20:32:19.432625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:24.279 [2024-10-01 20:32:19.445922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:24.279 [2024-10-01 20:32:19.445954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:34:24.279 [2024-10-01 20:32:19.445964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.278 ms 00:34:24.279 [2024-10-01 20:32:19.445971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:24.279 [2024-10-01 20:32:19.459048] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:34:24.279 [2024-10-01 20:32:19.459088] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:34:24.279 [2024-10-01 20:32:19.459101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:24.279 [2024-10-01 20:32:19.459109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:34:24.279 [2024-10-01 20:32:19.459119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.012 ms 00:34:24.279 [2024-10-01 20:32:19.459126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:24.279 [2024-10-01 20:32:19.483799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:24.279 [2024-10-01 20:32:19.483855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:34:24.279 [2024-10-01 20:32:19.483867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.542 ms 00:34:24.279 [2024-10-01 20:32:19.483875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:24.537 [2024-10-01 20:32:19.495862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:24.537 [2024-10-01 20:32:19.495906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:34:24.537 [2024-10-01 20:32:19.495917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.927 ms 00:34:24.537 [2024-10-01 
20:32:19.495924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:24.537 [2024-10-01 20:32:19.507782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:24.537 [2024-10-01 20:32:19.507835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:34:24.537 [2024-10-01 20:32:19.507847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.808 ms 00:34:24.537 [2024-10-01 20:32:19.507855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:24.537 [2024-10-01 20:32:19.508530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:24.537 [2024-10-01 20:32:19.508556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:34:24.537 [2024-10-01 20:32:19.508565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.559 ms 00:34:24.537 [2024-10-01 20:32:19.508573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:24.537 [2024-10-01 20:32:19.565349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:24.537 [2024-10-01 20:32:19.565404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:34:24.537 [2024-10-01 20:32:19.565418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 56.759 ms 00:34:24.537 [2024-10-01 20:32:19.565425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:24.537 [2024-10-01 20:32:19.576748] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:34:24.537 [2024-10-01 20:32:19.579602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:24.537 [2024-10-01 20:32:19.579638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:34:24.537 [2024-10-01 20:32:19.579651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.119 ms 00:34:24.537 [2024-10-01 20:32:19.579665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:24.537 [2024-10-01 20:32:19.579782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:24.537 [2024-10-01 20:32:19.579794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:34:24.537 [2024-10-01 20:32:19.579803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:34:24.537 [2024-10-01 20:32:19.579811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:24.537 [2024-10-01 20:32:19.579875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:24.537 [2024-10-01 20:32:19.579891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:34:24.537 [2024-10-01 20:32:19.579900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:34:24.537 [2024-10-01 20:32:19.579908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:24.537 [2024-10-01 20:32:19.579930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:24.537 [2024-10-01 20:32:19.579938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:34:24.537 [2024-10-01 20:32:19.579946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:34:24.537 [2024-10-01 20:32:19.579953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:24.537 [2024-10-01 20:32:19.579983] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:34:24.537 [2024-10-01 
20:32:19.579992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:24.537 [2024-10-01 20:32:19.579999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:34:24.537 [2024-10-01 20:32:19.580010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:34:24.537 [2024-10-01 20:32:19.580017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:24.537 [2024-10-01 20:32:19.604711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:24.537 [2024-10-01 20:32:19.604757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:34:24.537 [2024-10-01 20:32:19.604771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.676 ms 00:34:24.537 [2024-10-01 20:32:19.604779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:24.537 [2024-10-01 20:32:19.604854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:24.537 [2024-10-01 20:32:19.604865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:34:24.537 [2024-10-01 20:32:19.604873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:34:24.537 [2024-10-01 20:32:19.604880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:24.537 [2024-10-01 20:32:19.605882] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 260.968 ms, result 0 00:34:48.853  Copying: 40/1024 [MB] (40 MBps) Copying: 81/1024 [MB] (40 MBps) Copying: 123/1024 [MB] (41 MBps) Copying: 163/1024 [MB] (40 MBps) Copying: 201/1024 [MB] (38 MBps) Copying: 242/1024 [MB] (40 MBps) Copying: 275/1024 [MB] (33 MBps) Copying: 316/1024 [MB] (40 MBps) Copying: 355/1024 [MB] (39 MBps) Copying: 393/1024 [MB] (38 MBps) Copying: 428/1024 [MB] (34 MBps) Copying: 474/1024 [MB] (45 MBps) Copying: 520/1024 [MB] (45 MBps) Copying: 565/1024 [MB] (45 MBps) Copying: 609/1024 [MB] (44 MBps) Copying: 655/1024 [MB] (45 MBps) Copying: 700/1024 [MB] (45 MBps) Copying: 745/1024 [MB] (45 MBps) Copying: 791/1024 [MB] (45 MBps) Copying: 837/1024 [MB] (46 MBps) Copying: 882/1024 [MB] (45 MBps) Copying: 925/1024 [MB] (42 MBps) Copying: 970/1024 [MB] (44 MBps) Copying: 1013/1024 [MB] (43 MBps) Copying: 1024/1024 [MB] (average 42 MBps)[2024-10-01 20:32:43.850118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:48.853 [2024-10-01 20:32:43.850165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:34:48.853 [2024-10-01 20:32:43.850179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:34:48.853 [2024-10-01 20:32:43.850197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:48.853 [2024-10-01 20:32:43.850218] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:34:48.853 [2024-10-01 20:32:43.852964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:48.853 [2024-10-01 20:32:43.852998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:34:48.853 [2024-10-01 20:32:43.853008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.731 ms 00:34:48.853 [2024-10-01 20:32:43.853017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:48.853 [2024-10-01 20:32:43.854484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:48.853 [2024-10-01 20:32:43.854517] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:34:48.853 [2024-10-01 20:32:43.854526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.446 ms 00:34:48.853 [2024-10-01 20:32:43.854534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:48.853 [2024-10-01 20:32:43.868046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:48.853 [2024-10-01 20:32:43.868086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:34:48.853 [2024-10-01 20:32:43.868097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.492 ms 00:34:48.853 [2024-10-01 20:32:43.868104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:48.853 [2024-10-01 20:32:43.874233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:48.853 [2024-10-01 20:32:43.874261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:34:48.853 [2024-10-01 20:32:43.874270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.101 ms 00:34:48.853 [2024-10-01 20:32:43.874279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:48.853 [2024-10-01 20:32:43.897998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:48.853 [2024-10-01 20:32:43.898036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:34:48.853 [2024-10-01 20:32:43.898047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.640 ms 00:34:48.853 [2024-10-01 20:32:43.898055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:48.853 [2024-10-01 20:32:43.911229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:48.853 [2024-10-01 20:32:43.911265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:34:48.853 [2024-10-01 20:32:43.911276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.141 ms 00:34:48.853 [2024-10-01 20:32:43.911285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:48.853 [2024-10-01 20:32:43.911406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:48.853 [2024-10-01 20:32:43.911415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:34:48.853 [2024-10-01 20:32:43.911424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:34:48.853 [2024-10-01 20:32:43.911432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:48.853 [2024-10-01 20:32:43.934081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:48.853 [2024-10-01 20:32:43.934111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:34:48.853 [2024-10-01 20:32:43.934120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.636 ms 00:34:48.853 [2024-10-01 20:32:43.934128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:48.853 [2024-10-01 20:32:43.956380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:48.853 [2024-10-01 20:32:43.956415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:34:48.853 [2024-10-01 20:32:43.956425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.223 ms 00:34:48.853 [2024-10-01 20:32:43.956433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:48.853 [2024-10-01 20:32:43.977939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:34:48.853 [2024-10-01 20:32:43.977970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:34:48.853 [2024-10-01 20:32:43.977980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.475 ms 00:34:48.853 [2024-10-01 20:32:43.977988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:48.853 [2024-10-01 20:32:43.999890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:48.853 [2024-10-01 20:32:43.999920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:34:48.853 [2024-10-01 20:32:43.999929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.833 ms 00:34:48.853 [2024-10-01 20:32:43.999937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:48.853 [2024-10-01 20:32:43.999966] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:34:48.853 [2024-10-01 20:32:43.999980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:34:48.853 [2024-10-01 20:32:43.999989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:34:48.853 [2024-10-01 20:32:43.999997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:34:48.853 [2024-10-01 20:32:44.000005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:34:48.853 [2024-10-01 20:32:44.000012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:34:48.853 [2024-10-01 20:32:44.000019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:34:48.853 [2024-10-01 20:32:44.000027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:34:48.853 [2024-10-01 20:32:44.000034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:34:48.853 [2024-10-01 20:32:44.000042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:34:48.854 [2024-10-01 20:32:44.000049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:34:48.854 [2024-10-01 20:32:44.000057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:34:48.854 [2024-10-01 20:32:44.000064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:34:48.854 [2024-10-01 20:32:44.000071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:34:48.854 [2024-10-01 20:32:44.000078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:34:48.854 [2024-10-01 20:32:44.000086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:34:48.854 [2024-10-01 20:32:44.000093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:34:48.854 [2024-10-01 20:32:44.000100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:34:48.854 [2024-10-01 20:32:44.000107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:34:48.854 [2024-10-01 20:32:44.000114] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:34:48.854 [2024-10-01 20:32:44.000122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:34:48.854 [2024-10-01 20:32:44.000128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:34:48.854 [2024-10-01 20:32:44.000136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:34:48.854 [2024-10-01 20:32:44.000143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:34:48.854 [2024-10-01 20:32:44.000150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:34:48.854 [2024-10-01 20:32:44.000157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:34:48.854 [2024-10-01 20:32:44.000164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:34:48.854 [2024-10-01 20:32:44.000173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:34:48.854 [2024-10-01 20:32:44.000180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:34:48.854 [2024-10-01 20:32:44.000187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:34:48.854 [2024-10-01 20:32:44.000195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:34:48.854 [2024-10-01 20:32:44.000203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:34:48.854 [2024-10-01 20:32:44.000211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:34:48.854 [2024-10-01 20:32:44.000218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:34:48.854 [2024-10-01 20:32:44.000225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:34:48.854 [2024-10-01 20:32:44.000232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:34:48.854 [2024-10-01 20:32:44.000240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:34:48.854 [2024-10-01 20:32:44.000247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:34:48.854 [2024-10-01 20:32:44.000254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:34:48.854 [2024-10-01 20:32:44.000261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:34:48.854 [2024-10-01 20:32:44.000268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:34:48.854 [2024-10-01 20:32:44.000275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:34:48.854 [2024-10-01 20:32:44.000282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:34:48.854 [2024-10-01 20:32:44.000290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:34:48.854 [2024-10-01 
20:32:44.000297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:34:48.854 [2024-10-01 20:32:44.000304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:34:48.854 [2024-10-01 20:32:44.000311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:34:48.854 [2024-10-01 20:32:44.000319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:34:48.854 [2024-10-01 20:32:44.000326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:34:48.854 [2024-10-01 20:32:44.000333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:34:48.854 [2024-10-01 20:32:44.000340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:34:48.854 [2024-10-01 20:32:44.000348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:34:48.854 [2024-10-01 20:32:44.000355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:34:48.854 [2024-10-01 20:32:44.000362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:34:48.854 [2024-10-01 20:32:44.000369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:34:48.854 [2024-10-01 20:32:44.000376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:34:48.854 [2024-10-01 20:32:44.000383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:34:48.854 [2024-10-01 20:32:44.000391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:34:48.854 [2024-10-01 20:32:44.000398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:34:48.854 [2024-10-01 20:32:44.000404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:34:48.854 [2024-10-01 20:32:44.000411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:34:48.854 [2024-10-01 20:32:44.000419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:34:48.854 [2024-10-01 20:32:44.000426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:34:48.854 [2024-10-01 20:32:44.000434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:34:48.854 [2024-10-01 20:32:44.000441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:34:48.854 [2024-10-01 20:32:44.000449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:34:48.854 [2024-10-01 20:32:44.000456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:34:48.854 [2024-10-01 20:32:44.000463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:34:48.854 [2024-10-01 20:32:44.000470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 
00:34:48.854 [2024-10-01 20:32:44.000477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:34:48.854 [2024-10-01 20:32:44.000484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:34:48.854 [2024-10-01 20:32:44.000491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:34:48.854 [2024-10-01 20:32:44.000499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:34:48.854 [2024-10-01 20:32:44.000506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:34:48.854 [2024-10-01 20:32:44.000513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:34:48.854 [2024-10-01 20:32:44.000520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:34:48.854 [2024-10-01 20:32:44.000528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:34:48.854 [2024-10-01 20:32:44.000535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:34:48.854 [2024-10-01 20:32:44.000542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:34:48.854 [2024-10-01 20:32:44.000549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:34:48.854 [2024-10-01 20:32:44.000556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:34:48.854 [2024-10-01 20:32:44.000563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:34:48.854 [2024-10-01 20:32:44.000571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:34:48.854 [2024-10-01 20:32:44.000578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:34:48.854 [2024-10-01 20:32:44.000585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:34:48.854 [2024-10-01 20:32:44.000593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:34:48.854 [2024-10-01 20:32:44.000600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:34:48.854 [2024-10-01 20:32:44.000607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:34:48.854 [2024-10-01 20:32:44.000614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:34:48.854 [2024-10-01 20:32:44.000621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:34:48.854 [2024-10-01 20:32:44.000628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:34:48.854 [2024-10-01 20:32:44.000635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:34:48.854 [2024-10-01 20:32:44.000642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:34:48.854 [2024-10-01 20:32:44.000649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 
wr_cnt: 0 state: free 00:34:48.854 [2024-10-01 20:32:44.000657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:34:48.854 [2024-10-01 20:32:44.000665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:34:48.854 [2024-10-01 20:32:44.000672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:34:48.854 [2024-10-01 20:32:44.000679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:34:48.854 [2024-10-01 20:32:44.000686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:34:48.854 [2024-10-01 20:32:44.000703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:34:48.855 [2024-10-01 20:32:44.000711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:34:48.855 [2024-10-01 20:32:44.000726] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:34:48.855 [2024-10-01 20:32:44.000734] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 73992bbb-6107-4f6f-8507-ce356e0255e3 00:34:48.855 [2024-10-01 20:32:44.000742] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:34:48.855 [2024-10-01 20:32:44.000750] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:34:48.855 [2024-10-01 20:32:44.000757] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:34:48.855 [2024-10-01 20:32:44.000765] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:34:48.855 [2024-10-01 20:32:44.000771] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:34:48.855 [2024-10-01 20:32:44.000782] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:34:48.855 [2024-10-01 20:32:44.000789] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:34:48.855 [2024-10-01 20:32:44.000796] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:34:48.855 [2024-10-01 20:32:44.000802] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:34:48.855 [2024-10-01 20:32:44.000809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:48.855 [2024-10-01 20:32:44.000823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:34:48.855 [2024-10-01 20:32:44.000831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.843 ms 00:34:48.855 [2024-10-01 20:32:44.000838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:48.855 [2024-10-01 20:32:44.013265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:48.855 [2024-10-01 20:32:44.013295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:34:48.855 [2024-10-01 20:32:44.013309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.411 ms 00:34:48.855 [2024-10-01 20:32:44.013319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:48.855 [2024-10-01 20:32:44.013661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:48.855 [2024-10-01 20:32:44.013675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:34:48.855 [2024-10-01 20:32:44.013684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.326 ms 00:34:48.855 
[2024-10-01 20:32:44.013704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:48.855 [2024-10-01 20:32:44.042239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:48.855 [2024-10-01 20:32:44.042275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:34:48.855 [2024-10-01 20:32:44.042288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:48.855 [2024-10-01 20:32:44.042296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:48.855 [2024-10-01 20:32:44.042351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:48.855 [2024-10-01 20:32:44.042359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:34:48.855 [2024-10-01 20:32:44.042367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:48.855 [2024-10-01 20:32:44.042374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:48.855 [2024-10-01 20:32:44.042425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:48.855 [2024-10-01 20:32:44.042434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:34:48.855 [2024-10-01 20:32:44.042443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:48.855 [2024-10-01 20:32:44.042453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:48.855 [2024-10-01 20:32:44.042468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:48.855 [2024-10-01 20:32:44.042476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:34:48.855 [2024-10-01 20:32:44.042483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:48.855 [2024-10-01 20:32:44.042490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:49.113 [2024-10-01 20:32:44.119973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:49.113 [2024-10-01 20:32:44.120018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:34:49.113 [2024-10-01 20:32:44.120028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:49.113 [2024-10-01 20:32:44.120040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:49.113 [2024-10-01 20:32:44.182396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:49.113 [2024-10-01 20:32:44.182442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:34:49.113 [2024-10-01 20:32:44.182452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:49.113 [2024-10-01 20:32:44.182459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:49.113 [2024-10-01 20:32:44.182524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:49.113 [2024-10-01 20:32:44.182533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:34:49.113 [2024-10-01 20:32:44.182541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:49.113 [2024-10-01 20:32:44.182549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:49.113 [2024-10-01 20:32:44.182593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:49.113 [2024-10-01 20:32:44.182602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:34:49.113 [2024-10-01 20:32:44.182610] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:49.113 [2024-10-01 20:32:44.182617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:49.113 [2024-10-01 20:32:44.182712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:49.113 [2024-10-01 20:32:44.182723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:34:49.113 [2024-10-01 20:32:44.182731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:49.113 [2024-10-01 20:32:44.182738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:49.113 [2024-10-01 20:32:44.182767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:49.113 [2024-10-01 20:32:44.182778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:34:49.113 [2024-10-01 20:32:44.182786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:49.113 [2024-10-01 20:32:44.182793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:49.113 [2024-10-01 20:32:44.182824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:49.113 [2024-10-01 20:32:44.182833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:34:49.113 [2024-10-01 20:32:44.182840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:49.113 [2024-10-01 20:32:44.182847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:49.113 [2024-10-01 20:32:44.182886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:49.113 [2024-10-01 20:32:44.182896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:34:49.113 [2024-10-01 20:32:44.182903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:49.113 [2024-10-01 20:32:44.182910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:49.113 [2024-10-01 20:32:44.183013] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 332.872 ms, result 0 00:34:51.014 00:34:51.014 00:34:51.014 20:32:45 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:34:51.014 [2024-10-01 20:32:46.044233] Starting SPDK v25.01-pre git sha1 0c2005fb5 / DPDK 24.03.0 initialization... 
00:34:51.014 [2024-10-01 20:32:46.044359] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76266 ] 00:34:51.014 [2024-10-01 20:32:46.192055] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:51.272 [2024-10-01 20:32:46.351567] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:34:51.530 [2024-10-01 20:32:46.719139] bdev.c:8310:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:34:51.530 [2024-10-01 20:32:46.719200] bdev.c:8310:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:34:51.790 [2024-10-01 20:32:46.866837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:51.790 [2024-10-01 20:32:46.866882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:34:51.790 [2024-10-01 20:32:46.866893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:34:51.790 [2024-10-01 20:32:46.866903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:51.790 [2024-10-01 20:32:46.866941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:51.790 [2024-10-01 20:32:46.866950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:34:51.790 [2024-10-01 20:32:46.866957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:34:51.790 [2024-10-01 20:32:46.866963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:51.790 [2024-10-01 20:32:46.866976] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:34:51.790 [2024-10-01 20:32:46.867544] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:34:51.790 [2024-10-01 20:32:46.867562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:51.790 [2024-10-01 20:32:46.867568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:34:51.790 [2024-10-01 20:32:46.867575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.589 ms 00:34:51.790 [2024-10-01 20:32:46.867582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:51.790 [2024-10-01 20:32:46.868833] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:34:51.790 [2024-10-01 20:32:46.878898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:51.790 [2024-10-01 20:32:46.878933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:34:51.790 [2024-10-01 20:32:46.878944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.066 ms 00:34:51.790 [2024-10-01 20:32:46.878951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:51.790 [2024-10-01 20:32:46.879000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:51.790 [2024-10-01 20:32:46.879008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:34:51.790 [2024-10-01 20:32:46.879015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:34:51.790 [2024-10-01 20:32:46.879021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:51.790 [2024-10-01 20:32:46.884887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:34:51.790 [2024-10-01 20:32:46.884918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:34:51.790 [2024-10-01 20:32:46.884926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.825 ms 00:34:51.790 [2024-10-01 20:32:46.884932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:51.790 [2024-10-01 20:32:46.884988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:51.790 [2024-10-01 20:32:46.884996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:34:51.790 [2024-10-01 20:32:46.885002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:34:51.790 [2024-10-01 20:32:46.885009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:51.790 [2024-10-01 20:32:46.885054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:51.790 [2024-10-01 20:32:46.885062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:34:51.790 [2024-10-01 20:32:46.885068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:34:51.790 [2024-10-01 20:32:46.885074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:51.790 [2024-10-01 20:32:46.885091] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:34:51.790 [2024-10-01 20:32:46.888029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:51.790 [2024-10-01 20:32:46.888054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:34:51.790 [2024-10-01 20:32:46.888062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.942 ms 00:34:51.790 [2024-10-01 20:32:46.888068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:51.790 [2024-10-01 20:32:46.888091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:51.790 [2024-10-01 20:32:46.888098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:34:51.790 [2024-10-01 20:32:46.888104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:34:51.790 [2024-10-01 20:32:46.888112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:51.790 [2024-10-01 20:32:46.888128] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:34:51.790 [2024-10-01 20:32:46.888143] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:34:51.790 [2024-10-01 20:32:46.888170] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:34:51.790 [2024-10-01 20:32:46.888182] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:34:51.790 [2024-10-01 20:32:46.888264] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:34:51.790 [2024-10-01 20:32:46.888273] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:34:51.790 [2024-10-01 20:32:46.888284] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:34:51.790 [2024-10-01 20:32:46.888292] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:34:51.790 [2024-10-01 20:32:46.888299] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:34:51.790 [2024-10-01 20:32:46.888306] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:34:51.790 [2024-10-01 20:32:46.888312] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:34:51.790 [2024-10-01 20:32:46.888318] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:34:51.790 [2024-10-01 20:32:46.888323] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:34:51.790 [2024-10-01 20:32:46.888330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:51.790 [2024-10-01 20:32:46.888336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:34:51.790 [2024-10-01 20:32:46.888342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.204 ms 00:34:51.790 [2024-10-01 20:32:46.888348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:51.790 [2024-10-01 20:32:46.888415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:51.790 [2024-10-01 20:32:46.888421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:34:51.790 [2024-10-01 20:32:46.888427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:34:51.790 [2024-10-01 20:32:46.888433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:51.790 [2024-10-01 20:32:46.888512] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:34:51.790 [2024-10-01 20:32:46.888527] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:34:51.790 [2024-10-01 20:32:46.888533] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:34:51.790 [2024-10-01 20:32:46.888539] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:51.790 [2024-10-01 20:32:46.888545] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:34:51.790 [2024-10-01 20:32:46.888551] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:34:51.791 [2024-10-01 20:32:46.888557] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:34:51.791 [2024-10-01 20:32:46.888562] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:34:51.791 [2024-10-01 20:32:46.888568] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:34:51.791 [2024-10-01 20:32:46.888573] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:34:51.791 [2024-10-01 20:32:46.888579] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:34:51.791 [2024-10-01 20:32:46.888584] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:34:51.791 [2024-10-01 20:32:46.888589] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:34:51.791 [2024-10-01 20:32:46.888599] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:34:51.791 [2024-10-01 20:32:46.888606] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:34:51.791 [2024-10-01 20:32:46.888612] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:51.791 [2024-10-01 20:32:46.888617] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:34:51.791 [2024-10-01 20:32:46.888622] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:34:51.791 [2024-10-01 20:32:46.888628] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:51.791 [2024-10-01 20:32:46.888633] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:34:51.791 [2024-10-01 20:32:46.888638] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:34:51.791 [2024-10-01 20:32:46.888643] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:34:51.791 [2024-10-01 20:32:46.888648] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:34:51.791 [2024-10-01 20:32:46.888654] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:34:51.791 [2024-10-01 20:32:46.888659] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:34:51.791 [2024-10-01 20:32:46.888665] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:34:51.791 [2024-10-01 20:32:46.888670] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:34:51.791 [2024-10-01 20:32:46.888675] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:34:51.791 [2024-10-01 20:32:46.888681] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:34:51.791 [2024-10-01 20:32:46.888686] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:34:51.791 [2024-10-01 20:32:46.888701] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:34:51.791 [2024-10-01 20:32:46.888707] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:34:51.791 [2024-10-01 20:32:46.888712] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:34:51.791 [2024-10-01 20:32:46.888717] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:34:51.791 [2024-10-01 20:32:46.888722] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:34:51.791 [2024-10-01 20:32:46.888728] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:34:51.791 [2024-10-01 20:32:46.888733] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:34:51.791 [2024-10-01 20:32:46.888738] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:34:51.791 [2024-10-01 20:32:46.888743] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:34:51.791 [2024-10-01 20:32:46.888748] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:51.791 [2024-10-01 20:32:46.888754] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:34:51.791 [2024-10-01 20:32:46.888759] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:34:51.791 [2024-10-01 20:32:46.888764] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:51.791 [2024-10-01 20:32:46.888769] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:34:51.791 [2024-10-01 20:32:46.888777] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:34:51.791 [2024-10-01 20:32:46.888783] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:34:51.791 [2024-10-01 20:32:46.888790] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:51.791 [2024-10-01 20:32:46.888796] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:34:51.791 [2024-10-01 20:32:46.888802] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:34:51.791 [2024-10-01 20:32:46.888807] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:34:51.791 
[2024-10-01 20:32:46.888813] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:34:51.791 [2024-10-01 20:32:46.888818] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:34:51.791 [2024-10-01 20:32:46.888823] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:34:51.791 [2024-10-01 20:32:46.888829] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:34:51.791 [2024-10-01 20:32:46.888836] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:34:51.791 [2024-10-01 20:32:46.888843] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:34:51.791 [2024-10-01 20:32:46.888849] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:34:51.791 [2024-10-01 20:32:46.888855] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:34:51.791 [2024-10-01 20:32:46.888861] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:34:51.791 [2024-10-01 20:32:46.888866] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:34:51.791 [2024-10-01 20:32:46.888872] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:34:51.791 [2024-10-01 20:32:46.888877] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:34:51.791 [2024-10-01 20:32:46.888883] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:34:51.791 [2024-10-01 20:32:46.888889] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:34:51.791 [2024-10-01 20:32:46.888895] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:34:51.791 [2024-10-01 20:32:46.888900] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:34:51.791 [2024-10-01 20:32:46.888906] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:34:51.791 [2024-10-01 20:32:46.888911] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:34:51.791 [2024-10-01 20:32:46.888917] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:34:51.791 [2024-10-01 20:32:46.888922] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:34:51.791 [2024-10-01 20:32:46.888928] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:34:51.791 [2024-10-01 20:32:46.888935] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:34:51.791 [2024-10-01 20:32:46.888941] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:34:51.791 [2024-10-01 20:32:46.888946] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:34:51.791 [2024-10-01 20:32:46.888952] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:34:51.791 [2024-10-01 20:32:46.888958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:51.791 [2024-10-01 20:32:46.888963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:34:51.791 [2024-10-01 20:32:46.888969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.501 ms 00:34:51.791 [2024-10-01 20:32:46.888976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:51.791 [2024-10-01 20:32:46.911404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:51.791 [2024-10-01 20:32:46.911439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:34:51.791 [2024-10-01 20:32:46.911448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.382 ms 00:34:51.791 [2024-10-01 20:32:46.911457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:51.791 [2024-10-01 20:32:46.911525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:51.791 [2024-10-01 20:32:46.911533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:34:51.791 [2024-10-01 20:32:46.911541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:34:51.791 [2024-10-01 20:32:46.911547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:51.791 [2024-10-01 20:32:46.937923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:51.791 [2024-10-01 20:32:46.937967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:34:51.791 [2024-10-01 20:32:46.937977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.331 ms 00:34:51.791 [2024-10-01 20:32:46.937983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:51.791 [2024-10-01 20:32:46.938019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:51.791 [2024-10-01 20:32:46.938027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:34:51.791 [2024-10-01 20:32:46.938034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:34:51.791 [2024-10-01 20:32:46.938040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:51.791 [2024-10-01 20:32:46.938441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:51.791 [2024-10-01 20:32:46.938462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:34:51.791 [2024-10-01 20:32:46.938474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.357 ms 00:34:51.791 [2024-10-01 20:32:46.938481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:51.791 [2024-10-01 20:32:46.938600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:51.791 [2024-10-01 20:32:46.938613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:34:51.791 [2024-10-01 20:32:46.938620] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.104 ms 00:34:51.791 [2024-10-01 20:32:46.938626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:51.791 [2024-10-01 20:32:46.949772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:51.791 [2024-10-01 20:32:46.949801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:34:51.791 [2024-10-01 20:32:46.949809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.129 ms 00:34:51.791 [2024-10-01 20:32:46.949816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:51.792 [2024-10-01 20:32:46.960010] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 3, empty chunks = 1 00:34:51.792 [2024-10-01 20:32:46.960041] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:34:51.792 [2024-10-01 20:32:46.960050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:51.792 [2024-10-01 20:32:46.960057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:34:51.792 [2024-10-01 20:32:46.960065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.152 ms 00:34:51.792 [2024-10-01 20:32:46.960071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:51.792 [2024-10-01 20:32:46.979126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:51.792 [2024-10-01 20:32:46.979160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:34:51.792 [2024-10-01 20:32:46.979169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.021 ms 00:34:51.792 [2024-10-01 20:32:46.979176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:51.792 [2024-10-01 20:32:46.988496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:51.792 [2024-10-01 20:32:46.988527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:34:51.792 [2024-10-01 20:32:46.988537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.281 ms 00:34:51.792 [2024-10-01 20:32:46.988542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:51.792 [2024-10-01 20:32:46.997271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:51.792 [2024-10-01 20:32:46.997302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:34:51.792 [2024-10-01 20:32:46.997310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.700 ms 00:34:51.792 [2024-10-01 20:32:46.997316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:51.792 [2024-10-01 20:32:46.997824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:51.792 [2024-10-01 20:32:46.997845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:34:51.792 [2024-10-01 20:32:46.997853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.448 ms 00:34:51.792 [2024-10-01 20:32:46.997859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:52.108 [2024-10-01 20:32:47.043495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:52.108 [2024-10-01 20:32:47.043547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:34:52.108 [2024-10-01 20:32:47.043559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 45.620 ms 00:34:52.108 [2024-10-01 20:32:47.043566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:52.108 [2024-10-01 20:32:47.051990] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:34:52.108 [2024-10-01 20:32:47.054758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:52.108 [2024-10-01 20:32:47.054787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:34:52.108 [2024-10-01 20:32:47.054800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.148 ms 00:34:52.108 [2024-10-01 20:32:47.054807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:52.108 [2024-10-01 20:32:47.054878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:52.108 [2024-10-01 20:32:47.054886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:34:52.108 [2024-10-01 20:32:47.054893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:34:52.108 [2024-10-01 20:32:47.054899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:52.108 [2024-10-01 20:32:47.054954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:52.108 [2024-10-01 20:32:47.054966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:34:52.108 [2024-10-01 20:32:47.054973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:34:52.108 [2024-10-01 20:32:47.054981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:52.108 [2024-10-01 20:32:47.054996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:52.108 [2024-10-01 20:32:47.055003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:34:52.108 [2024-10-01 20:32:47.055010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:34:52.108 [2024-10-01 20:32:47.055016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:52.108 [2024-10-01 20:32:47.055041] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:34:52.108 [2024-10-01 20:32:47.055048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:52.108 [2024-10-01 20:32:47.055056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:34:52.108 [2024-10-01 20:32:47.055063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:34:52.108 [2024-10-01 20:32:47.055068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:52.108 [2024-10-01 20:32:47.073559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:52.108 [2024-10-01 20:32:47.073593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:34:52.108 [2024-10-01 20:32:47.073603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.474 ms 00:34:52.108 [2024-10-01 20:32:47.073610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:52.108 [2024-10-01 20:32:47.073672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:52.108 [2024-10-01 20:32:47.073680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:34:52.108 [2024-10-01 20:32:47.073686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:34:52.108 [2024-10-01 20:32:47.073702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:34:52.108 [2024-10-01 20:32:47.074817] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 207.619 ms, result 0 00:35:16.393  Copying: 4812/1048576 [kB] (4812 kBps) Copying: 49/1024 [MB] (44 MBps) Copying: 93/1024 [MB] (44 MBps) Copying: 140/1024 [MB] (46 MBps) Copying: 184/1024 [MB] (44 MBps) Copying: 233/1024 [MB] (48 MBps) Copying: 278/1024 [MB] (45 MBps) Copying: 322/1024 [MB] (44 MBps) Copying: 365/1024 [MB] (42 MBps) Copying: 413/1024 [MB] (48 MBps) Copying: 456/1024 [MB] (42 MBps) Copying: 500/1024 [MB] (43 MBps) Copying: 545/1024 [MB] (44 MBps) Copying: 590/1024 [MB] (45 MBps) Copying: 632/1024 [MB] (41 MBps) Copying: 677/1024 [MB] (45 MBps) Copying: 722/1024 [MB] (44 MBps) Copying: 765/1024 [MB] (43 MBps) Copying: 810/1024 [MB] (44 MBps) Copying: 853/1024 [MB] (42 MBps) Copying: 897/1024 [MB] (43 MBps) Copying: 939/1024 [MB] (42 MBps) Copying: 982/1024 [MB] (42 MBps) Copying: 1024/1024 [MB] (average 42 MBps)[2024-10-01 20:33:11.385587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:16.393 [2024-10-01 20:33:11.385654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:35:16.393 [2024-10-01 20:33:11.385670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:35:16.393 [2024-10-01 20:33:11.385679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:16.393 [2024-10-01 20:33:11.385716] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:35:16.393 [2024-10-01 20:33:11.389476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:16.393 [2024-10-01 20:33:11.389520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:35:16.393 [2024-10-01 20:33:11.389540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.740 ms 00:35:16.393 [2024-10-01 20:33:11.389552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:16.393 [2024-10-01 20:33:11.389928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:16.393 [2024-10-01 20:33:11.389959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:35:16.393 [2024-10-01 20:33:11.389971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.343 ms 00:35:16.393 [2024-10-01 20:33:11.389982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:16.393 [2024-10-01 20:33:11.401422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:16.393 [2024-10-01 20:33:11.401459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:35:16.393 [2024-10-01 20:33:11.401471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.419 ms 00:35:16.393 [2024-10-01 20:33:11.401478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:16.393 [2024-10-01 20:33:11.408182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:16.393 [2024-10-01 20:33:11.408215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:35:16.393 [2024-10-01 20:33:11.408225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.675 ms 00:35:16.393 [2024-10-01 20:33:11.408234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:16.393 [2024-10-01 20:33:11.431713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:16.393 [2024-10-01 20:33:11.431748] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:35:16.393 [2024-10-01 20:33:11.431758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.426 ms 00:35:16.393 [2024-10-01 20:33:11.431766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:16.393 [2024-10-01 20:33:11.445590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:16.393 [2024-10-01 20:33:11.445636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:35:16.393 [2024-10-01 20:33:11.445649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.792 ms 00:35:16.393 [2024-10-01 20:33:11.445658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:16.393 [2024-10-01 20:33:11.499488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:16.393 [2024-10-01 20:33:11.499553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:35:16.393 [2024-10-01 20:33:11.499565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 53.804 ms 00:35:16.393 [2024-10-01 20:33:11.499573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:16.393 [2024-10-01 20:33:11.523308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:16.393 [2024-10-01 20:33:11.523349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:35:16.394 [2024-10-01 20:33:11.523361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.720 ms 00:35:16.394 [2024-10-01 20:33:11.523368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:16.394 [2024-10-01 20:33:11.545647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:16.394 [2024-10-01 20:33:11.545688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:35:16.394 [2024-10-01 20:33:11.545707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.245 ms 00:35:16.394 [2024-10-01 20:33:11.545715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:16.394 [2024-10-01 20:33:11.567218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:16.394 [2024-10-01 20:33:11.567256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:35:16.394 [2024-10-01 20:33:11.567267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.469 ms 00:35:16.394 [2024-10-01 20:33:11.567275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:16.394 [2024-10-01 20:33:11.589041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:16.394 [2024-10-01 20:33:11.589077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:35:16.394 [2024-10-01 20:33:11.589087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.712 ms 00:35:16.394 [2024-10-01 20:33:11.589095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:16.394 [2024-10-01 20:33:11.589125] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:35:16.394 [2024-10-01 20:33:11.589138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131584 / 261120 wr_cnt: 1 state: open 00:35:16.394 [2024-10-01 20:33:11.589149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:35:16.394 [2024-10-01 20:33:11.589158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 
00:35:16.394 [2024-10-01 20:33:11.589166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:35:16.394 [2024-10-01 20:33:11.589174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:35:16.394 [2024-10-01 20:33:11.589182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:35:16.394 [2024-10-01 20:33:11.589189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:35:16.394 [2024-10-01 20:33:11.589197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:35:16.394 [2024-10-01 20:33:11.589205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:35:16.394 [2024-10-01 20:33:11.589213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:35:16.394 [2024-10-01 20:33:11.589220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:35:16.394 [2024-10-01 20:33:11.589228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:35:16.394 [2024-10-01 20:33:11.589236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:35:16.394 [2024-10-01 20:33:11.589244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:35:16.394 [2024-10-01 20:33:11.589251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:35:16.394 [2024-10-01 20:33:11.589258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:35:16.394 [2024-10-01 20:33:11.589266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:35:16.394 [2024-10-01 20:33:11.589273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:35:16.394 [2024-10-01 20:33:11.589281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:35:16.394 [2024-10-01 20:33:11.589289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:35:16.394 [2024-10-01 20:33:11.589296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:35:16.394 [2024-10-01 20:33:11.589304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:35:16.394 [2024-10-01 20:33:11.589311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:35:16.394 [2024-10-01 20:33:11.589318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:35:16.394 [2024-10-01 20:33:11.589326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:35:16.394 [2024-10-01 20:33:11.589333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:35:16.394 [2024-10-01 20:33:11.589341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:35:16.394 [2024-10-01 20:33:11.589348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 
0 state: free 00:35:16.394 [2024-10-01 20:33:11.589356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:35:16.394 [2024-10-01 20:33:11.589363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:35:16.394 [2024-10-01 20:33:11.589370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:35:16.394 [2024-10-01 20:33:11.589378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:35:16.394 [2024-10-01 20:33:11.589385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:35:16.394 [2024-10-01 20:33:11.589392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:35:16.394 [2024-10-01 20:33:11.589399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:35:16.394 [2024-10-01 20:33:11.589407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:35:16.394 [2024-10-01 20:33:11.589414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:35:16.394 [2024-10-01 20:33:11.589422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:35:16.394 [2024-10-01 20:33:11.589429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:35:16.394 [2024-10-01 20:33:11.589436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:35:16.394 [2024-10-01 20:33:11.589444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:35:16.394 [2024-10-01 20:33:11.589451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:35:16.394 [2024-10-01 20:33:11.589458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:35:16.394 [2024-10-01 20:33:11.589465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:35:16.394 [2024-10-01 20:33:11.589472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:35:16.394 [2024-10-01 20:33:11.589479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:35:16.394 [2024-10-01 20:33:11.589487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:35:16.394 [2024-10-01 20:33:11.589494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:35:16.394 [2024-10-01 20:33:11.589501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:35:16.394 [2024-10-01 20:33:11.589509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:35:16.394 [2024-10-01 20:33:11.589516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:35:16.394 [2024-10-01 20:33:11.589524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:35:16.394 [2024-10-01 20:33:11.589531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 
53: 0 / 261120 wr_cnt: 0 state: free 00:35:16.394 [2024-10-01 20:33:11.589539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:35:16.394 [2024-10-01 20:33:11.589546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:35:16.394 [2024-10-01 20:33:11.589553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:35:16.394 [2024-10-01 20:33:11.589561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:35:16.394 [2024-10-01 20:33:11.589568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:35:16.394 [2024-10-01 20:33:11.589575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:35:16.394 [2024-10-01 20:33:11.589582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:35:16.394 [2024-10-01 20:33:11.589590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:35:16.394 [2024-10-01 20:33:11.589599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:35:16.394 [2024-10-01 20:33:11.589606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:35:16.394 [2024-10-01 20:33:11.589614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:35:16.394 [2024-10-01 20:33:11.589622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:35:16.394 [2024-10-01 20:33:11.589629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:35:16.394 [2024-10-01 20:33:11.589637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:35:16.394 [2024-10-01 20:33:11.589644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:35:16.394 [2024-10-01 20:33:11.589651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:35:16.394 [2024-10-01 20:33:11.589659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:35:16.394 [2024-10-01 20:33:11.589667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:35:16.394 [2024-10-01 20:33:11.589674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:35:16.394 [2024-10-01 20:33:11.589682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:35:16.394 [2024-10-01 20:33:11.589698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:35:16.394 [2024-10-01 20:33:11.589706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:35:16.394 [2024-10-01 20:33:11.589714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:35:16.394 [2024-10-01 20:33:11.589721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:35:16.394 [2024-10-01 20:33:11.589728] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:35:16.394 [2024-10-01 20:33:11.589736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:35:16.395 [2024-10-01 20:33:11.589744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:35:16.395 [2024-10-01 20:33:11.589751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:35:16.395 [2024-10-01 20:33:11.589759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:35:16.395 [2024-10-01 20:33:11.589767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:35:16.395 [2024-10-01 20:33:11.589775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:35:16.395 [2024-10-01 20:33:11.589782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:35:16.395 [2024-10-01 20:33:11.589790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:35:16.395 [2024-10-01 20:33:11.589797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:35:16.395 [2024-10-01 20:33:11.589805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:35:16.395 [2024-10-01 20:33:11.589813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:35:16.395 [2024-10-01 20:33:11.589820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:35:16.395 [2024-10-01 20:33:11.589827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:35:16.395 [2024-10-01 20:33:11.589834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:35:16.395 [2024-10-01 20:33:11.589842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:35:16.395 [2024-10-01 20:33:11.589849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:35:16.395 [2024-10-01 20:33:11.589857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:35:16.395 [2024-10-01 20:33:11.589864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:35:16.395 [2024-10-01 20:33:11.589871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:35:16.395 [2024-10-01 20:33:11.589878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:35:16.395 [2024-10-01 20:33:11.589886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:35:16.395 [2024-10-01 20:33:11.589893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:35:16.395 [2024-10-01 20:33:11.589909] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:35:16.395 [2024-10-01 20:33:11.589920] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 73992bbb-6107-4f6f-8507-ce356e0255e3 00:35:16.395 [2024-10-01 20:33:11.589928] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl0] total valid LBAs: 131584 00:35:16.395 [2024-10-01 20:33:11.589935] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 132544 00:35:16.395 [2024-10-01 20:33:11.589942] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 131584 00:35:16.395 [2024-10-01 20:33:11.589951] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0073 00:35:16.395 [2024-10-01 20:33:11.589958] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:35:16.395 [2024-10-01 20:33:11.589965] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:35:16.395 [2024-10-01 20:33:11.589973] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:35:16.395 [2024-10-01 20:33:11.589979] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:35:16.395 [2024-10-01 20:33:11.589986] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:35:16.395 [2024-10-01 20:33:11.589993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:16.395 [2024-10-01 20:33:11.590001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:35:16.395 [2024-10-01 20:33:11.590015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.869 ms 00:35:16.395 [2024-10-01 20:33:11.590028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:16.395 [2024-10-01 20:33:11.601938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:16.395 [2024-10-01 20:33:11.601972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:35:16.395 [2024-10-01 20:33:11.601982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.877 ms 00:35:16.395 [2024-10-01 20:33:11.601990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:16.395 [2024-10-01 20:33:11.602337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:16.395 [2024-10-01 20:33:11.602353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:35:16.395 [2024-10-01 20:33:11.602366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.330 ms 00:35:16.395 [2024-10-01 20:33:11.602374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:16.654 [2024-10-01 20:33:11.630784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:16.654 [2024-10-01 20:33:11.630823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:35:16.654 [2024-10-01 20:33:11.630834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:16.654 [2024-10-01 20:33:11.630841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:16.654 [2024-10-01 20:33:11.630902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:16.654 [2024-10-01 20:33:11.630910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:35:16.654 [2024-10-01 20:33:11.630919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:16.654 [2024-10-01 20:33:11.630927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:16.654 [2024-10-01 20:33:11.630981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:16.654 [2024-10-01 20:33:11.630991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:35:16.654 [2024-10-01 20:33:11.630998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
00:35:16.654 [2024-10-01 20:33:11.631005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:16.654 [2024-10-01 20:33:11.631020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:16.654 [2024-10-01 20:33:11.631027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:35:16.654 [2024-10-01 20:33:11.631034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:16.654 [2024-10-01 20:33:11.631044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:16.654 [2024-10-01 20:33:11.708002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:16.654 [2024-10-01 20:33:11.708056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:35:16.654 [2024-10-01 20:33:11.708067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:16.654 [2024-10-01 20:33:11.708075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:16.654 [2024-10-01 20:33:11.769911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:16.654 [2024-10-01 20:33:11.769963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:35:16.654 [2024-10-01 20:33:11.769979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:16.654 [2024-10-01 20:33:11.769986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:16.654 [2024-10-01 20:33:11.770052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:16.654 [2024-10-01 20:33:11.770062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:35:16.654 [2024-10-01 20:33:11.770070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:16.654 [2024-10-01 20:33:11.770077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:16.654 [2024-10-01 20:33:11.770110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:16.654 [2024-10-01 20:33:11.770118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:35:16.654 [2024-10-01 20:33:11.770125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:16.654 [2024-10-01 20:33:11.770132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:16.654 [2024-10-01 20:33:11.770221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:16.654 [2024-10-01 20:33:11.770231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:35:16.654 [2024-10-01 20:33:11.770239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:16.654 [2024-10-01 20:33:11.770246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:16.654 [2024-10-01 20:33:11.770273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:16.654 [2024-10-01 20:33:11.770281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:35:16.654 [2024-10-01 20:33:11.770289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:16.654 [2024-10-01 20:33:11.770295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:16.654 [2024-10-01 20:33:11.770327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:16.654 [2024-10-01 20:33:11.770335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:35:16.654 [2024-10-01 20:33:11.770343] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:16.654 [2024-10-01 20:33:11.770350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:16.654 [2024-10-01 20:33:11.770387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:16.654 [2024-10-01 20:33:11.770396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:35:16.654 [2024-10-01 20:33:11.770403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:16.654 [2024-10-01 20:33:11.770410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:16.654 [2024-10-01 20:33:11.770533] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 384.909 ms, result 0 00:35:18.084 00:35:18.084 00:35:18.084 20:33:12 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:35:19.982 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:35:19.982 20:33:15 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:35:19.982 20:33:15 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill 00:35:19.982 20:33:15 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:35:20.241 20:33:15 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:35:20.241 20:33:15 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:35:20.241 Process with pid 75184 is not found 00:35:20.241 Remove shared memory files 00:35:20.241 20:33:15 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 75184 00:35:20.241 20:33:15 ftl.ftl_restore -- common/autotest_common.sh@950 -- # '[' -z 75184 ']' 00:35:20.241 20:33:15 ftl.ftl_restore -- common/autotest_common.sh@954 -- # kill -0 75184 00:35:20.241 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (75184) - No such process 00:35:20.241 20:33:15 ftl.ftl_restore -- common/autotest_common.sh@977 -- # echo 'Process with pid 75184 is not found' 00:35:20.241 20:33:15 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm 00:35:20.241 20:33:15 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files 00:35:20.241 20:33:15 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f 00:35:20.241 20:33:15 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f 00:35:20.241 20:33:15 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 00:35:20.241 20:33:15 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:35:20.241 20:33:15 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f 00:35:20.241 ************************************ 00:35:20.241 END TEST ftl_restore 00:35:20.241 ************************************ 00:35:20.241 00:35:20.241 real 2m16.003s 00:35:20.241 user 2m5.983s 00:35:20.241 sys 0m12.085s 00:35:20.241 20:33:15 ftl.ftl_restore -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:20.241 20:33:15 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:35:20.241 20:33:15 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:35:20.241 20:33:15 ftl -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:35:20.241 20:33:15 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:20.241 20:33:15 ftl -- common/autotest_common.sh@10 -- # set +x 00:35:20.241 ************************************ 
00:35:20.241 START TEST ftl_dirty_shutdown 00:35:20.241 ************************************ 00:35:20.241 20:33:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:35:20.241 * Looking for test storage... 00:35:20.241 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:35:20.241 20:33:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:35:20.241 20:33:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1681 -- # lcov --version 00:35:20.241 20:33:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:35:20.241 20:33:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:35:20.241 20:33:15 ftl.ftl_dirty_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:20.241 20:33:15 ftl.ftl_dirty_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:20.241 20:33:15 ftl.ftl_dirty_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:20.241 20:33:15 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:35:20.241 20:33:15 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:35:20.241 20:33:15 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:35:20.241 20:33:15 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:35:20.241 20:33:15 ftl.ftl_dirty_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:35:20.241 20:33:15 ftl.ftl_dirty_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:35:20.241 20:33:15 ftl.ftl_dirty_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:35:20.241 20:33:15 ftl.ftl_dirty_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:20.241 20:33:15 ftl.ftl_dirty_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:35:20.241 20:33:15 ftl.ftl_dirty_shutdown -- scripts/common.sh@345 -- # : 1 00:35:20.241 20:33:15 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:20.241 20:33:15 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:20.241 20:33:15 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # decimal 1 00:35:20.241 20:33:15 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=1 00:35:20.241 20:33:15 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:20.241 20:33:15 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 1 00:35:20.241 20:33:15 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:35:20.241 20:33:15 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # decimal 2 00:35:20.241 20:33:15 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=2 00:35:20.241 20:33:15 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:20.241 20:33:15 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 2 00:35:20.241 20:33:15 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:35:20.241 20:33:15 ftl.ftl_dirty_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:20.241 20:33:15 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:20.241 20:33:15 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # return 0 00:35:20.241 20:33:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:20.241 20:33:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:35:20.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:20.241 --rc genhtml_branch_coverage=1 00:35:20.241 --rc genhtml_function_coverage=1 00:35:20.241 --rc genhtml_legend=1 00:35:20.241 --rc geninfo_all_blocks=1 00:35:20.241 --rc geninfo_unexecuted_blocks=1 00:35:20.241 00:35:20.241 ' 00:35:20.241 20:33:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:35:20.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:20.241 --rc genhtml_branch_coverage=1 00:35:20.241 --rc genhtml_function_coverage=1 00:35:20.241 --rc genhtml_legend=1 00:35:20.241 --rc geninfo_all_blocks=1 00:35:20.241 --rc geninfo_unexecuted_blocks=1 00:35:20.241 00:35:20.241 ' 00:35:20.241 20:33:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:35:20.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:20.241 --rc genhtml_branch_coverage=1 00:35:20.242 --rc genhtml_function_coverage=1 00:35:20.242 --rc genhtml_legend=1 00:35:20.242 --rc geninfo_all_blocks=1 00:35:20.242 --rc geninfo_unexecuted_blocks=1 00:35:20.242 00:35:20.242 ' 00:35:20.242 20:33:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:35:20.242 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:20.242 --rc genhtml_branch_coverage=1 00:35:20.242 --rc genhtml_function_coverage=1 00:35:20.242 --rc genhtml_legend=1 00:35:20.242 --rc geninfo_all_blocks=1 00:35:20.242 --rc geninfo_unexecuted_blocks=1 00:35:20.242 00:35:20.242 ' 00:35:20.242 20:33:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:35:20.242 20:33:15 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:35:20.242 20:33:15 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:35:20.242 20:33:15 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:35:20.242 20:33:15 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f 
/home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:35:20.242 20:33:15 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:35:20.242 20:33:15 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:35:20.242 20:33:15 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:35:20.242 20:33:15 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:35:20.242 20:33:15 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:35:20.242 20:33:15 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:35:20.242 20:33:15 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:35:20.242 20:33:15 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:35:20.242 20:33:15 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:35:20.242 20:33:15 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:35:20.242 20:33:15 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:35:20.242 20:33:15 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:35:20.242 20:33:15 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:35:20.242 20:33:15 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:35:20.242 20:33:15 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:35:20.242 20:33:15 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:35:20.242 20:33:15 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:35:20.242 20:33:15 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:35:20.242 20:33:15 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:35:20.242 20:33:15 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:35:20.242 20:33:15 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:35:20.242 20:33:15 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:35:20.242 20:33:15 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:35:20.242 20:33:15 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:35:20.242 20:33:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:35:20.242 20:33:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:35:20.242 20:33:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:35:20.242 20:33:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:35:20.242 20:33:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:35:20.242 20:33:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:35:20.242 20:33:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:35:20.242 20:33:15 
ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:35:20.242 20:33:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:35:20.242 20:33:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:35:20.242 20:33:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:35:20.242 20:33:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:35:20.242 20:33:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:35:20.242 20:33:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=76641 00:35:20.242 20:33:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 76641 00:35:20.242 20:33:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@831 -- # '[' -z 76641 ']' 00:35:20.242 20:33:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:20.242 20:33:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:20.242 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:20.242 20:33:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:20.242 20:33:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:20.242 20:33:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:35:20.242 20:33:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:35:20.501 [2024-10-01 20:33:15.525930] Starting SPDK v25.01-pre git sha1 0c2005fb5 / DPDK 24.03.0 initialization... 
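The sequence above stores the daemon's pid (svcpid=76641) and then blocks in waitforlisten until the target's RPC socket answers. A minimal sketch of that launch-and-wait pattern, using the binary, core mask, and socket path shown in this log; the polling loop here is a simplified stand-in, since the real waitforlisten in test/common/autotest_common.sh adds a retry limit and extra liveness checks:

    # start the SPDK target on core 0 and remember its pid
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &
    svcpid=$!
    # poll the RPC socket until the target answers; bail out if it died first
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$svcpid" 2>/dev/null || { echo "spdk_tgt exited during startup" >&2; exit 1; }
        sleep 0.5
    done
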
00:35:20.501 [2024-10-01 20:33:15.526396] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76641 ] 00:35:20.501 [2024-10-01 20:33:15.668080] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:20.759 [2024-10-01 20:33:15.859869] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:35:21.692 20:33:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:21.692 20:33:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@864 -- # return 0 00:35:21.692 20:33:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:35:21.692 20:33:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:35:21.692 20:33:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:35:21.692 20:33:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:35:21.692 20:33:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:35:21.692 20:33:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:35:21.950 20:33:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:35:21.950 20:33:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:35:21.950 20:33:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:35:21.950 20:33:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:35:21.950 20:33:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:35:21.950 20:33:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:35:21.950 20:33:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:35:21.950 20:33:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:35:21.950 20:33:17 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:35:21.950 { 00:35:21.950 "name": "nvme0n1", 00:35:21.950 "aliases": [ 00:35:21.950 "27a319db-ecbe-4ef6-acab-c8d2ab4a4519" 00:35:21.950 ], 00:35:21.950 "product_name": "NVMe disk", 00:35:21.950 "block_size": 4096, 00:35:21.950 "num_blocks": 1310720, 00:35:21.950 "uuid": "27a319db-ecbe-4ef6-acab-c8d2ab4a4519", 00:35:21.950 "numa_id": -1, 00:35:21.950 "assigned_rate_limits": { 00:35:21.950 "rw_ios_per_sec": 0, 00:35:21.950 "rw_mbytes_per_sec": 0, 00:35:21.950 "r_mbytes_per_sec": 0, 00:35:21.950 "w_mbytes_per_sec": 0 00:35:21.950 }, 00:35:21.950 "claimed": true, 00:35:21.950 "claim_type": "read_many_write_one", 00:35:21.950 "zoned": false, 00:35:21.950 "supported_io_types": { 00:35:21.950 "read": true, 00:35:21.950 "write": true, 00:35:21.950 "unmap": true, 00:35:21.950 "flush": true, 00:35:21.950 "reset": true, 00:35:21.950 "nvme_admin": true, 00:35:21.950 "nvme_io": true, 00:35:21.950 "nvme_io_md": false, 00:35:21.950 "write_zeroes": true, 00:35:21.950 "zcopy": false, 00:35:21.950 "get_zone_info": false, 00:35:21.950 "zone_management": false, 00:35:21.950 "zone_append": false, 00:35:21.950 "compare": true, 00:35:21.950 "compare_and_write": false, 00:35:21.950 "abort": true, 00:35:21.950 "seek_hole": false, 00:35:21.950 "seek_data": false, 00:35:21.950 
"copy": true, 00:35:21.950 "nvme_iov_md": false 00:35:21.950 }, 00:35:21.950 "driver_specific": { 00:35:21.950 "nvme": [ 00:35:21.950 { 00:35:21.950 "pci_address": "0000:00:11.0", 00:35:21.950 "trid": { 00:35:21.950 "trtype": "PCIe", 00:35:21.950 "traddr": "0000:00:11.0" 00:35:21.950 }, 00:35:21.950 "ctrlr_data": { 00:35:21.950 "cntlid": 0, 00:35:21.950 "vendor_id": "0x1b36", 00:35:21.950 "model_number": "QEMU NVMe Ctrl", 00:35:21.950 "serial_number": "12341", 00:35:21.950 "firmware_revision": "8.0.0", 00:35:21.950 "subnqn": "nqn.2019-08.org.qemu:12341", 00:35:21.950 "oacs": { 00:35:21.950 "security": 0, 00:35:21.950 "format": 1, 00:35:21.950 "firmware": 0, 00:35:21.950 "ns_manage": 1 00:35:21.950 }, 00:35:21.950 "multi_ctrlr": false, 00:35:21.950 "ana_reporting": false 00:35:21.950 }, 00:35:21.950 "vs": { 00:35:21.950 "nvme_version": "1.4" 00:35:21.950 }, 00:35:21.950 "ns_data": { 00:35:21.951 "id": 1, 00:35:21.951 "can_share": false 00:35:21.951 } 00:35:21.951 } 00:35:21.951 ], 00:35:21.951 "mp_policy": "active_passive" 00:35:21.951 } 00:35:21.951 } 00:35:21.951 ]' 00:35:21.951 20:33:17 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:35:22.208 20:33:17 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:35:22.208 20:33:17 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:35:22.208 20:33:17 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=1310720 00:35:22.208 20:33:17 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:35:22.208 20:33:17 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 5120 00:35:22.208 20:33:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:35:22.208 20:33:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:35:22.208 20:33:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:35:22.208 20:33:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:35:22.208 20:33:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:35:22.208 20:33:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=ac5cf75c-ce54-470b-b047-701a850752e2 00:35:22.208 20:33:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:35:22.208 20:33:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u ac5cf75c-ce54-470b-b047-701a850752e2 00:35:22.466 20:33:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:35:22.724 20:33:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=51cd809f-b657-49b6-9753-70061678c261 00:35:22.724 20:33:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 51cd809f-b657-49b6-9753-70061678c261 00:35:22.981 20:33:18 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=fe1c148c-a22e-47cb-a207-65f9314a9ef5 00:35:22.981 20:33:18 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:35:22.981 20:33:18 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 fe1c148c-a22e-47cb-a207-65f9314a9ef5 00:35:22.981 20:33:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:35:22.981 20:33:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local 
cache_bdf=0000:00:10.0 00:35:22.981 20:33:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=fe1c148c-a22e-47cb-a207-65f9314a9ef5 00:35:22.981 20:33:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:35:22.982 20:33:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size fe1c148c-a22e-47cb-a207-65f9314a9ef5 00:35:22.982 20:33:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=fe1c148c-a22e-47cb-a207-65f9314a9ef5 00:35:22.982 20:33:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:35:22.982 20:33:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:35:22.982 20:33:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:35:22.982 20:33:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b fe1c148c-a22e-47cb-a207-65f9314a9ef5 00:35:23.240 20:33:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:35:23.240 { 00:35:23.240 "name": "fe1c148c-a22e-47cb-a207-65f9314a9ef5", 00:35:23.240 "aliases": [ 00:35:23.240 "lvs/nvme0n1p0" 00:35:23.240 ], 00:35:23.240 "product_name": "Logical Volume", 00:35:23.240 "block_size": 4096, 00:35:23.240 "num_blocks": 26476544, 00:35:23.240 "uuid": "fe1c148c-a22e-47cb-a207-65f9314a9ef5", 00:35:23.240 "assigned_rate_limits": { 00:35:23.240 "rw_ios_per_sec": 0, 00:35:23.240 "rw_mbytes_per_sec": 0, 00:35:23.240 "r_mbytes_per_sec": 0, 00:35:23.240 "w_mbytes_per_sec": 0 00:35:23.240 }, 00:35:23.240 "claimed": false, 00:35:23.240 "zoned": false, 00:35:23.240 "supported_io_types": { 00:35:23.240 "read": true, 00:35:23.240 "write": true, 00:35:23.240 "unmap": true, 00:35:23.240 "flush": false, 00:35:23.240 "reset": true, 00:35:23.240 "nvme_admin": false, 00:35:23.240 "nvme_io": false, 00:35:23.240 "nvme_io_md": false, 00:35:23.240 "write_zeroes": true, 00:35:23.240 "zcopy": false, 00:35:23.240 "get_zone_info": false, 00:35:23.240 "zone_management": false, 00:35:23.240 "zone_append": false, 00:35:23.240 "compare": false, 00:35:23.240 "compare_and_write": false, 00:35:23.240 "abort": false, 00:35:23.240 "seek_hole": true, 00:35:23.240 "seek_data": true, 00:35:23.240 "copy": false, 00:35:23.240 "nvme_iov_md": false 00:35:23.240 }, 00:35:23.240 "driver_specific": { 00:35:23.240 "lvol": { 00:35:23.240 "lvol_store_uuid": "51cd809f-b657-49b6-9753-70061678c261", 00:35:23.240 "base_bdev": "nvme0n1", 00:35:23.240 "thin_provision": true, 00:35:23.240 "num_allocated_clusters": 0, 00:35:23.240 "snapshot": false, 00:35:23.240 "clone": false, 00:35:23.240 "esnap_clone": false 00:35:23.240 } 00:35:23.240 } 00:35:23.240 } 00:35:23.240 ]' 00:35:23.240 20:33:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:35:23.240 20:33:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:35:23.240 20:33:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:35:23.240 20:33:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=26476544 00:35:23.240 20:33:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:35:23.240 20:33:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 103424 00:35:23.240 20:33:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:35:23.240 20:33:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:35:23.240 20:33:18 ftl.ftl_dirty_shutdown -- 
ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:35:23.498 20:33:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:35:23.498 20:33:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:35:23.498 20:33:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size fe1c148c-a22e-47cb-a207-65f9314a9ef5 00:35:23.498 20:33:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=fe1c148c-a22e-47cb-a207-65f9314a9ef5 00:35:23.498 20:33:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:35:23.498 20:33:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:35:23.498 20:33:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:35:23.498 20:33:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b fe1c148c-a22e-47cb-a207-65f9314a9ef5 00:35:23.757 20:33:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:35:23.757 { 00:35:23.757 "name": "fe1c148c-a22e-47cb-a207-65f9314a9ef5", 00:35:23.757 "aliases": [ 00:35:23.757 "lvs/nvme0n1p0" 00:35:23.757 ], 00:35:23.757 "product_name": "Logical Volume", 00:35:23.757 "block_size": 4096, 00:35:23.757 "num_blocks": 26476544, 00:35:23.757 "uuid": "fe1c148c-a22e-47cb-a207-65f9314a9ef5", 00:35:23.757 "assigned_rate_limits": { 00:35:23.757 "rw_ios_per_sec": 0, 00:35:23.757 "rw_mbytes_per_sec": 0, 00:35:23.757 "r_mbytes_per_sec": 0, 00:35:23.757 "w_mbytes_per_sec": 0 00:35:23.757 }, 00:35:23.757 "claimed": false, 00:35:23.757 "zoned": false, 00:35:23.757 "supported_io_types": { 00:35:23.757 "read": true, 00:35:23.757 "write": true, 00:35:23.757 "unmap": true, 00:35:23.757 "flush": false, 00:35:23.757 "reset": true, 00:35:23.757 "nvme_admin": false, 00:35:23.757 "nvme_io": false, 00:35:23.757 "nvme_io_md": false, 00:35:23.757 "write_zeroes": true, 00:35:23.757 "zcopy": false, 00:35:23.757 "get_zone_info": false, 00:35:23.757 "zone_management": false, 00:35:23.757 "zone_append": false, 00:35:23.757 "compare": false, 00:35:23.757 "compare_and_write": false, 00:35:23.757 "abort": false, 00:35:23.757 "seek_hole": true, 00:35:23.757 "seek_data": true, 00:35:23.757 "copy": false, 00:35:23.757 "nvme_iov_md": false 00:35:23.757 }, 00:35:23.757 "driver_specific": { 00:35:23.757 "lvol": { 00:35:23.757 "lvol_store_uuid": "51cd809f-b657-49b6-9753-70061678c261", 00:35:23.757 "base_bdev": "nvme0n1", 00:35:23.757 "thin_provision": true, 00:35:23.757 "num_allocated_clusters": 0, 00:35:23.757 "snapshot": false, 00:35:23.757 "clone": false, 00:35:23.757 "esnap_clone": false 00:35:23.757 } 00:35:23.757 } 00:35:23.757 } 00:35:23.757 ]' 00:35:23.757 20:33:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:35:23.757 20:33:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:35:23.757 20:33:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:35:23.757 20:33:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=26476544 00:35:23.757 20:33:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:35:23.757 20:33:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 103424 00:35:23.757 20:33:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:35:23.757 20:33:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:35:24.014 20:33:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:35:24.014 20:33:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size fe1c148c-a22e-47cb-a207-65f9314a9ef5 00:35:24.014 20:33:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=fe1c148c-a22e-47cb-a207-65f9314a9ef5 00:35:24.014 20:33:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:35:24.014 20:33:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:35:24.014 20:33:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:35:24.014 20:33:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b fe1c148c-a22e-47cb-a207-65f9314a9ef5 00:35:24.272 20:33:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:35:24.272 { 00:35:24.272 "name": "fe1c148c-a22e-47cb-a207-65f9314a9ef5", 00:35:24.272 "aliases": [ 00:35:24.272 "lvs/nvme0n1p0" 00:35:24.272 ], 00:35:24.272 "product_name": "Logical Volume", 00:35:24.272 "block_size": 4096, 00:35:24.272 "num_blocks": 26476544, 00:35:24.272 "uuid": "fe1c148c-a22e-47cb-a207-65f9314a9ef5", 00:35:24.272 "assigned_rate_limits": { 00:35:24.272 "rw_ios_per_sec": 0, 00:35:24.272 "rw_mbytes_per_sec": 0, 00:35:24.272 "r_mbytes_per_sec": 0, 00:35:24.272 "w_mbytes_per_sec": 0 00:35:24.272 }, 00:35:24.272 "claimed": false, 00:35:24.272 "zoned": false, 00:35:24.272 "supported_io_types": { 00:35:24.272 "read": true, 00:35:24.272 "write": true, 00:35:24.272 "unmap": true, 00:35:24.272 "flush": false, 00:35:24.272 "reset": true, 00:35:24.272 "nvme_admin": false, 00:35:24.272 "nvme_io": false, 00:35:24.272 "nvme_io_md": false, 00:35:24.272 "write_zeroes": true, 00:35:24.272 "zcopy": false, 00:35:24.272 "get_zone_info": false, 00:35:24.272 "zone_management": false, 00:35:24.272 "zone_append": false, 00:35:24.272 "compare": false, 00:35:24.272 "compare_and_write": false, 00:35:24.272 "abort": false, 00:35:24.272 "seek_hole": true, 00:35:24.272 "seek_data": true, 00:35:24.272 "copy": false, 00:35:24.272 "nvme_iov_md": false 00:35:24.272 }, 00:35:24.272 "driver_specific": { 00:35:24.272 "lvol": { 00:35:24.272 "lvol_store_uuid": "51cd809f-b657-49b6-9753-70061678c261", 00:35:24.272 "base_bdev": "nvme0n1", 00:35:24.272 "thin_provision": true, 00:35:24.272 "num_allocated_clusters": 0, 00:35:24.272 "snapshot": false, 00:35:24.272 "clone": false, 00:35:24.272 "esnap_clone": false 00:35:24.272 } 00:35:24.272 } 00:35:24.272 } 00:35:24.272 ]' 00:35:24.272 20:33:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:35:24.272 20:33:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:35:24.272 20:33:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:35:24.272 20:33:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=26476544 00:35:24.272 20:33:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:35:24.272 20:33:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 103424 00:35:24.272 20:33:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:35:24.272 20:33:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d fe1c148c-a22e-47cb-a207-65f9314a9ef5 
--l2p_dram_limit 10' 00:35:24.272 20:33:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:35:24.272 20:33:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:35:24.272 20:33:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:35:24.272 20:33:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d fe1c148c-a22e-47cb-a207-65f9314a9ef5 --l2p_dram_limit 10 -c nvc0n1p0 00:35:24.531 [2024-10-01 20:33:19.611754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:24.531 [2024-10-01 20:33:19.611805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:35:24.531 [2024-10-01 20:33:19.611818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:35:24.531 [2024-10-01 20:33:19.611825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:24.531 [2024-10-01 20:33:19.611872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:24.531 [2024-10-01 20:33:19.611881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:35:24.531 [2024-10-01 20:33:19.611888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:35:24.531 [2024-10-01 20:33:19.611894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:24.531 [2024-10-01 20:33:19.611918] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:35:24.531 [2024-10-01 20:33:19.612546] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:35:24.531 [2024-10-01 20:33:19.612571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:24.531 [2024-10-01 20:33:19.612579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:35:24.531 [2024-10-01 20:33:19.612587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.658 ms 00:35:24.531 [2024-10-01 20:33:19.612593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:24.531 [2024-10-01 20:33:19.612720] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 2cbc39b4-488a-417a-8a44-dbbd61f528fe 00:35:24.531 [2024-10-01 20:33:19.613772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:24.531 [2024-10-01 20:33:19.613798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:35:24.531 [2024-10-01 20:33:19.613806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:35:24.531 [2024-10-01 20:33:19.613815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:24.531 [2024-10-01 20:33:19.618657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:24.531 [2024-10-01 20:33:19.618688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:35:24.531 [2024-10-01 20:33:19.618708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.810 ms 00:35:24.531 [2024-10-01 20:33:19.618717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:24.531 [2024-10-01 20:33:19.618789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:24.531 [2024-10-01 20:33:19.618798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:35:24.531 [2024-10-01 20:33:19.618806] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:35:24.531 [2024-10-01 20:33:19.618816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:24.531 [2024-10-01 20:33:19.618857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:24.531 [2024-10-01 20:33:19.618866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:35:24.531 [2024-10-01 20:33:19.618872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:35:24.531 [2024-10-01 20:33:19.618879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:24.531 [2024-10-01 20:33:19.618897] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:35:24.531 [2024-10-01 20:33:19.621893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:24.531 [2024-10-01 20:33:19.621920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:35:24.531 [2024-10-01 20:33:19.621931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.999 ms 00:35:24.531 [2024-10-01 20:33:19.621937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:24.531 [2024-10-01 20:33:19.621966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:24.531 [2024-10-01 20:33:19.621972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:35:24.531 [2024-10-01 20:33:19.621982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:35:24.531 [2024-10-01 20:33:19.621987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:24.531 [2024-10-01 20:33:19.622009] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:35:24.531 [2024-10-01 20:33:19.622114] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:35:24.531 [2024-10-01 20:33:19.622127] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:35:24.531 [2024-10-01 20:33:19.622138] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:35:24.531 [2024-10-01 20:33:19.622148] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:35:24.531 [2024-10-01 20:33:19.622155] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:35:24.531 [2024-10-01 20:33:19.622163] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:35:24.531 [2024-10-01 20:33:19.622169] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:35:24.531 [2024-10-01 20:33:19.622176] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:35:24.531 [2024-10-01 20:33:19.622181] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:35:24.531 [2024-10-01 20:33:19.622189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:24.531 [2024-10-01 20:33:19.622200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:35:24.531 [2024-10-01 20:33:19.622207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.181 ms 00:35:24.531 [2024-10-01 20:33:19.622216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:24.531 [2024-10-01 20:33:19.622282] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:24.531 [2024-10-01 20:33:19.622289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:35:24.531 [2024-10-01 20:33:19.622296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:35:24.531 [2024-10-01 20:33:19.622301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:24.531 [2024-10-01 20:33:19.622378] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:35:24.531 [2024-10-01 20:33:19.622391] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:35:24.531 [2024-10-01 20:33:19.622399] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:35:24.531 [2024-10-01 20:33:19.622405] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:35:24.531 [2024-10-01 20:33:19.622412] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:35:24.531 [2024-10-01 20:33:19.622418] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:35:24.531 [2024-10-01 20:33:19.622425] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:35:24.531 [2024-10-01 20:33:19.622430] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:35:24.531 [2024-10-01 20:33:19.622437] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:35:24.531 [2024-10-01 20:33:19.622442] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:35:24.531 [2024-10-01 20:33:19.622449] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:35:24.531 [2024-10-01 20:33:19.622454] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:35:24.531 [2024-10-01 20:33:19.622460] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:35:24.531 [2024-10-01 20:33:19.622467] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:35:24.531 [2024-10-01 20:33:19.622474] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:35:24.531 [2024-10-01 20:33:19.622479] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:35:24.531 [2024-10-01 20:33:19.622504] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:35:24.531 [2024-10-01 20:33:19.622510] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:35:24.531 [2024-10-01 20:33:19.622516] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:35:24.532 [2024-10-01 20:33:19.622521] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:35:24.532 [2024-10-01 20:33:19.622529] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:35:24.532 [2024-10-01 20:33:19.622535] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:35:24.532 [2024-10-01 20:33:19.622542] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:35:24.532 [2024-10-01 20:33:19.622547] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:35:24.532 [2024-10-01 20:33:19.622553] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:35:24.532 [2024-10-01 20:33:19.622558] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:35:24.532 [2024-10-01 20:33:19.622564] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:35:24.532 [2024-10-01 20:33:19.622569] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:35:24.532 [2024-10-01 20:33:19.622575] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:35:24.532 [2024-10-01 20:33:19.622580] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:35:24.532 [2024-10-01 20:33:19.622587] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:35:24.532 [2024-10-01 20:33:19.622592] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:35:24.532 [2024-10-01 20:33:19.622600] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:35:24.532 [2024-10-01 20:33:19.622605] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:35:24.532 [2024-10-01 20:33:19.622611] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:35:24.532 [2024-10-01 20:33:19.622616] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:35:24.532 [2024-10-01 20:33:19.622623] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:35:24.532 [2024-10-01 20:33:19.622628] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:35:24.532 [2024-10-01 20:33:19.622634] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:35:24.532 [2024-10-01 20:33:19.622639] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:35:24.532 [2024-10-01 20:33:19.622646] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:35:24.532 [2024-10-01 20:33:19.622650] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:35:24.532 [2024-10-01 20:33:19.622657] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:35:24.532 [2024-10-01 20:33:19.622661] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:35:24.532 [2024-10-01 20:33:19.622671] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:35:24.532 [2024-10-01 20:33:19.622677] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:35:24.532 [2024-10-01 20:33:19.622685] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:35:24.532 [2024-10-01 20:33:19.622704] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:35:24.532 [2024-10-01 20:33:19.622713] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:35:24.532 [2024-10-01 20:33:19.622718] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:35:24.532 [2024-10-01 20:33:19.622725] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:35:24.532 [2024-10-01 20:33:19.622730] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:35:24.532 [2024-10-01 20:33:19.622737] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:35:24.532 [2024-10-01 20:33:19.622746] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:35:24.532 [2024-10-01 20:33:19.622754] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:35:24.532 [2024-10-01 20:33:19.622761] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:35:24.532 [2024-10-01 20:33:19.622769] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:35:24.532 [2024-10-01 20:33:19.622774] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:35:24.532 [2024-10-01 20:33:19.622781] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:35:24.532 [2024-10-01 20:33:19.622787] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:35:24.532 [2024-10-01 20:33:19.622793] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:35:24.532 [2024-10-01 20:33:19.622799] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:35:24.532 [2024-10-01 20:33:19.622806] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:35:24.532 [2024-10-01 20:33:19.622811] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:35:24.532 [2024-10-01 20:33:19.622819] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:35:24.532 [2024-10-01 20:33:19.622825] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:35:24.532 [2024-10-01 20:33:19.622832] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:35:24.532 [2024-10-01 20:33:19.622837] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:35:24.532 [2024-10-01 20:33:19.622844] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:35:24.532 [2024-10-01 20:33:19.622850] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:35:24.532 [2024-10-01 20:33:19.622858] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:35:24.532 [2024-10-01 20:33:19.622864] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:35:24.532 [2024-10-01 20:33:19.622871] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:35:24.532 [2024-10-01 20:33:19.622876] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:35:24.532 [2024-10-01 20:33:19.622883] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:35:24.532 [2024-10-01 20:33:19.622889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:24.532 [2024-10-01 20:33:19.622898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:35:24.532 [2024-10-01 20:33:19.622905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.566 ms 00:35:24.532 [2024-10-01 20:33:19.622914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:24.532 [2024-10-01 20:33:19.622960] mngt/ftl_mngt_misc.c: 
165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:35:24.532 [2024-10-01 20:33:19.622971] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:35:27.058 [2024-10-01 20:33:21.814448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:27.058 [2024-10-01 20:33:21.814514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:35:27.058 [2024-10-01 20:33:21.814529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2191.483 ms 00:35:27.058 [2024-10-01 20:33:21.814539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:27.058 [2024-10-01 20:33:21.840091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:27.058 [2024-10-01 20:33:21.840142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:35:27.058 [2024-10-01 20:33:21.840158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.332 ms 00:35:27.058 [2024-10-01 20:33:21.840168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:27.058 [2024-10-01 20:33:21.840295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:27.058 [2024-10-01 20:33:21.840309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:35:27.058 [2024-10-01 20:33:21.840321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:35:27.058 [2024-10-01 20:33:21.840335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:27.058 [2024-10-01 20:33:21.870860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:27.058 [2024-10-01 20:33:21.870906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:35:27.058 [2024-10-01 20:33:21.870917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.489 ms 00:35:27.058 [2024-10-01 20:33:21.870928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:27.058 [2024-10-01 20:33:21.870960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:27.058 [2024-10-01 20:33:21.870970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:35:27.058 [2024-10-01 20:33:21.870978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:35:27.058 [2024-10-01 20:33:21.870992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:27.058 [2024-10-01 20:33:21.871421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:27.058 [2024-10-01 20:33:21.871451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:35:27.058 [2024-10-01 20:33:21.871460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.377 ms 00:35:27.058 [2024-10-01 20:33:21.871469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:27.058 [2024-10-01 20:33:21.871581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:27.058 [2024-10-01 20:33:21.871592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:35:27.058 [2024-10-01 20:33:21.871600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.092 ms 00:35:27.058 [2024-10-01 20:33:21.871611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:27.058 [2024-10-01 20:33:21.885453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:27.058 [2024-10-01 20:33:21.885489] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:35:27.058 [2024-10-01 20:33:21.885498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.824 ms 00:35:27.058 [2024-10-01 20:33:21.885507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:27.058 [2024-10-01 20:33:21.896987] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:35:27.058 [2024-10-01 20:33:21.900012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:27.058 [2024-10-01 20:33:21.900044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:35:27.058 [2024-10-01 20:33:21.900057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.431 ms 00:35:27.058 [2024-10-01 20:33:21.900066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:27.058 [2024-10-01 20:33:21.953485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:27.058 [2024-10-01 20:33:21.953552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:35:27.058 [2024-10-01 20:33:21.953566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 53.388 ms 00:35:27.058 [2024-10-01 20:33:21.953575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:27.058 [2024-10-01 20:33:21.953768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:27.058 [2024-10-01 20:33:21.953779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:35:27.058 [2024-10-01 20:33:21.953792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.152 ms 00:35:27.058 [2024-10-01 20:33:21.953800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:27.059 [2024-10-01 20:33:21.976892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:27.059 [2024-10-01 20:33:21.976933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:35:27.059 [2024-10-01 20:33:21.976946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.046 ms 00:35:27.059 [2024-10-01 20:33:21.976955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:27.059 [2024-10-01 20:33:21.998889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:27.059 [2024-10-01 20:33:21.998924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:35:27.059 [2024-10-01 20:33:21.998937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.894 ms 00:35:27.059 [2024-10-01 20:33:21.998945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:27.059 [2024-10-01 20:33:21.999516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:27.059 [2024-10-01 20:33:21.999536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:35:27.059 [2024-10-01 20:33:21.999547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.533 ms 00:35:27.059 [2024-10-01 20:33:21.999555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:27.059 [2024-10-01 20:33:22.064143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:27.059 [2024-10-01 20:33:22.064191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:35:27.059 [2024-10-01 20:33:22.064211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 64.551 ms 00:35:27.059 [2024-10-01 20:33:22.064219] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:27.059 [2024-10-01 20:33:22.088399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:27.059 [2024-10-01 20:33:22.088444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:35:27.059 [2024-10-01 20:33:22.088458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.100 ms 00:35:27.059 [2024-10-01 20:33:22.088466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:27.059 [2024-10-01 20:33:22.112455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:27.059 [2024-10-01 20:33:22.112495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:35:27.059 [2024-10-01 20:33:22.112507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.947 ms 00:35:27.059 [2024-10-01 20:33:22.112515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:27.059 [2024-10-01 20:33:22.135176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:27.059 [2024-10-01 20:33:22.135213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:35:27.059 [2024-10-01 20:33:22.135226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.622 ms 00:35:27.059 [2024-10-01 20:33:22.135234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:27.059 [2024-10-01 20:33:22.135275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:27.059 [2024-10-01 20:33:22.135287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:35:27.059 [2024-10-01 20:33:22.135300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:35:27.059 [2024-10-01 20:33:22.135308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:27.059 [2024-10-01 20:33:22.135384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:27.059 [2024-10-01 20:33:22.135393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:35:27.059 [2024-10-01 20:33:22.135403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:35:27.059 [2024-10-01 20:33:22.135410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:27.059 [2024-10-01 20:33:22.136339] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2524.188 ms, result 0 00:35:27.059 { 00:35:27.059 "name": "ftl0", 00:35:27.059 "uuid": "2cbc39b4-488a-417a-8a44-dbbd61f528fe" 00:35:27.059 } 00:35:27.059 20:33:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:35:27.059 20:33:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:35:27.317 20:33:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:35:27.318 20:33:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:35:27.318 20:33:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:35:27.575 /dev/nbd0 00:35:27.575 20:33:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:35:27.575 20:33:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:35:27.575 20:33:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@869 -- # local i 00:35:27.575 20:33:22 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@871 -- # (( i = 1 )) 00:35:27.575 20:33:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:35:27.575 20:33:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:35:27.575 20:33:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@873 -- # break 00:35:27.575 20:33:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:35:27.576 20:33:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:35:27.576 20:33:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:35:27.576 1+0 records in 00:35:27.576 1+0 records out 00:35:27.576 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000170704 s, 24.0 MB/s 00:35:27.576 20:33:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:35:27.576 20:33:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@886 -- # size=4096 00:35:27.576 20:33:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:35:27.576 20:33:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:35:27.576 20:33:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@889 -- # return 0 00:35:27.576 20:33:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:35:27.576 [2024-10-01 20:33:22.746151] Starting SPDK v25.01-pre git sha1 0c2005fb5 / DPDK 24.03.0 initialization... 00:35:27.576 [2024-10-01 20:33:22.746264] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76778 ] 00:35:27.834 [2024-10-01 20:33:22.894649] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:28.091 [2024-10-01 20:33:23.079920] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:35:34.136  Copying: 191/1024 [MB] (191 MBps) Copying: 388/1024 [MB] (197 MBps) Copying: 597/1024 [MB] (208 MBps) Copying: 849/1024 [MB] (252 MBps) Copying: 1024/1024 [MB] (average 218 MBps) 00:35:34.136 00:35:34.136 20:33:29 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:35:36.034 20:33:30 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:35:36.034 [2024-10-01 20:33:30.889565] Starting SPDK v25.01-pre git sha1 0c2005fb5 / DPDK 24.03.0 initialization... 
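The run above walks dirty_shutdown.sh steps 64-77: the bdev subsystem config is serialized to JSON for later standalone use, ftl0 is exposed over NBD, and a 1 GiB random file is staged, checksummed, and written through /dev/nbd0. A minimal sketch of that flow, assuming a running spdk_tgt with an ftl0 bdev (paths shortened from the log; the poll delay is an assumption, since the waitfornbd trace above shows only the loop, grep, and dd probe):

  # Serialize the bdev config so a standalone app can re-create ftl0
  # later (dirty_shutdown.sh lines 64-66).
  {
    echo '{"subsystems": ['
    scripts/rpc.py save_subsystem_config -n bdev
    echo ']}'
  } > ftl.json

  # Expose ftl0 as a kernel block device (lines 70-71).
  modprobe nbd
  scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0

  # Wait for nbd0 to register, then probe one 4 KiB block, roughly as
  # the traced waitfornbd helper does above.
  for ((i = 1; i <= 20; i++)); do
    grep -q -w nbd0 /proc/partitions && break
    sleep 0.1   # assumed back-off; the trace does not show the delay
  done
  dd if=/dev/nbd0 of=nbdtest bs=4096 count=1 iflag=direct

  # Stage, checksum, and write 1 GiB of random data (lines 75-77).
  spdk_dd -m 0x2 --if=/dev/urandom --of=testfile --bs=4096 --count=262144
  md5sum testfile
  spdk_dd -m 0x2 --if=testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct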
00:35:36.034 [2024-10-01 20:33:30.889685] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76865 ] 00:35:36.034 [2024-10-01 20:33:31.036457] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:36.034 [2024-10-01 20:33:31.197034] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:36:11.169  Copying: 29/1024 [MB] (29 MBps) Copying: 58/1024 [MB] (28 MBps) Copying: 87/1024 [MB] (29 MBps) Copying: 116/1024 [MB] (28 MBps) Copying: 143/1024 [MB] (27 MBps) Copying: 172/1024 [MB] (28 MBps) Copying: 201/1024 [MB] (29 MBps) Copying: 230/1024 [MB] (28 MBps) Copying: 259/1024 [MB] (28 MBps) Copying: 286/1024 [MB] (27 MBps) Copying: 315/1024 [MB] (28 MBps) Copying: 345/1024 [MB] (30 MBps) Copying: 374/1024 [MB] (29 MBps) Copying: 404/1024 [MB] (30 MBps) Copying: 434/1024 [MB] (29 MBps) Copying: 466/1024 [MB] (32 MBps) Copying: 495/1024 [MB] (29 MBps) Copying: 524/1024 [MB] (29 MBps) Copying: 553/1024 [MB] (29 MBps) Copying: 584/1024 [MB] (30 MBps) Copying: 613/1024 [MB] (29 MBps) Copying: 645/1024 [MB] (32 MBps) Copying: 681/1024 [MB] (35 MBps) Copying: 717/1024 [MB] (35 MBps) Copying: 747/1024 [MB] (29 MBps) Copying: 777/1024 [MB] (29 MBps) Copying: 812/1024 [MB] (34 MBps) Copying: 846/1024 [MB] (34 MBps) Copying: 874/1024 [MB] (28 MBps) Copying: 904/1024 [MB] (30 MBps) Copying: 933/1024 [MB] (28 MBps) Copying: 962/1024 [MB] (29 MBps) Copying: 996/1024 [MB] (33 MBps) Copying: 1024/1024 [MB] (average 30 MBps) 00:36:11.169 00:36:11.425 20:34:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:36:11.425 20:34:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:36:11.425 20:34:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:36:11.683 [2024-10-01 20:34:06.821938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:11.683 [2024-10-01 20:34:06.821987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:36:11.683 [2024-10-01 20:34:06.822001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:36:11.683 [2024-10-01 20:34:06.822011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:11.683 [2024-10-01 20:34:06.822034] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:36:11.683 [2024-10-01 20:34:06.824632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:11.683 [2024-10-01 20:34:06.824664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:36:11.683 [2024-10-01 20:34:06.824677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.579 ms 00:36:11.683 [2024-10-01 20:34:06.824685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:11.683 [2024-10-01 20:34:06.826508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:11.683 [2024-10-01 20:34:06.826540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:36:11.683 [2024-10-01 20:34:06.826551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.781 ms 00:36:11.683 [2024-10-01 20:34:06.826559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:36:11.683 [2024-10-01 20:34:06.840542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:11.683 [2024-10-01 20:34:06.840576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:36:11.683 [2024-10-01 20:34:06.840588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.962 ms 00:36:11.683 [2024-10-01 20:34:06.840595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:11.683 [2024-10-01 20:34:06.846752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:11.683 [2024-10-01 20:34:06.846783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:36:11.683 [2024-10-01 20:34:06.846795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.124 ms 00:36:11.683 [2024-10-01 20:34:06.846804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:11.683 [2024-10-01 20:34:06.869991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:11.683 [2024-10-01 20:34:06.870023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:36:11.683 [2024-10-01 20:34:06.870035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.119 ms 00:36:11.683 [2024-10-01 20:34:06.870043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:11.683 [2024-10-01 20:34:06.883989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:11.684 [2024-10-01 20:34:06.884023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:36:11.684 [2024-10-01 20:34:06.884037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.906 ms 00:36:11.684 [2024-10-01 20:34:06.884045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:11.684 [2024-10-01 20:34:06.884189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:11.684 [2024-10-01 20:34:06.884201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:36:11.684 [2024-10-01 20:34:06.884212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.107 ms 00:36:11.684 [2024-10-01 20:34:06.884221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:11.943 [2024-10-01 20:34:06.906598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:11.943 [2024-10-01 20:34:06.906629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:36:11.943 [2024-10-01 20:34:06.906641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.357 ms 00:36:11.943 [2024-10-01 20:34:06.906648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:11.943 [2024-10-01 20:34:06.929224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:11.943 [2024-10-01 20:34:06.929254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:36:11.943 [2024-10-01 20:34:06.929265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.539 ms 00:36:11.943 [2024-10-01 20:34:06.929273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:11.943 [2024-10-01 20:34:06.951325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:11.943 [2024-10-01 20:34:06.951359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:36:11.943 [2024-10-01 20:34:06.951371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.014 ms 00:36:11.943 [2024-10-01 
20:34:06.951378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:11.943 [2024-10-01 20:34:06.973646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:11.943 [2024-10-01 20:34:06.973677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:36:11.943 [2024-10-01 20:34:06.973689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.198 ms 00:36:11.943 [2024-10-01 20:34:06.973705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:11.943 [2024-10-01 20:34:06.973739] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:36:11.943 [2024-10-01 20:34:06.973753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:36:11.943 [2024-10-01 20:34:06.973767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:36:11.943 [2024-10-01 20:34:06.973775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:36:11.943 [2024-10-01 20:34:06.973784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:36:11.943 [2024-10-01 20:34:06.973791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:36:11.943 [2024-10-01 20:34:06.973801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:36:11.943 [2024-10-01 20:34:06.973808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:36:11.943 [2024-10-01 20:34:06.973819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:36:11.943 [2024-10-01 20:34:06.973826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:36:11.943 [2024-10-01 20:34:06.973835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:36:11.943 [2024-10-01 20:34:06.973843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:36:11.943 [2024-10-01 20:34:06.973852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:36:11.943 [2024-10-01 20:34:06.973859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:36:11.943 [2024-10-01 20:34:06.973872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:36:11.943 [2024-10-01 20:34:06.973879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:36:11.943 [2024-10-01 20:34:06.973888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:36:11.943 [2024-10-01 20:34:06.973895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:36:11.943 [2024-10-01 20:34:06.973904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:36:11.943 [2024-10-01 20:34:06.973911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:36:11.943 [2024-10-01 20:34:06.973922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:36:11.943 [2024-10-01 20:34:06.973929] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:36:11.943 [2024-10-01 20:34:06.973938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:36:11.943 [2024-10-01 20:34:06.973945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:36:11.943 [2024-10-01 20:34:06.973955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:36:11.943 [2024-10-01 20:34:06.973963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:36:11.943 [2024-10-01 20:34:06.973971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:36:11.943 [2024-10-01 20:34:06.973979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:36:11.943 [2024-10-01 20:34:06.973987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:36:11.943 [2024-10-01 20:34:06.973995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:36:11.944 [2024-10-01 20:34:06.974005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:36:11.944 [2024-10-01 20:34:06.974012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:36:11.944 [2024-10-01 20:34:06.974022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:36:11.944 [2024-10-01 20:34:06.974029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:36:11.944 [2024-10-01 20:34:06.974038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:36:11.944 [2024-10-01 20:34:06.974045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:36:11.944 [2024-10-01 20:34:06.974054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:36:11.944 [2024-10-01 20:34:06.974061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:36:11.944 [2024-10-01 20:34:06.974070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:36:11.944 [2024-10-01 20:34:06.974077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:36:11.944 [2024-10-01 20:34:06.974088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:36:11.944 [2024-10-01 20:34:06.974095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:36:11.944 [2024-10-01 20:34:06.974104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:36:11.944 [2024-10-01 20:34:06.974111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:36:11.944 [2024-10-01 20:34:06.974120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:36:11.944 [2024-10-01 20:34:06.974127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:36:11.944 
[2024-10-01 20:34:06.974136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:36:11.944 [2024-10-01 20:34:06.974144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:36:11.944 [2024-10-01 20:34:06.974153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:36:11.944 [2024-10-01 20:34:06.974160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:36:11.944 [2024-10-01 20:34:06.974169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:36:11.944 [2024-10-01 20:34:06.974176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:36:11.944 [2024-10-01 20:34:06.974185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:36:11.944 [2024-10-01 20:34:06.974198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:36:11.944 [2024-10-01 20:34:06.974207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:36:11.944 [2024-10-01 20:34:06.974215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:36:11.944 [2024-10-01 20:34:06.974225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:36:11.944 [2024-10-01 20:34:06.974232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:36:11.944 [2024-10-01 20:34:06.974240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:36:11.944 [2024-10-01 20:34:06.974247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:36:11.944 [2024-10-01 20:34:06.974257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:36:11.944 [2024-10-01 20:34:06.974264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:36:11.944 [2024-10-01 20:34:06.974272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:36:11.944 [2024-10-01 20:34:06.974280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:36:11.944 [2024-10-01 20:34:06.974290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:36:11.944 [2024-10-01 20:34:06.974298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:36:11.944 [2024-10-01 20:34:06.974307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:36:11.944 [2024-10-01 20:34:06.974314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:36:11.944 [2024-10-01 20:34:06.974323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:36:11.944 [2024-10-01 20:34:06.974338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:36:11.944 [2024-10-01 20:34:06.974347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 
state: free 00:36:11.944 [2024-10-01 20:34:06.974355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:36:11.944 [2024-10-01 20:34:06.974366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:36:11.944 [2024-10-01 20:34:06.974373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:36:11.944 [2024-10-01 20:34:06.974382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:36:11.944 [2024-10-01 20:34:06.974389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:36:11.944 [2024-10-01 20:34:06.974399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:36:11.944 [2024-10-01 20:34:06.974406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:36:11.944 [2024-10-01 20:34:06.974416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:36:11.944 [2024-10-01 20:34:06.974423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:36:11.944 [2024-10-01 20:34:06.974432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:36:11.944 [2024-10-01 20:34:06.974439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:36:11.944 [2024-10-01 20:34:06.974448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:36:11.944 [2024-10-01 20:34:06.974455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:36:11.944 [2024-10-01 20:34:06.974463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:36:11.944 [2024-10-01 20:34:06.974471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:36:11.944 [2024-10-01 20:34:06.974479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:36:11.944 [2024-10-01 20:34:06.974487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:36:11.944 [2024-10-01 20:34:06.974497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:36:11.944 [2024-10-01 20:34:06.974504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:36:11.944 [2024-10-01 20:34:06.974512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:36:11.944 [2024-10-01 20:34:06.974520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:36:11.944 [2024-10-01 20:34:06.974528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:36:11.944 [2024-10-01 20:34:06.974535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:36:11.944 [2024-10-01 20:34:06.974544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:36:11.944 [2024-10-01 20:34:06.974552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 
0 / 261120 wr_cnt: 0 state: free 00:36:11.944 [2024-10-01 20:34:06.974562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:36:11.944 [2024-10-01 20:34:06.974569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:36:11.944 [2024-10-01 20:34:06.974579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:36:11.944 [2024-10-01 20:34:06.974587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:36:11.944 [2024-10-01 20:34:06.974596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:36:11.944 [2024-10-01 20:34:06.974612] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:36:11.944 [2024-10-01 20:34:06.974621] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 2cbc39b4-488a-417a-8a44-dbbd61f528fe 00:36:11.944 [2024-10-01 20:34:06.974629] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:36:11.944 [2024-10-01 20:34:06.974639] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:36:11.944 [2024-10-01 20:34:06.974646] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:36:11.944 [2024-10-01 20:34:06.974654] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:36:11.944 [2024-10-01 20:34:06.974661] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:36:11.944 [2024-10-01 20:34:06.974670] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:36:11.944 [2024-10-01 20:34:06.974677] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:36:11.944 [2024-10-01 20:34:06.974684] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:36:11.944 [2024-10-01 20:34:06.974699] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:36:11.944 [2024-10-01 20:34:06.974708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:11.944 [2024-10-01 20:34:06.974715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:36:11.944 [2024-10-01 20:34:06.974724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.970 ms 00:36:11.944 [2024-10-01 20:34:06.974733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:11.944 [2024-10-01 20:34:06.987197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:11.944 [2024-10-01 20:34:06.987228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:36:11.944 [2024-10-01 20:34:06.987240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.432 ms 00:36:11.944 [2024-10-01 20:34:06.987248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:11.944 [2024-10-01 20:34:06.987597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:11.944 [2024-10-01 20:34:06.987614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:36:11.944 [2024-10-01 20:34:06.987624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.327 ms 00:36:11.944 [2024-10-01 20:34:06.987631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:11.944 [2024-10-01 20:34:07.025235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:11.945 [2024-10-01 20:34:07.025272] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize reloc 00:36:11.945 [2024-10-01 20:34:07.025284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:11.945 [2024-10-01 20:34:07.025292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:11.945 [2024-10-01 20:34:07.025350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:11.945 [2024-10-01 20:34:07.025361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:36:11.945 [2024-10-01 20:34:07.025372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:11.945 [2024-10-01 20:34:07.025379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:11.945 [2024-10-01 20:34:07.025460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:11.945 [2024-10-01 20:34:07.025470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:36:11.945 [2024-10-01 20:34:07.025480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:11.945 [2024-10-01 20:34:07.025486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:11.945 [2024-10-01 20:34:07.025506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:11.945 [2024-10-01 20:34:07.025514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:36:11.945 [2024-10-01 20:34:07.025523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:11.945 [2024-10-01 20:34:07.025531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:11.945 [2024-10-01 20:34:07.101514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:11.945 [2024-10-01 20:34:07.101558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:36:11.945 [2024-10-01 20:34:07.101571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:11.945 [2024-10-01 20:34:07.101579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:12.203 [2024-10-01 20:34:07.163705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:12.203 [2024-10-01 20:34:07.163751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:36:12.203 [2024-10-01 20:34:07.163765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:12.203 [2024-10-01 20:34:07.163773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:12.203 [2024-10-01 20:34:07.163843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:12.203 [2024-10-01 20:34:07.163852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:36:12.203 [2024-10-01 20:34:07.163862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:12.203 [2024-10-01 20:34:07.163870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:12.203 [2024-10-01 20:34:07.163930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:12.203 [2024-10-01 20:34:07.163939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:36:12.203 [2024-10-01 20:34:07.163949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:12.203 [2024-10-01 20:34:07.163956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:12.203 [2024-10-01 20:34:07.164047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
00:36:12.203 [2024-10-01 20:34:07.164056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:36:12.203 [2024-10-01 20:34:07.164066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:12.203 [2024-10-01 20:34:07.164073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:12.203 [2024-10-01 20:34:07.164104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:12.203 [2024-10-01 20:34:07.164113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:36:12.203 [2024-10-01 20:34:07.164122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:12.203 [2024-10-01 20:34:07.164129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:12.203 [2024-10-01 20:34:07.164166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:12.203 [2024-10-01 20:34:07.164174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:36:12.203 [2024-10-01 20:34:07.164184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:12.203 [2024-10-01 20:34:07.164191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:12.203 [2024-10-01 20:34:07.164233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:12.203 [2024-10-01 20:34:07.164242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:36:12.203 [2024-10-01 20:34:07.164252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:12.203 [2024-10-01 20:34:07.164258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:12.203 [2024-10-01 20:34:07.164382] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 342.414 ms, result 0 00:36:12.203 true 00:36:12.203 20:34:07 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 76641 00:36:12.203 20:34:07 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid76641 00:36:12.203 20:34:07 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:36:12.203 [2024-10-01 20:34:07.247774] Starting SPDK v25.01-pre git sha1 0c2005fb5 / DPDK 24.03.0 initialization... 
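The records above show the clean detach (sync /dev/nbd0, nbd_stop_disk, bdev_ftl_unload finishing the 'FTL shutdown' management process with result 0), after which the test kills the target outright rather than shutting it down, then stages a second random file. A sketch of those steps, using this run's spdk_tgt PID (76641):

  # Hard-kill the SPDK target and drop its trace shm file
  # (dirty_shutdown.sh lines 83-84); the shell's "76641 Killed"
  # notice above comes from this.
  kill -9 76641
  rm -f /dev/shm/spdk_tgt_trace.pid76641

  # Stage a second 1 GiB random file for the post-kill write (line 87).
  spdk_dd --if=/dev/urandom --of=testfile2 --bs=4096 --count=262144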
00:36:12.203 [2024-10-01 20:34:07.247889] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77255 ] 00:36:12.204 [2024-10-01 20:34:07.388597] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:12.461 [2024-10-01 20:34:07.568197] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:36:18.083  Copying: 218/1024 [MB] (218 MBps) Copying: 474/1024 [MB] (255 MBps) Copying: 730/1024 [MB] (256 MBps) Copying: 984/1024 [MB] (253 MBps) Copying: 1024/1024 [MB] (average 245 MBps) 00:36:18.083 00:36:18.083 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 76641 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:36:18.083 20:34:13 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:36:18.083 [2024-10-01 20:34:13.219464] Starting SPDK v25.01-pre git sha1 0c2005fb5 / DPDK 24.03.0 initialization... 00:36:18.083 [2024-10-01 20:34:13.219585] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77319 ] 00:36:18.340 [2024-10-01 20:34:13.369204] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:18.597 [2024-10-01 20:34:13.558186] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:36:18.854 [2024-10-01 20:34:13.997165] bdev.c:8310:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:36:18.854 [2024-10-01 20:34:13.997365] bdev.c:8310:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:36:18.854 [2024-10-01 20:34:14.061133] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:36:18.854 [2024-10-01 20:34:14.061466] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:36:18.854 [2024-10-01 20:34:14.061709] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:36:19.112 [2024-10-01 20:34:14.236346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:19.112 [2024-10-01 20:34:14.236720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:36:19.112 [2024-10-01 20:34:14.236795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:36:19.112 [2024-10-01 20:34:14.236842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:19.112 [2024-10-01 20:34:14.236944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:19.112 [2024-10-01 20:34:14.236999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:36:19.112 [2024-10-01 20:34:14.237043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:36:19.112 [2024-10-01 20:34:14.237088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:19.112 [2024-10-01 20:34:14.237140] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:36:19.112 [2024-10-01 20:34:14.237835] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 
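The @88 step above runs spdk_dd with --json instead of against a live target: the process instantiates the bdev stack itself from the ftl.json captured earlier, which fits the retried "unable to find bdev ... nvc0n1" notices and the blobstore recovery seen while the stack comes back up after the SIGKILL. The shape of the command, flags as in the log:

  # Standalone spdk_dd: build the bdevs from the saved config, then
  # write testfile2 into ftl0 past the region used by the first copy
  # (dirty_shutdown.sh line 88).
  spdk_dd --if=testfile2 --ob=ftl0 --count=262144 --seek=262144 \
          --json=ftl.json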
00:36:19.112 [2024-10-01 20:34:14.237923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:19.112 [2024-10-01 20:34:14.237965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:36:19.112 [2024-10-01 20:34:14.238009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.789 ms 00:36:19.112 [2024-10-01 20:34:14.238047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:19.112 [2024-10-01 20:34:14.239212] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:36:19.112 [2024-10-01 20:34:14.251838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:19.112 [2024-10-01 20:34:14.251962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:36:19.112 [2024-10-01 20:34:14.252014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.628 ms 00:36:19.112 [2024-10-01 20:34:14.252056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:19.112 [2024-10-01 20:34:14.252139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:19.112 [2024-10-01 20:34:14.252184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:36:19.112 [2024-10-01 20:34:14.252230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:36:19.112 [2024-10-01 20:34:14.252267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:19.112 [2024-10-01 20:34:14.258500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:19.112 [2024-10-01 20:34:14.258583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:36:19.112 [2024-10-01 20:34:14.258632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.158 ms 00:36:19.112 [2024-10-01 20:34:14.258672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:19.112 [2024-10-01 20:34:14.258789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:19.112 [2024-10-01 20:34:14.258830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:36:19.112 [2024-10-01 20:34:14.258872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:36:19.112 [2024-10-01 20:34:14.258918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:19.112 [2024-10-01 20:34:14.258997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:19.112 [2024-10-01 20:34:14.259047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:36:19.112 [2024-10-01 20:34:14.259090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:36:19.112 [2024-10-01 20:34:14.259140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:19.112 [2024-10-01 20:34:14.259189] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:36:19.112 [2024-10-01 20:34:14.262701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:19.112 [2024-10-01 20:34:14.262797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:36:19.112 [2024-10-01 20:34:14.262843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.518 ms 00:36:19.112 [2024-10-01 20:34:14.262878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:19.112 [2024-10-01 20:34:14.262941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:19.112 
[2024-10-01 20:34:14.262976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:36:19.112 [2024-10-01 20:34:14.263018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:36:19.113 [2024-10-01 20:34:14.263056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:19.113 [2024-10-01 20:34:14.263109] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:36:19.113 [2024-10-01 20:34:14.263177] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:36:19.113 [2024-10-01 20:34:14.263252] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:36:19.113 [2024-10-01 20:34:14.263301] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:36:19.113 [2024-10-01 20:34:14.263406] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:36:19.113 [2024-10-01 20:34:14.263423] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:36:19.113 [2024-10-01 20:34:14.263433] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:36:19.113 [2024-10-01 20:34:14.263443] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:36:19.113 [2024-10-01 20:34:14.263452] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:36:19.113 [2024-10-01 20:34:14.263460] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:36:19.113 [2024-10-01 20:34:14.263467] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:36:19.113 [2024-10-01 20:34:14.263475] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:36:19.113 [2024-10-01 20:34:14.263481] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:36:19.113 [2024-10-01 20:34:14.263489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:19.113 [2024-10-01 20:34:14.263498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:36:19.113 [2024-10-01 20:34:14.263506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.382 ms 00:36:19.113 [2024-10-01 20:34:14.263512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:19.113 [2024-10-01 20:34:14.263599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:19.113 [2024-10-01 20:34:14.263613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:36:19.113 [2024-10-01 20:34:14.263621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:36:19.113 [2024-10-01 20:34:14.263628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:19.113 [2024-10-01 20:34:14.263748] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:36:19.113 [2024-10-01 20:34:14.263764] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:36:19.113 [2024-10-01 20:34:14.263775] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:36:19.113 [2024-10-01 20:34:14.263783] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:36:19.113 [2024-10-01 20:34:14.263790] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:36:19.113 [2024-10-01 20:34:14.263797] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:36:19.113 [2024-10-01 20:34:14.263804] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:36:19.113 [2024-10-01 20:34:14.263811] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:36:19.113 [2024-10-01 20:34:14.263818] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:36:19.113 [2024-10-01 20:34:14.263830] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:36:19.113 [2024-10-01 20:34:14.263836] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:36:19.113 [2024-10-01 20:34:14.263843] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:36:19.113 [2024-10-01 20:34:14.263850] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:36:19.113 [2024-10-01 20:34:14.263856] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:36:19.113 [2024-10-01 20:34:14.263863] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:36:19.113 [2024-10-01 20:34:14.263870] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:36:19.113 [2024-10-01 20:34:14.263877] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:36:19.113 [2024-10-01 20:34:14.263883] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:36:19.113 [2024-10-01 20:34:14.263890] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:36:19.113 [2024-10-01 20:34:14.263897] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:36:19.113 [2024-10-01 20:34:14.263903] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:36:19.113 [2024-10-01 20:34:14.263910] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:36:19.113 [2024-10-01 20:34:14.263917] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:36:19.113 [2024-10-01 20:34:14.263923] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:36:19.113 [2024-10-01 20:34:14.263929] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:36:19.113 [2024-10-01 20:34:14.263936] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:36:19.113 [2024-10-01 20:34:14.263942] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:36:19.113 [2024-10-01 20:34:14.263948] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:36:19.113 [2024-10-01 20:34:14.263954] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:36:19.113 [2024-10-01 20:34:14.263961] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:36:19.113 [2024-10-01 20:34:14.263967] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:36:19.113 [2024-10-01 20:34:14.263974] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:36:19.113 [2024-10-01 20:34:14.263980] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:36:19.113 [2024-10-01 20:34:14.263986] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:36:19.113 [2024-10-01 20:34:14.263992] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:36:19.113 [2024-10-01 20:34:14.263998] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:36:19.113 [2024-10-01 
20:34:14.264005] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:36:19.113 [2024-10-01 20:34:14.264011] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:36:19.113 [2024-10-01 20:34:14.264018] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:36:19.113 [2024-10-01 20:34:14.264024] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:36:19.113 [2024-10-01 20:34:14.264030] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:36:19.113 [2024-10-01 20:34:14.264037] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:36:19.113 [2024-10-01 20:34:14.264043] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:36:19.113 [2024-10-01 20:34:14.264049] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:36:19.113 [2024-10-01 20:34:14.264057] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:36:19.113 [2024-10-01 20:34:14.264064] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:36:19.113 [2024-10-01 20:34:14.264071] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:36:19.113 [2024-10-01 20:34:14.264079] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:36:19.113 [2024-10-01 20:34:14.264086] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:36:19.113 [2024-10-01 20:34:14.264092] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:36:19.113 [2024-10-01 20:34:14.264099] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:36:19.113 [2024-10-01 20:34:14.264105] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:36:19.113 [2024-10-01 20:34:14.264112] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:36:19.113 [2024-10-01 20:34:14.264120] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:36:19.113 [2024-10-01 20:34:14.264129] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:36:19.113 [2024-10-01 20:34:14.264137] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:36:19.113 [2024-10-01 20:34:14.264144] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:36:19.113 [2024-10-01 20:34:14.264152] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:36:19.113 [2024-10-01 20:34:14.264159] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:36:19.113 [2024-10-01 20:34:14.264166] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:36:19.113 [2024-10-01 20:34:14.264173] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:36:19.113 [2024-10-01 20:34:14.264179] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:36:19.113 [2024-10-01 20:34:14.264186] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:36:19.113 [2024-10-01 20:34:14.264193] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:36:19.113 [2024-10-01 20:34:14.264200] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:36:19.113 [2024-10-01 20:34:14.264207] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:36:19.114 [2024-10-01 20:34:14.264213] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:36:19.114 [2024-10-01 20:34:14.264220] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:36:19.114 [2024-10-01 20:34:14.264227] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:36:19.114 [2024-10-01 20:34:14.264235] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:36:19.114 [2024-10-01 20:34:14.264242] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:36:19.114 [2024-10-01 20:34:14.264252] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:36:19.114 [2024-10-01 20:34:14.264259] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:36:19.114 [2024-10-01 20:34:14.264266] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:36:19.114 [2024-10-01 20:34:14.264274] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:36:19.114 [2024-10-01 20:34:14.264281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:19.114 [2024-10-01 20:34:14.264288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:36:19.114 [2024-10-01 20:34:14.264295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.621 ms 00:36:19.114 [2024-10-01 20:34:14.264302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:19.114 [2024-10-01 20:34:14.291090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:19.114 [2024-10-01 20:34:14.291129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:36:19.114 [2024-10-01 20:34:14.291141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.742 ms 00:36:19.114 [2024-10-01 20:34:14.291149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:19.114 [2024-10-01 20:34:14.291234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:19.114 [2024-10-01 20:34:14.291243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:36:19.114 [2024-10-01 20:34:14.291251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:36:19.114 [2024-10-01 20:34:14.291258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:19.372 [2024-10-01 20:34:14.323396] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:19.372 [2024-10-01 20:34:14.323432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:36:19.372 [2024-10-01 20:34:14.323443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.077 ms 00:36:19.372 [2024-10-01 20:34:14.323450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:19.372 [2024-10-01 20:34:14.323490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:19.372 [2024-10-01 20:34:14.323498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:36:19.372 [2024-10-01 20:34:14.323505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:36:19.372 [2024-10-01 20:34:14.323513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:19.372 [2024-10-01 20:34:14.323885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:19.372 [2024-10-01 20:34:14.323902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:36:19.372 [2024-10-01 20:34:14.323910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.317 ms 00:36:19.372 [2024-10-01 20:34:14.323918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:19.372 [2024-10-01 20:34:14.324037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:19.372 [2024-10-01 20:34:14.324045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:36:19.372 [2024-10-01 20:34:14.324053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.103 ms 00:36:19.372 [2024-10-01 20:34:14.324060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:19.372 [2024-10-01 20:34:14.337227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:19.372 [2024-10-01 20:34:14.337258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:36:19.372 [2024-10-01 20:34:14.337268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.147 ms 00:36:19.372 [2024-10-01 20:34:14.337275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:19.372 [2024-10-01 20:34:14.349442] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:36:19.372 [2024-10-01 20:34:14.349476] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:36:19.372 [2024-10-01 20:34:14.349489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:19.372 [2024-10-01 20:34:14.349497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:36:19.372 [2024-10-01 20:34:14.349506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.096 ms 00:36:19.372 [2024-10-01 20:34:14.349514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:19.372 [2024-10-01 20:34:14.373698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:19.372 [2024-10-01 20:34:14.373741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:36:19.372 [2024-10-01 20:34:14.373757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.136 ms 00:36:19.372 [2024-10-01 20:34:14.373765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:19.372 [2024-10-01 20:34:14.385260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:19.372 
[2024-10-01 20:34:14.385289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:36:19.372 [2024-10-01 20:34:14.385299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.463 ms 00:36:19.372 [2024-10-01 20:34:14.385307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:19.372 [2024-10-01 20:34:14.396790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:19.372 [2024-10-01 20:34:14.396817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:36:19.372 [2024-10-01 20:34:14.396827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.445 ms 00:36:19.372 [2024-10-01 20:34:14.396835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:19.372 [2024-10-01 20:34:14.397464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:19.372 [2024-10-01 20:34:14.397477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:36:19.372 [2024-10-01 20:34:14.397485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.542 ms 00:36:19.372 [2024-10-01 20:34:14.397492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:19.372 [2024-10-01 20:34:14.452837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:19.372 [2024-10-01 20:34:14.452883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:36:19.372 [2024-10-01 20:34:14.452897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 55.319 ms 00:36:19.372 [2024-10-01 20:34:14.452905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:19.372 [2024-10-01 20:34:14.464037] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:36:19.372 [2024-10-01 20:34:14.466808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:19.372 [2024-10-01 20:34:14.466833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:36:19.372 [2024-10-01 20:34:14.466846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.850 ms 00:36:19.372 [2024-10-01 20:34:14.466855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:19.372 [2024-10-01 20:34:14.466948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:19.372 [2024-10-01 20:34:14.466959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:36:19.372 [2024-10-01 20:34:14.466967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:36:19.372 [2024-10-01 20:34:14.466974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:19.372 [2024-10-01 20:34:14.467040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:19.373 [2024-10-01 20:34:14.467050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:36:19.373 [2024-10-01 20:34:14.467058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:36:19.373 [2024-10-01 20:34:14.467065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:19.373 [2024-10-01 20:34:14.467083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:19.373 [2024-10-01 20:34:14.467091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:36:19.373 [2024-10-01 20:34:14.467099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:36:19.373 
[2024-10-01 20:34:14.467107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:19.373 [2024-10-01 20:34:14.467137] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:36:19.373 [2024-10-01 20:34:14.467146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:19.373 [2024-10-01 20:34:14.467156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:36:19.373 [2024-10-01 20:34:14.467163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:36:19.373 [2024-10-01 20:34:14.467170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:19.373 [2024-10-01 20:34:14.490055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:19.373 [2024-10-01 20:34:14.490091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:36:19.373 [2024-10-01 20:34:14.490103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.867 ms 00:36:19.373 [2024-10-01 20:34:14.490111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:19.373 [2024-10-01 20:34:14.490189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:19.373 [2024-10-01 20:34:14.490198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:36:19.373 [2024-10-01 20:34:14.490207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:36:19.373 [2024-10-01 20:34:14.490215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:19.373 [2024-10-01 20:34:14.491205] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 254.460 ms, result 0 00:36:43.179  Copying: 1024/1024 [MB] (average 43 MBps) [2024-10-01 20:34:38.265325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:43.179 [2024-10-01 20:34:38.265382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:36:43.179 [2024-10-01 20:34:38.265397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:36:43.179 [2024-10-01 20:34:38.265407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:43.179 [2024-10-01 20:34:38.266372] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:36:43.179 [2024-10-01 20:34:38.271080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:43.179 [2024-10-01 20:34:38.271116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:36:43.179 [2024-10-01 20:34:38.271127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.670 ms 00:36:43.179 [2024-10-01 20:34:38.271136]
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:43.179 [2024-10-01 20:34:38.283631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:43.179 [2024-10-01 20:34:38.283680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:36:43.179 [2024-10-01 20:34:38.283701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.324 ms 00:36:43.179 [2024-10-01 20:34:38.283709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:43.179 [2024-10-01 20:34:38.302465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:43.179 [2024-10-01 20:34:38.302514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:36:43.179 [2024-10-01 20:34:38.302526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.739 ms 00:36:43.179 [2024-10-01 20:34:38.302534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:43.179 [2024-10-01 20:34:38.308744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:43.179 [2024-10-01 20:34:38.308776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:36:43.179 [2024-10-01 20:34:38.308787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.182 ms 00:36:43.179 [2024-10-01 20:34:38.308795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:43.179 [2024-10-01 20:34:38.332261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:43.179 [2024-10-01 20:34:38.332303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:36:43.179 [2024-10-01 20:34:38.332316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.410 ms 00:36:43.179 [2024-10-01 20:34:38.332324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:43.179 [2024-10-01 20:34:38.346026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:43.179 [2024-10-01 20:34:38.346072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:36:43.179 [2024-10-01 20:34:38.346090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.661 ms 00:36:43.179 [2024-10-01 20:34:38.346099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:43.438 [2024-10-01 20:34:38.399494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:43.438 [2024-10-01 20:34:38.399544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:36:43.438 [2024-10-01 20:34:38.399555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 53.351 ms 00:36:43.438 [2024-10-01 20:34:38.399563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:43.438 [2024-10-01 20:34:38.422957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:43.438 [2024-10-01 20:34:38.423002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:36:43.438 [2024-10-01 20:34:38.423014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.378 ms 00:36:43.438 [2024-10-01 20:34:38.423022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:43.438 [2024-10-01 20:34:38.445280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:43.438 [2024-10-01 20:34:38.445336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:36:43.438 [2024-10-01 20:34:38.445348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 22.222 ms 00:36:43.438 [2024-10-01 20:34:38.445356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:43.438 [2024-10-01 20:34:38.467434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:43.438 [2024-10-01 20:34:38.467474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:36:43.438 [2024-10-01 20:34:38.467486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.044 ms 00:36:43.438 [2024-10-01 20:34:38.467494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:43.438 [2024-10-01 20:34:38.489421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:43.438 [2024-10-01 20:34:38.489461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:36:43.438 [2024-10-01 20:34:38.489473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.866 ms 00:36:43.438 [2024-10-01 20:34:38.489481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:43.438 [2024-10-01 20:34:38.489514] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:36:43.438 [2024-10-01 20:34:38.489528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 129024 / 261120 wr_cnt: 1 state: open 00:36:43.438 [2024-10-01 20:34:38.489538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:36:43.438 [2024-10-01 20:34:38.489547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:36:43.438 [2024-10-01 20:34:38.489554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:36:43.438 [2024-10-01 20:34:38.489562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:36:43.438 [2024-10-01 20:34:38.489569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:36:43.439 [2024-10-01 20:34:38.489577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:36:43.439 [2024-10-01 20:34:38.489585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:36:43.439 [2024-10-01 20:34:38.489592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:36:43.439 [2024-10-01 20:34:38.489599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:36:43.439 [2024-10-01 20:34:38.489607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:36:43.439 [2024-10-01 20:34:38.489614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:36:43.439 [2024-10-01 20:34:38.489622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:36:43.439 [2024-10-01 20:34:38.489630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:36:43.439 [2024-10-01 20:34:38.489637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:36:43.439 [2024-10-01 20:34:38.489644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:36:43.439 [2024-10-01 20:34:38.489651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 
wr_cnt: 0 state: free 00:36:43.439 [2024-10-01 20:34:38.489659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:36:43.439 [2024-10-01 20:34:38.489667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:36:43.439 [2024-10-01 20:34:38.489675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:36:43.439 [2024-10-01 20:34:38.489682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:36:43.439 [2024-10-01 20:34:38.489698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:36:43.439 [2024-10-01 20:34:38.489706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:36:43.439 [2024-10-01 20:34:38.489714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:36:43.439 [2024-10-01 20:34:38.489722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:36:43.439 [2024-10-01 20:34:38.489729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:36:43.439 [2024-10-01 20:34:38.489737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:36:43.439 [2024-10-01 20:34:38.489745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:36:43.439 [2024-10-01 20:34:38.489752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:36:43.439 [2024-10-01 20:34:38.489786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:36:43.439 [2024-10-01 20:34:38.489794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:36:43.439 [2024-10-01 20:34:38.489808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:36:43.439 [2024-10-01 20:34:38.489816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:36:43.439 [2024-10-01 20:34:38.489824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:36:43.439 [2024-10-01 20:34:38.489832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:36:43.439 [2024-10-01 20:34:38.489842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:36:43.439 [2024-10-01 20:34:38.489849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:36:43.439 [2024-10-01 20:34:38.489857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:36:43.439 [2024-10-01 20:34:38.489865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:36:43.439 [2024-10-01 20:34:38.489872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:36:43.439 [2024-10-01 20:34:38.489880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:36:43.439 [2024-10-01 20:34:38.489887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 42: 0 / 261120 wr_cnt: 0 state: free 00:36:43.439 [2024-10-01 20:34:38.489895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:36:43.439 [2024-10-01 20:34:38.489902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:36:43.439 [2024-10-01 20:34:38.489910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:36:43.439 [2024-10-01 20:34:38.489917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:36:43.439 [2024-10-01 20:34:38.489925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:36:43.439 [2024-10-01 20:34:38.489932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:36:43.439 [2024-10-01 20:34:38.489940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:36:43.439 [2024-10-01 20:34:38.489948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:36:43.439 [2024-10-01 20:34:38.489955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:36:43.439 [2024-10-01 20:34:38.489963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:36:43.439 [2024-10-01 20:34:38.489971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:36:43.439 [2024-10-01 20:34:38.489978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:36:43.439 [2024-10-01 20:34:38.489986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:36:43.439 [2024-10-01 20:34:38.489993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:36:43.439 [2024-10-01 20:34:38.490000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:36:43.439 [2024-10-01 20:34:38.490007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:36:43.439 [2024-10-01 20:34:38.490015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:36:43.439 [2024-10-01 20:34:38.490022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:36:43.439 [2024-10-01 20:34:38.490029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:36:43.439 [2024-10-01 20:34:38.490037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:36:43.439 [2024-10-01 20:34:38.490044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:36:43.439 [2024-10-01 20:34:38.490052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:36:43.439 [2024-10-01 20:34:38.490059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:36:43.439 [2024-10-01 20:34:38.490066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:36:43.439 [2024-10-01 20:34:38.490074] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:36:43.439 [2024-10-01 20:34:38.490086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:36:43.439 [2024-10-01 20:34:38.490093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:36:43.439 [2024-10-01 20:34:38.490101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:36:43.439 [2024-10-01 20:34:38.490108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:36:43.439 [2024-10-01 20:34:38.490116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:36:43.439 [2024-10-01 20:34:38.490123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:36:43.439 [2024-10-01 20:34:38.490131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:36:43.439 [2024-10-01 20:34:38.490138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:36:43.439 [2024-10-01 20:34:38.490146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:36:43.439 [2024-10-01 20:34:38.490153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:36:43.439 [2024-10-01 20:34:38.490160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:36:43.439 [2024-10-01 20:34:38.490167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:36:43.439 [2024-10-01 20:34:38.490175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:36:43.439 [2024-10-01 20:34:38.490183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:36:43.439 [2024-10-01 20:34:38.490190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:36:43.439 [2024-10-01 20:34:38.490206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:36:43.439 [2024-10-01 20:34:38.490213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:36:43.439 [2024-10-01 20:34:38.490221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:36:43.439 [2024-10-01 20:34:38.490228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:36:43.439 [2024-10-01 20:34:38.490236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:36:43.439 [2024-10-01 20:34:38.490243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:36:43.439 [2024-10-01 20:34:38.490251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:36:43.439 [2024-10-01 20:34:38.490258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:36:43.439 [2024-10-01 20:34:38.490266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:36:43.439 [2024-10-01 20:34:38.490274] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:36:43.439 [2024-10-01 20:34:38.490282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:36:43.439 [2024-10-01 20:34:38.490290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:36:43.439 [2024-10-01 20:34:38.490298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:36:43.439 [2024-10-01 20:34:38.490306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:36:43.440 [2024-10-01 20:34:38.490314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:36:43.440 [2024-10-01 20:34:38.490322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:36:43.440 [2024-10-01 20:34:38.490329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:36:43.440 [2024-10-01 20:34:38.490337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:36:43.440 [2024-10-01 20:34:38.490353] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:36:43.440 [2024-10-01 20:34:38.490361] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 2cbc39b4-488a-417a-8a44-dbbd61f528fe 00:36:43.440 [2024-10-01 20:34:38.490369] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 129024 00:36:43.440 [2024-10-01 20:34:38.490376] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 129984 00:36:43.440 [2024-10-01 20:34:38.490384] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 129024 00:36:43.440 [2024-10-01 20:34:38.490392] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0074 00:36:43.440 [2024-10-01 20:34:38.490398] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:36:43.440 [2024-10-01 20:34:38.490406] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:36:43.440 [2024-10-01 20:34:38.490420] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:36:43.440 [2024-10-01 20:34:38.490427] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:36:43.440 [2024-10-01 20:34:38.490433] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:36:43.440 [2024-10-01 20:34:38.490440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:43.440 [2024-10-01 20:34:38.490450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:36:43.440 [2024-10-01 20:34:38.490459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.927 ms 00:36:43.440 [2024-10-01 20:34:38.490466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:43.440 [2024-10-01 20:34:38.502981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:43.440 [2024-10-01 20:34:38.503017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:36:43.440 [2024-10-01 20:34:38.503029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.499 ms 00:36:43.440 [2024-10-01 20:34:38.503037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:43.440 [2024-10-01 20:34:38.503398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:43.440 [2024-10-01 20:34:38.503412] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:36:43.440 [2024-10-01 20:34:38.503420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.341 ms 00:36:43.440 [2024-10-01 20:34:38.503428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:43.440 [2024-10-01 20:34:38.531942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:43.440 [2024-10-01 20:34:38.531986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:36:43.440 [2024-10-01 20:34:38.531998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:43.440 [2024-10-01 20:34:38.532010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:43.440 [2024-10-01 20:34:38.532077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:43.440 [2024-10-01 20:34:38.532086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:36:43.440 [2024-10-01 20:34:38.532094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:43.440 [2024-10-01 20:34:38.532102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:43.440 [2024-10-01 20:34:38.532168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:43.440 [2024-10-01 20:34:38.532179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:36:43.440 [2024-10-01 20:34:38.532187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:43.440 [2024-10-01 20:34:38.532195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:43.440 [2024-10-01 20:34:38.532214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:43.440 [2024-10-01 20:34:38.532222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:36:43.440 [2024-10-01 20:34:38.532230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:43.440 [2024-10-01 20:34:38.532239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:43.440 [2024-10-01 20:34:38.610961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:43.440 [2024-10-01 20:34:38.611012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:36:43.440 [2024-10-01 20:34:38.611023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:43.440 [2024-10-01 20:34:38.611031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:43.698 [2024-10-01 20:34:38.674425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:43.698 [2024-10-01 20:34:38.674481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:36:43.698 [2024-10-01 20:34:38.674493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:43.698 [2024-10-01 20:34:38.674501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:43.698 [2024-10-01 20:34:38.674571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:43.698 [2024-10-01 20:34:38.674581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:36:43.698 [2024-10-01 20:34:38.674589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:43.698 [2024-10-01 20:34:38.674596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:43.698 [2024-10-01 20:34:38.674631] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:43.698 [2024-10-01 20:34:38.674644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:36:43.698 [2024-10-01 20:34:38.674651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:43.698 [2024-10-01 20:34:38.674659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:43.698 [2024-10-01 20:34:38.674757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:43.698 [2024-10-01 20:34:38.674767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:36:43.698 [2024-10-01 20:34:38.674774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:43.698 [2024-10-01 20:34:38.674781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:43.698 [2024-10-01 20:34:38.674808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:43.698 [2024-10-01 20:34:38.674817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:36:43.698 [2024-10-01 20:34:38.674829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:43.698 [2024-10-01 20:34:38.674836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:43.698 [2024-10-01 20:34:38.674869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:43.698 [2024-10-01 20:34:38.674878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:36:43.698 [2024-10-01 20:34:38.674885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:43.698 [2024-10-01 20:34:38.674892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:43.698 [2024-10-01 20:34:38.674931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:43.698 [2024-10-01 20:34:38.674943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:36:43.698 [2024-10-01 20:34:38.674950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:43.698 [2024-10-01 20:34:38.674958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:43.698 [2024-10-01 20:34:38.675062] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 412.529 ms, result 0 00:36:46.985 00:36:46.985 00:36:46.985 20:34:42 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:36:49.510 20:34:44 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:36:49.510 [2024-10-01 20:34:44.269114] Starting SPDK v25.01-pre git sha1 0c2005fb5 / DPDK 24.03.0 initialization... 
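The two xtrace lines above are the heart of the dirty-shutdown check: dirty_shutdown.sh takes an md5sum of the reference file, then uses spdk_dd to read the test data back out of ftl0 so the data that survives the dirty shutdown can be checksummed and compared. Several figures reported in this log can be cross-checked against one another; the Python sketch below does so, assuming a 4 KiB FTL block size (inferred from --count=262144 copying exactly 1024 MiB, since the block size itself is never printed here).

FTL_BLOCK_SIZE = 4096  # bytes; inferred from the log, not printed in it

# spdk_dd --ib=ftl0 --count=262144: total data read back through FTL
total_mib = 262144 * FTL_BLOCK_SIZE / 2**20
assert total_mib == 1024.0      # matches "Copying: 1024/1024 [MB]"

# Write amplification factor from the "Dump statistics" section above
waf = 129984 / 129024           # total writes / user writes
assert round(waf, 4) == 1.0074  # matches "WAF: 1.0074"

# L2P table footprint: entries * address size
l2p_mib = 20971520 * 4 / 2**20
assert l2p_mib == 80.0          # matches "Region l2p ... blocks: 80.00 MiB"

The numbers agree: a clean 1 GiB of user data, write amplification of barely 1.0074 (129984 total writes for 129024 user writes), and an 80 MiB l2p region sized exactly for 20971520 four-byte entries.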
00:36:49.510 [2024-10-01 20:34:44.269232] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77628 ] 00:36:49.510 [2024-10-01 20:34:44.419242] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:49.510 [2024-10-01 20:34:44.609181] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:36:50.094 [2024-10-01 20:34:45.053638] bdev.c:8310:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:36:50.094 [2024-10-01 20:34:45.053719] bdev.c:8310:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:36:50.094 [2024-10-01 20:34:45.254851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:50.094 [2024-10-01 20:34:45.254899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:36:50.094 [2024-10-01 20:34:45.254911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:36:50.094 [2024-10-01 20:34:45.254921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:50.094 [2024-10-01 20:34:45.254965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:50.094 [2024-10-01 20:34:45.254975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:36:50.094 [2024-10-01 20:34:45.254983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:36:50.094 [2024-10-01 20:34:45.254990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:50.094 [2024-10-01 20:34:45.255006] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:36:50.094 [2024-10-01 20:34:45.255651] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:36:50.094 [2024-10-01 20:34:45.255672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:50.094 [2024-10-01 20:34:45.255680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:36:50.094 [2024-10-01 20:34:45.255688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.670 ms 00:36:50.094 [2024-10-01 20:34:45.255706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:50.094 [2024-10-01 20:34:45.256837] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:36:50.094 [2024-10-01 20:34:45.269389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:50.094 [2024-10-01 20:34:45.269433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:36:50.094 [2024-10-01 20:34:45.269445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.553 ms 00:36:50.094 [2024-10-01 20:34:45.269452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:50.094 [2024-10-01 20:34:45.269506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:50.094 [2024-10-01 20:34:45.269516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:36:50.094 [2024-10-01 20:34:45.269524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:36:50.094 [2024-10-01 20:34:45.269531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:50.094 [2024-10-01 20:34:45.275708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
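Every management step in these logs follows the same trace_step pattern from mngt/ftl_mngt.c: an Action record, then name:, duration:, and status: records for that step. That regularity makes it easy to total where startup time goes. A minimal sketch, assuming the log has first been split back into one record per line as the console originally emitted it (step_durations is a hypothetical helper, not an SPDK API):

import re

# 'name:' and 'duration:' records as printed by mngt/ftl_mngt.c:trace_step
NAME_RE = re.compile(r"trace_step: \*NOTICE\*: \[FTL\]\[\w+\] name: (.+?)\s*$")
DUR_RE = re.compile(r"trace_step: \*NOTICE\*: \[FTL\]\[\w+\] duration: ([\d.]+) ms")

def step_durations(lines):
    """Pair each 'name:' record with the 'duration:' record that follows it."""
    totals, pending = {}, None
    for line in lines:
        m = NAME_RE.search(line)
        if m:
            pending = m.group(1)
            continue
        m = DUR_RE.search(line)
        if m and pending is not None:
            totals[pending] = totals.get(pending, 0.0) + float(m.group(1))
            pending = None
    return totals

Applied to the first startup above, the handful of multi-millisecond steps (Restore P2L checkpoints at 55.319 ms, Initialize NV cache at 32.077 ms, Initialize metadata at 26.742 ms, Restore valid map metadata at 24.136 ms, Set FTL dirty state at 22.867 ms) account for much of the 254.460 ms 'FTL startup' total reported there.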
00:36:50.094 [2024-10-01 20:34:45.275737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:36:50.094 [2024-10-01 20:34:45.275746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.120 ms 00:36:50.094 [2024-10-01 20:34:45.275753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:50.094 [2024-10-01 20:34:45.275824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:50.094 [2024-10-01 20:34:45.275833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:36:50.094 [2024-10-01 20:34:45.275841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:36:50.094 [2024-10-01 20:34:45.275848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:50.094 [2024-10-01 20:34:45.275890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:50.094 [2024-10-01 20:34:45.275899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:36:50.094 [2024-10-01 20:34:45.275907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:36:50.094 [2024-10-01 20:34:45.275914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:50.094 [2024-10-01 20:34:45.275934] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:36:50.094 [2024-10-01 20:34:45.279548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:50.094 [2024-10-01 20:34:45.279576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:36:50.094 [2024-10-01 20:34:45.279585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.618 ms 00:36:50.094 [2024-10-01 20:34:45.279592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:50.094 [2024-10-01 20:34:45.279620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:50.094 [2024-10-01 20:34:45.279628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:36:50.094 [2024-10-01 20:34:45.279636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:36:50.094 [2024-10-01 20:34:45.279645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:50.094 [2024-10-01 20:34:45.279672] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:36:50.094 [2024-10-01 20:34:45.279699] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:36:50.094 [2024-10-01 20:34:45.279734] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:36:50.094 [2024-10-01 20:34:45.279748] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:36:50.094 [2024-10-01 20:34:45.279850] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:36:50.094 [2024-10-01 20:34:45.279860] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:36:50.094 [2024-10-01 20:34:45.279873] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:36:50.094 [2024-10-01 20:34:45.279883] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:36:50.094 [2024-10-01 20:34:45.279891] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:36:50.094 [2024-10-01 20:34:45.279899] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:36:50.094 [2024-10-01 20:34:45.279906] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:36:50.094 [2024-10-01 20:34:45.279914] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:36:50.094 [2024-10-01 20:34:45.279920] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:36:50.094 [2024-10-01 20:34:45.279928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:50.094 [2024-10-01 20:34:45.279935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:36:50.094 [2024-10-01 20:34:45.279942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.258 ms 00:36:50.094 [2024-10-01 20:34:45.279949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:50.094 [2024-10-01 20:34:45.280032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:50.094 [2024-10-01 20:34:45.280040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:36:50.094 [2024-10-01 20:34:45.280047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:36:50.094 [2024-10-01 20:34:45.280054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:50.094 [2024-10-01 20:34:45.280164] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:36:50.094 [2024-10-01 20:34:45.280175] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:36:50.094 [2024-10-01 20:34:45.280183] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:36:50.094 [2024-10-01 20:34:45.280190] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:36:50.094 [2024-10-01 20:34:45.280198] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:36:50.094 [2024-10-01 20:34:45.280205] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:36:50.094 [2024-10-01 20:34:45.280211] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:36:50.094 [2024-10-01 20:34:45.280218] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:36:50.094 [2024-10-01 20:34:45.280224] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:36:50.095 [2024-10-01 20:34:45.280230] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:36:50.095 [2024-10-01 20:34:45.280237] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:36:50.095 [2024-10-01 20:34:45.280243] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:36:50.095 [2024-10-01 20:34:45.280250] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:36:50.095 [2024-10-01 20:34:45.280261] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:36:50.095 [2024-10-01 20:34:45.280268] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:36:50.095 [2024-10-01 20:34:45.280274] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:36:50.095 [2024-10-01 20:34:45.280280] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:36:50.095 [2024-10-01 20:34:45.280286] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:36:50.095 [2024-10-01 20:34:45.280294] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:36:50.095 [2024-10-01 20:34:45.280301] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:36:50.095 [2024-10-01 20:34:45.280307] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:36:50.095 [2024-10-01 20:34:45.280313] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:36:50.095 [2024-10-01 20:34:45.280320] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:36:50.095 [2024-10-01 20:34:45.280326] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:36:50.095 [2024-10-01 20:34:45.280332] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:36:50.095 [2024-10-01 20:34:45.280338] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:36:50.095 [2024-10-01 20:34:45.280345] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:36:50.095 [2024-10-01 20:34:45.280351] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:36:50.095 [2024-10-01 20:34:45.280357] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:36:50.095 [2024-10-01 20:34:45.280364] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:36:50.095 [2024-10-01 20:34:45.280370] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:36:50.095 [2024-10-01 20:34:45.280377] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:36:50.095 [2024-10-01 20:34:45.280383] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:36:50.095 [2024-10-01 20:34:45.280389] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:36:50.095 [2024-10-01 20:34:45.280395] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:36:50.095 [2024-10-01 20:34:45.280402] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:36:50.095 [2024-10-01 20:34:45.280408] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:36:50.095 [2024-10-01 20:34:45.280414] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:36:50.095 [2024-10-01 20:34:45.280421] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:36:50.095 [2024-10-01 20:34:45.280427] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:36:50.095 [2024-10-01 20:34:45.280434] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:36:50.095 [2024-10-01 20:34:45.280440] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:36:50.095 [2024-10-01 20:34:45.280446] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:36:50.095 [2024-10-01 20:34:45.280453] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:36:50.095 [2024-10-01 20:34:45.280461] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:36:50.095 [2024-10-01 20:34:45.280470] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:36:50.095 [2024-10-01 20:34:45.280476] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:36:50.095 [2024-10-01 20:34:45.280483] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:36:50.095 [2024-10-01 20:34:45.280490] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:36:50.095 [2024-10-01 20:34:45.280496] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:36:50.095 
[2024-10-01 20:34:45.280505] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:36:50.095 [2024-10-01 20:34:45.280512] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:36:50.095 [2024-10-01 20:34:45.280518] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:36:50.095 [2024-10-01 20:34:45.280526] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:36:50.095 [2024-10-01 20:34:45.280536] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:36:50.095 [2024-10-01 20:34:45.280544] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:36:50.095 [2024-10-01 20:34:45.280551] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:36:50.095 [2024-10-01 20:34:45.280558] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:36:50.095 [2024-10-01 20:34:45.280565] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:36:50.095 [2024-10-01 20:34:45.280572] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:36:50.095 [2024-10-01 20:34:45.280578] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:36:50.095 [2024-10-01 20:34:45.280585] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:36:50.095 [2024-10-01 20:34:45.280592] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:36:50.095 [2024-10-01 20:34:45.280598] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:36:50.095 [2024-10-01 20:34:45.280605] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:36:50.095 [2024-10-01 20:34:45.280612] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:36:50.095 [2024-10-01 20:34:45.280619] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:36:50.095 [2024-10-01 20:34:45.280626] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:36:50.095 [2024-10-01 20:34:45.280633] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:36:50.095 [2024-10-01 20:34:45.280640] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:36:50.095 [2024-10-01 20:34:45.280647] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:36:50.095 [2024-10-01 20:34:45.280655] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:36:50.095 [2024-10-01 20:34:45.280662] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:36:50.095 [2024-10-01 20:34:45.280669] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:36:50.095 [2024-10-01 20:34:45.280676] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:36:50.095 [2024-10-01 20:34:45.280683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:50.095 [2024-10-01 20:34:45.280700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:36:50.095 [2024-10-01 20:34:45.280708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.586 ms 00:36:50.095 [2024-10-01 20:34:45.280715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:50.352 [2024-10-01 20:34:45.307164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:50.352 [2024-10-01 20:34:45.307202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:36:50.352 [2024-10-01 20:34:45.307213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.406 ms 00:36:50.352 [2024-10-01 20:34:45.307222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:50.352 [2024-10-01 20:34:45.307304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:50.352 [2024-10-01 20:34:45.307314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:36:50.352 [2024-10-01 20:34:45.307322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:36:50.352 [2024-10-01 20:34:45.307331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:50.352 [2024-10-01 20:34:45.339738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:50.352 [2024-10-01 20:34:45.339775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:36:50.352 [2024-10-01 20:34:45.339786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.358 ms 00:36:50.352 [2024-10-01 20:34:45.339793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:50.352 [2024-10-01 20:34:45.339820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:50.352 [2024-10-01 20:34:45.339828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:36:50.352 [2024-10-01 20:34:45.339836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:36:50.352 [2024-10-01 20:34:45.339843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:50.352 [2024-10-01 20:34:45.340263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:50.352 [2024-10-01 20:34:45.340278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:36:50.352 [2024-10-01 20:34:45.340287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.372 ms 00:36:50.352 [2024-10-01 20:34:45.340300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:50.352 [2024-10-01 20:34:45.340427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:50.352 [2024-10-01 20:34:45.340435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:36:50.352 [2024-10-01 20:34:45.340443] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.112 ms 00:36:50.352 [2024-10-01 20:34:45.340450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:50.352 [2024-10-01 20:34:45.353754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:50.352 [2024-10-01 20:34:45.353785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:36:50.352 [2024-10-01 20:34:45.353795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.284 ms 00:36:50.353 [2024-10-01 20:34:45.353802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:50.353 [2024-10-01 20:34:45.366327] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:36:50.353 [2024-10-01 20:34:45.366362] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:36:50.353 [2024-10-01 20:34:45.366373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:50.353 [2024-10-01 20:34:45.366381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:36:50.353 [2024-10-01 20:34:45.366389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.469 ms 00:36:50.353 [2024-10-01 20:34:45.366397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:50.353 [2024-10-01 20:34:45.390529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:50.353 [2024-10-01 20:34:45.390567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:36:50.353 [2024-10-01 20:34:45.390579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.097 ms 00:36:50.353 [2024-10-01 20:34:45.390588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:50.353 [2024-10-01 20:34:45.402137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:50.353 [2024-10-01 20:34:45.402171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:36:50.353 [2024-10-01 20:34:45.402186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.524 ms 00:36:50.353 [2024-10-01 20:34:45.402193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:50.353 [2024-10-01 20:34:45.413297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:50.353 [2024-10-01 20:34:45.413330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:36:50.353 [2024-10-01 20:34:45.413340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.071 ms 00:36:50.353 [2024-10-01 20:34:45.413347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:50.353 [2024-10-01 20:34:45.413986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:50.353 [2024-10-01 20:34:45.414013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:36:50.353 [2024-10-01 20:34:45.414022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.556 ms 00:36:50.353 [2024-10-01 20:34:45.414029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:50.353 [2024-10-01 20:34:45.470494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:50.353 [2024-10-01 20:34:45.470538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:36:50.353 [2024-10-01 20:34:45.470550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 56.447 ms 00:36:50.353 [2024-10-01 20:34:45.470558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:50.353 [2024-10-01 20:34:45.480886] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:36:50.353 [2024-10-01 20:34:45.483549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:50.353 [2024-10-01 20:34:45.483582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:36:50.353 [2024-10-01 20:34:45.483593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.947 ms 00:36:50.353 [2024-10-01 20:34:45.483607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:50.353 [2024-10-01 20:34:45.483705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:50.353 [2024-10-01 20:34:45.483716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:36:50.353 [2024-10-01 20:34:45.483724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:36:50.353 [2024-10-01 20:34:45.483731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:50.353 [2024-10-01 20:34:45.485341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:50.353 [2024-10-01 20:34:45.485374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:36:50.353 [2024-10-01 20:34:45.485385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.573 ms 00:36:50.353 [2024-10-01 20:34:45.485393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:50.353 [2024-10-01 20:34:45.485421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:50.353 [2024-10-01 20:34:45.485430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:36:50.353 [2024-10-01 20:34:45.485438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:36:50.353 [2024-10-01 20:34:45.485445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:50.353 [2024-10-01 20:34:45.485477] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:36:50.353 [2024-10-01 20:34:45.485487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:50.353 [2024-10-01 20:34:45.485494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:36:50.353 [2024-10-01 20:34:45.485505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:36:50.353 [2024-10-01 20:34:45.485512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:50.353 [2024-10-01 20:34:45.508267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:50.353 [2024-10-01 20:34:45.508302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:36:50.353 [2024-10-01 20:34:45.508313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.738 ms 00:36:50.353 [2024-10-01 20:34:45.508321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:50.353 [2024-10-01 20:34:45.508389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:50.353 [2024-10-01 20:34:45.508399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:36:50.353 [2024-10-01 20:34:45.508407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:36:50.353 [2024-10-01 20:34:45.508415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
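(Aside, not part of the log: the superblock dump above reports each region in raw block units, blk_offs/blk_sz, while the layout dump reports offsets and sizes in MiB; the two agree once the block counts are scaled by the FTL block size. A minimal back-of-the-envelope check, assuming the 4 KiB FTL_BLOCK_SIZE that SPDK's FTL uses — a sketch, not SPDK code:

#include <stdio.h>
#include <stdint.h>

/* Convert the hex blk_sz values from the superblock dump to MiB,
 * assuming a 4 KiB FTL block. */
#define FTL_BLOCK_SIZE 4096ULL

static double blocks_to_mib(uint64_t blocks)
{
	return (double)(blocks * FTL_BLOCK_SIZE) / (1024 * 1024);
}

int main(void)
{
	/* Region type:0x2 (L2P): blk_sz 0x5000 -> 80.00 MiB, matching the
	 * "Region l2p ... blocks: 80.00 MiB" entry, i.e. 20971520 L2P
	 * entries at an L2P address size of 4 bytes each. */
	printf("l2p:      %9.2f MiB\n", blocks_to_mib(0x5000));
	/* Region type:0x9 (user data on the base dev): blk_sz 0x1900000
	 * -> 102400.00 MiB, matching "Region data_btm ... blocks:
	 * 102400.00 MiB". */
	printf("data_btm: %9.2f MiB\n", blocks_to_mib(0x1900000));
	return 0;
}
)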
00:36:50.353 [2024-10-01 20:34:45.509339] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 254.097 ms, result 0 00:37:12.940  Copying: 1024/1024 [MB] (average 47 MBps)[2024-10-01 20:35:07.862005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:12.940 [2024-10-01 20:35:07.862368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:37:12.940 [2024-10-01 20:35:07.862448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:37:12.940 [2024-10-01 20:35:07.862476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:12.940 [2024-10-01 20:35:07.862567] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:37:12.940 [2024-10-01 20:35:07.865594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:12.940 [2024-10-01 20:35:07.865742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:37:12.940 [2024-10-01 20:35:07.865808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.978 ms 00:37:12.940 [2024-10-01 20:35:07.865832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:12.940 [2024-10-01 20:35:07.866090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:12.940 [2024-10-01 20:35:07.866190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:37:12.940 [2024-10-01 20:35:07.866248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.220 ms 00:37:12.940 [2024-10-01 20:35:07.866272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:12.940 [2024-10-01 20:35:07.875880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:12.940 [2024-10-01 20:35:07.876004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:37:12.940 [2024-10-01 20:35:07.876105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.574 ms 00:37:12.940 [2024-10-01 20:35:07.876134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:12.940 [2024-10-01 20:35:07.882360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:12.940 [2024-10-01 20:35:07.882484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:37:12.940 [2024-10-01 20:35:07.882557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.187 ms 00:37:12.941 [2024-10-01 20:35:07.882615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:12.941 [2024-10-01 20:35:07.910492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:12.941 [2024-10-01 20:35:07.910658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:37:12.941
[2024-10-01 20:35:07.910731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.789 ms 00:37:12.941 [2024-10-01 20:35:07.910755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:12.941 [2024-10-01 20:35:07.924576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:12.941 [2024-10-01 20:35:07.924733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:37:12.941 [2024-10-01 20:35:07.924831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.687 ms 00:37:12.941 [2024-10-01 20:35:07.924853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:12.941 [2024-10-01 20:35:07.926780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:12.941 [2024-10-01 20:35:07.926879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:37:12.941 [2024-10-01 20:35:07.926925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.850 ms 00:37:12.941 [2024-10-01 20:35:07.926946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:12.941 [2024-10-01 20:35:07.950722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:12.941 [2024-10-01 20:35:07.950877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:37:12.941 [2024-10-01 20:35:07.950926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.747 ms 00:37:12.941 [2024-10-01 20:35:07.950948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:12.941 [2024-10-01 20:35:07.974029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:12.941 [2024-10-01 20:35:07.974339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:37:12.941 [2024-10-01 20:35:07.974390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.985 ms 00:37:12.941 [2024-10-01 20:35:07.974412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:12.941 [2024-10-01 20:35:07.997151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:12.941 [2024-10-01 20:35:07.997311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:37:12.941 [2024-10-01 20:35:07.997360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.646 ms 00:37:12.941 [2024-10-01 20:35:07.997381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:12.941 [2024-10-01 20:35:08.019917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:12.941 [2024-10-01 20:35:08.020105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:37:12.941 [2024-10-01 20:35:08.020154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.418 ms 00:37:12.941 [2024-10-01 20:35:08.020176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:12.941 [2024-10-01 20:35:08.020262] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:37:12.941 [2024-10-01 20:35:08.020291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:37:12.941 [2024-10-01 20:35:08.020329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:37:12.941 [2024-10-01 20:35:08.020358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:37:12.941 [2024-10-01 20:35:08.020435] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:37:12.941 [2024-10-01 20:35:08.020457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:37:12.941 [2024-10-01 20:35:08.020465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:37:12.941 [2024-10-01 20:35:08.020473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:37:12.941 [2024-10-01 20:35:08.020481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:37:12.941 [2024-10-01 20:35:08.020488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:37:12.941 [2024-10-01 20:35:08.020496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:37:12.941 [2024-10-01 20:35:08.020503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:37:12.941 [2024-10-01 20:35:08.020510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:37:12.941 [2024-10-01 20:35:08.020517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:37:12.941 [2024-10-01 20:35:08.020525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:37:12.941 [2024-10-01 20:35:08.020533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:37:12.941 [2024-10-01 20:35:08.020540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:37:12.941 [2024-10-01 20:35:08.020548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:37:12.941 [2024-10-01 20:35:08.020555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:37:12.941 [2024-10-01 20:35:08.020562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:37:12.941 [2024-10-01 20:35:08.020569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:37:12.941 [2024-10-01 20:35:08.020577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:37:12.941 [2024-10-01 20:35:08.020584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:37:12.941 [2024-10-01 20:35:08.020591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:37:12.941 [2024-10-01 20:35:08.020598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:37:12.941 [2024-10-01 20:35:08.020606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:37:12.941 [2024-10-01 20:35:08.020613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:37:12.941 [2024-10-01 20:35:08.020620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:37:12.941 [2024-10-01 20:35:08.020627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:37:12.941 [2024-10-01 
20:35:08.020635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:37:12.941 [2024-10-01 20:35:08.020642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:37:12.941 [2024-10-01 20:35:08.020649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:37:12.941 [2024-10-01 20:35:08.020656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:37:12.941 [2024-10-01 20:35:08.020664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:37:12.941 [2024-10-01 20:35:08.020672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:37:12.941 [2024-10-01 20:35:08.020681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:37:12.941 [2024-10-01 20:35:08.020688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:37:12.941 [2024-10-01 20:35:08.020715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:37:12.941 [2024-10-01 20:35:08.020723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:37:12.941 [2024-10-01 20:35:08.020730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:37:12.941 [2024-10-01 20:35:08.020738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:37:12.941 [2024-10-01 20:35:08.020745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:37:12.941 [2024-10-01 20:35:08.020752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:37:12.941 [2024-10-01 20:35:08.020760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:37:12.941 [2024-10-01 20:35:08.020767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:37:12.941 [2024-10-01 20:35:08.020774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:37:12.941 [2024-10-01 20:35:08.020782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:37:12.941 [2024-10-01 20:35:08.020789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:37:12.941 [2024-10-01 20:35:08.020797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:37:12.941 [2024-10-01 20:35:08.020804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:37:12.941 [2024-10-01 20:35:08.020811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:37:12.941 [2024-10-01 20:35:08.020819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:37:12.941 [2024-10-01 20:35:08.020826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:37:12.941 [2024-10-01 20:35:08.020834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 
00:37:12.941 [2024-10-01 20:35:08.020842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:37:12.941 [2024-10-01 20:35:08.020849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:37:12.941 [2024-10-01 20:35:08.020856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:37:12.941 [2024-10-01 20:35:08.020863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:37:12.941 [2024-10-01 20:35:08.020870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:37:12.941 [2024-10-01 20:35:08.020877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:37:12.941 [2024-10-01 20:35:08.020885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:37:12.941 [2024-10-01 20:35:08.020892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:37:12.941 [2024-10-01 20:35:08.020899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:37:12.941 [2024-10-01 20:35:08.020906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:37:12.941 [2024-10-01 20:35:08.020915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:37:12.941 [2024-10-01 20:35:08.020922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:37:12.941 [2024-10-01 20:35:08.020929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:37:12.942 [2024-10-01 20:35:08.020937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:37:12.942 [2024-10-01 20:35:08.020944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:37:12.942 [2024-10-01 20:35:08.020952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:37:12.942 [2024-10-01 20:35:08.020959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:37:12.942 [2024-10-01 20:35:08.020967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:37:12.942 [2024-10-01 20:35:08.020974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:37:12.942 [2024-10-01 20:35:08.020981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:37:12.942 [2024-10-01 20:35:08.020989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:37:12.942 [2024-10-01 20:35:08.020996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:37:12.942 [2024-10-01 20:35:08.021003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:37:12.942 [2024-10-01 20:35:08.021010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:37:12.942 [2024-10-01 20:35:08.021017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 
wr_cnt: 0 state: free 00:37:12.942 [2024-10-01 20:35:08.021025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:37:12.942 [2024-10-01 20:35:08.021032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:37:12.942 [2024-10-01 20:35:08.021039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:37:12.942 [2024-10-01 20:35:08.021047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:37:12.942 [2024-10-01 20:35:08.021054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:37:12.942 [2024-10-01 20:35:08.021061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:37:12.942 [2024-10-01 20:35:08.021068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:37:12.942 [2024-10-01 20:35:08.021075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:37:12.942 [2024-10-01 20:35:08.021083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:37:12.942 [2024-10-01 20:35:08.021090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:37:12.942 [2024-10-01 20:35:08.021097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:37:12.942 [2024-10-01 20:35:08.021105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:37:12.942 [2024-10-01 20:35:08.021112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:37:12.942 [2024-10-01 20:35:08.021119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:37:12.942 [2024-10-01 20:35:08.021127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:37:12.942 [2024-10-01 20:35:08.021134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:37:12.942 [2024-10-01 20:35:08.021141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:37:12.942 [2024-10-01 20:35:08.021148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:37:12.942 [2024-10-01 20:35:08.021155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:37:12.942 [2024-10-01 20:35:08.021162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:37:12.942 [2024-10-01 20:35:08.021171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:37:12.942 [2024-10-01 20:35:08.021179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:37:12.942 [2024-10-01 20:35:08.021194] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:37:12.942 [2024-10-01 20:35:08.021202] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 2cbc39b4-488a-417a-8a44-dbbd61f528fe 00:37:12.942 [2024-10-01 20:35:08.021210] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 
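(Aside, not part of the log: the statistics block that begins here can be cross-checked against the band dump above. Only bands 1 and 2 hold valid data (261120 and 1536 blocks), and their sum is exactly the 262656 valid LBAs just reported; the WAF printed a few entries below is simply total writes divided by user writes. A quick check — a sketch, not SPDK code:

#include <stdio.h>

int main(void)
{
	/* From the band dump: band 1 is full and closed, band 2 is open
	 * with 1536 valid blocks, bands 3-100 are free. */
	unsigned band1 = 261120, band2 = 1536;
	printf("valid LBAs: %u\n", band1 + band2);   /* 262656, as logged */

	/* WAF = total writes / user writes; the ~2000 extra writes are
	 * FTL-internal (metadata persists and any relocation). */
	printf("WAF: %.4f\n", 135616.0 / 133632.0);  /* ~1.0148 */
	return 0;
}
)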
00:37:12.942 [2024-10-01 20:35:08.021217] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 135616 00:37:12.942 [2024-10-01 20:35:08.021224] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 133632 00:37:12.942 [2024-10-01 20:35:08.021232] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0148 00:37:12.942 [2024-10-01 20:35:08.021239] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:37:12.942 [2024-10-01 20:35:08.021247] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:37:12.942 [2024-10-01 20:35:08.021254] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:37:12.942 [2024-10-01 20:35:08.021260] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:37:12.942 [2024-10-01 20:35:08.021267] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:37:12.942 [2024-10-01 20:35:08.021274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:12.942 [2024-10-01 20:35:08.021281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:37:12.942 [2024-10-01 20:35:08.021297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.013 ms 00:37:12.942 [2024-10-01 20:35:08.021306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:12.942 [2024-10-01 20:35:08.033724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:12.942 [2024-10-01 20:35:08.033756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:37:12.942 [2024-10-01 20:35:08.033766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.397 ms 00:37:12.942 [2024-10-01 20:35:08.033775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:12.942 [2024-10-01 20:35:08.034152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:12.942 [2024-10-01 20:35:08.034186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:37:12.942 [2024-10-01 20:35:08.034195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.339 ms 00:37:12.942 [2024-10-01 20:35:08.034202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:12.942 [2024-10-01 20:35:08.063208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:37:12.942 [2024-10-01 20:35:08.063255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:37:12.942 [2024-10-01 20:35:08.063265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:37:12.942 [2024-10-01 20:35:08.063273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:12.942 [2024-10-01 20:35:08.063337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:37:12.942 [2024-10-01 20:35:08.063348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:37:12.942 [2024-10-01 20:35:08.063355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:37:12.942 [2024-10-01 20:35:08.063362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:12.942 [2024-10-01 20:35:08.063417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:37:12.942 [2024-10-01 20:35:08.063426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:37:12.942 [2024-10-01 20:35:08.063434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:37:12.942 [2024-10-01 20:35:08.063440] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:12.942 [2024-10-01 20:35:08.063455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:37:12.942 [2024-10-01 20:35:08.063463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:37:12.942 [2024-10-01 20:35:08.063472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:37:12.942 [2024-10-01 20:35:08.063479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:12.942 [2024-10-01 20:35:08.142274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:37:12.942 [2024-10-01 20:35:08.142323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:37:12.942 [2024-10-01 20:35:08.142335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:37:12.942 [2024-10-01 20:35:08.142343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:13.201 [2024-10-01 20:35:08.204370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:37:13.201 [2024-10-01 20:35:08.204421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:37:13.201 [2024-10-01 20:35:08.204431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:37:13.201 [2024-10-01 20:35:08.204439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:13.201 [2024-10-01 20:35:08.204506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:37:13.201 [2024-10-01 20:35:08.204515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:37:13.201 [2024-10-01 20:35:08.204523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:37:13.201 [2024-10-01 20:35:08.204531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:13.201 [2024-10-01 20:35:08.204563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:37:13.201 [2024-10-01 20:35:08.204571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:37:13.201 [2024-10-01 20:35:08.204578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:37:13.201 [2024-10-01 20:35:08.204587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:13.201 [2024-10-01 20:35:08.204671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:37:13.201 [2024-10-01 20:35:08.204680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:37:13.201 [2024-10-01 20:35:08.204688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:37:13.201 [2024-10-01 20:35:08.204711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:13.201 [2024-10-01 20:35:08.204738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:37:13.201 [2024-10-01 20:35:08.204748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:37:13.201 [2024-10-01 20:35:08.204755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:37:13.201 [2024-10-01 20:35:08.204762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:13.201 [2024-10-01 20:35:08.204799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:37:13.201 [2024-10-01 20:35:08.204807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:37:13.201 [2024-10-01 20:35:08.204814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms 00:37:13.201 [2024-10-01 20:35:08.204821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:13.201 [2024-10-01 20:35:08.204859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:37:13.201 [2024-10-01 20:35:08.204868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:37:13.201 [2024-10-01 20:35:08.204876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:37:13.201 [2024-10-01 20:35:08.204885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:13.201 [2024-10-01 20:35:08.204988] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 342.964 ms, result 0 00:37:14.574 00:37:14.574 00:37:14.574 20:35:09 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:37:16.504 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:37:16.504 20:35:11 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:37:16.504 [2024-10-01 20:35:11.601341] Starting SPDK v25.01-pre git sha1 0c2005fb5 / DPDK 24.03.0 initialization... 00:37:16.504 [2024-10-01 20:35:11.601457] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77914 ] 00:37:16.808 [2024-10-01 20:35:11.751359] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:16.808 [2024-10-01 20:35:11.944568] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:37:17.383 [2024-10-01 20:35:12.384437] bdev.c:8310:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:37:17.383 [2024-10-01 20:35:12.384507] bdev.c:8310:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:37:17.383 [2024-10-01 20:35:12.537496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:17.383 [2024-10-01 20:35:12.537544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:37:17.383 [2024-10-01 20:35:12.537557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:37:17.383 [2024-10-01 20:35:12.537570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:17.383 [2024-10-01 20:35:12.537616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:17.383 [2024-10-01 20:35:12.537625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:37:17.383 [2024-10-01 20:35:12.537634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:37:17.383 [2024-10-01 20:35:12.537641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:17.383 [2024-10-01 20:35:12.537660] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:37:17.383 [2024-10-01 20:35:12.538392] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:37:17.383 [2024-10-01 20:35:12.538415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:17.383 [2024-10-01 20:35:12.538423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Open cache bdev 00:37:17.383 [2024-10-01 20:35:12.538432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.760 ms 00:37:17.383 [2024-10-01 20:35:12.538439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:17.383 [2024-10-01 20:35:12.539563] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:37:17.383 [2024-10-01 20:35:12.552312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:17.383 [2024-10-01 20:35:12.552345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:37:17.383 [2024-10-01 20:35:12.552357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.749 ms 00:37:17.383 [2024-10-01 20:35:12.552365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:17.383 [2024-10-01 20:35:12.552421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:17.383 [2024-10-01 20:35:12.552433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:37:17.383 [2024-10-01 20:35:12.552442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:37:17.383 [2024-10-01 20:35:12.552449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:17.383 [2024-10-01 20:35:12.558949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:17.383 [2024-10-01 20:35:12.558977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:37:17.383 [2024-10-01 20:35:12.558987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.441 ms 00:37:17.383 [2024-10-01 20:35:12.558994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:17.383 [2024-10-01 20:35:12.559075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:17.383 [2024-10-01 20:35:12.559084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:37:17.383 [2024-10-01 20:35:12.559093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:37:17.383 [2024-10-01 20:35:12.559100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:17.383 [2024-10-01 20:35:12.559142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:17.383 [2024-10-01 20:35:12.559157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:37:17.383 [2024-10-01 20:35:12.559165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:37:17.383 [2024-10-01 20:35:12.559172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:17.383 [2024-10-01 20:35:12.559194] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:37:17.383 [2024-10-01 20:35:12.562575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:17.383 [2024-10-01 20:35:12.562602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:37:17.383 [2024-10-01 20:35:12.562611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.386 ms 00:37:17.383 [2024-10-01 20:35:12.562618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:17.383 [2024-10-01 20:35:12.562646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:17.383 [2024-10-01 20:35:12.562654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:37:17.383 [2024-10-01 20:35:12.562663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.010 ms 00:37:17.383 [2024-10-01 20:35:12.562672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:17.383 [2024-10-01 20:35:12.562715] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:37:17.383 [2024-10-01 20:35:12.562734] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:37:17.383 [2024-10-01 20:35:12.562768] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:37:17.383 [2024-10-01 20:35:12.562782] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:37:17.383 [2024-10-01 20:35:12.562884] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:37:17.383 [2024-10-01 20:35:12.562893] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:37:17.383 [2024-10-01 20:35:12.562905] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:37:17.383 [2024-10-01 20:35:12.562917] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:37:17.383 [2024-10-01 20:35:12.562926] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:37:17.383 [2024-10-01 20:35:12.562933] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:37:17.383 [2024-10-01 20:35:12.562941] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:37:17.383 [2024-10-01 20:35:12.562948] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:37:17.383 [2024-10-01 20:35:12.562955] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:37:17.384 [2024-10-01 20:35:12.562962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:17.384 [2024-10-01 20:35:12.562970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:37:17.384 [2024-10-01 20:35:12.562978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.249 ms 00:37:17.384 [2024-10-01 20:35:12.562990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:17.384 [2024-10-01 20:35:12.563077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:17.384 [2024-10-01 20:35:12.563092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:37:17.384 [2024-10-01 20:35:12.563101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:37:17.384 [2024-10-01 20:35:12.563109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:17.384 [2024-10-01 20:35:12.563220] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:37:17.384 [2024-10-01 20:35:12.563231] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:37:17.384 [2024-10-01 20:35:12.563239] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:37:17.384 [2024-10-01 20:35:12.563246] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:37:17.384 [2024-10-01 20:35:12.563254] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:37:17.384 [2024-10-01 20:35:12.563261] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:37:17.384 [2024-10-01 
20:35:12.563268] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:37:17.384 [2024-10-01 20:35:12.563276] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:37:17.384 [2024-10-01 20:35:12.563283] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:37:17.384 [2024-10-01 20:35:12.563290] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:37:17.384 [2024-10-01 20:35:12.563297] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:37:17.384 [2024-10-01 20:35:12.563303] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:37:17.384 [2024-10-01 20:35:12.563310] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:37:17.384 [2024-10-01 20:35:12.563321] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:37:17.384 [2024-10-01 20:35:12.563328] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:37:17.384 [2024-10-01 20:35:12.563335] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:37:17.384 [2024-10-01 20:35:12.563341] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:37:17.384 [2024-10-01 20:35:12.563348] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:37:17.384 [2024-10-01 20:35:12.563354] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:37:17.384 [2024-10-01 20:35:12.563361] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:37:17.384 [2024-10-01 20:35:12.563367] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:37:17.384 [2024-10-01 20:35:12.563374] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:37:17.384 [2024-10-01 20:35:12.563381] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:37:17.384 [2024-10-01 20:35:12.563387] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:37:17.384 [2024-10-01 20:35:12.563394] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:37:17.384 [2024-10-01 20:35:12.563401] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:37:17.384 [2024-10-01 20:35:12.563407] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:37:17.384 [2024-10-01 20:35:12.563413] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:37:17.384 [2024-10-01 20:35:12.563419] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:37:17.384 [2024-10-01 20:35:12.563425] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:37:17.384 [2024-10-01 20:35:12.563431] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:37:17.384 [2024-10-01 20:35:12.563438] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:37:17.384 [2024-10-01 20:35:12.563444] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:37:17.384 [2024-10-01 20:35:12.563451] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:37:17.384 [2024-10-01 20:35:12.563458] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:37:17.384 [2024-10-01 20:35:12.563465] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:37:17.384 [2024-10-01 20:35:12.563471] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:37:17.384 [2024-10-01 20:35:12.563478] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
trim_log 00:37:17.384 [2024-10-01 20:35:12.563484] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:37:17.384 [2024-10-01 20:35:12.563491] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:37:17.384 [2024-10-01 20:35:12.563497] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:37:17.384 [2024-10-01 20:35:12.563504] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:37:17.384 [2024-10-01 20:35:12.563510] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:37:17.384 [2024-10-01 20:35:12.563517] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:37:17.384 [2024-10-01 20:35:12.563525] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:37:17.384 [2024-10-01 20:35:12.563533] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:37:17.384 [2024-10-01 20:35:12.563540] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:37:17.384 [2024-10-01 20:35:12.563547] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:37:17.384 [2024-10-01 20:35:12.563554] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:37:17.384 [2024-10-01 20:35:12.563561] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:37:17.384 [2024-10-01 20:35:12.563568] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:37:17.384 [2024-10-01 20:35:12.563577] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:37:17.384 [2024-10-01 20:35:12.563588] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:37:17.384 [2024-10-01 20:35:12.563596] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:37:17.384 [2024-10-01 20:35:12.563605] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:37:17.384 [2024-10-01 20:35:12.563614] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:37:17.384 [2024-10-01 20:35:12.563621] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:37:17.384 [2024-10-01 20:35:12.563628] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:37:17.384 [2024-10-01 20:35:12.563635] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:37:17.384 [2024-10-01 20:35:12.563642] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:37:17.384 [2024-10-01 20:35:12.563649] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:37:17.384 [2024-10-01 20:35:12.563657] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:37:17.384 [2024-10-01 20:35:12.563664] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:37:17.384 [2024-10-01 20:35:12.563670] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:37:17.384 [2024-10-01 20:35:12.563678] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:37:17.384 [2024-10-01 20:35:12.563686] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:37:17.384 [2024-10-01 20:35:12.563703] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:37:17.384 [2024-10-01 20:35:12.563711] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:37:17.384 [2024-10-01 20:35:12.563718] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:37:17.384 [2024-10-01 20:35:12.563725] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:37:17.384 [2024-10-01 20:35:12.563733] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:37:17.384 [2024-10-01 20:35:12.563742] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:37:17.384 [2024-10-01 20:35:12.563749] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:37:17.384 [2024-10-01 20:35:12.563757] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:37:17.384 [2024-10-01 20:35:12.563765] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:37:17.384 [2024-10-01 20:35:12.563772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:17.384 [2024-10-01 20:35:12.563780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:37:17.384 [2024-10-01 20:35:12.563787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.620 ms 00:37:17.384 [2024-10-01 20:35:12.563794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:17.384 [2024-10-01 20:35:12.590917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:17.384 [2024-10-01 20:35:12.590952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:37:17.384 [2024-10-01 20:35:12.590963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.080 ms 00:37:17.384 [2024-10-01 20:35:12.590971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:17.384 [2024-10-01 20:35:12.591056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:17.384 [2024-10-01 20:35:12.591065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:37:17.384 [2024-10-01 20:35:12.591073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:37:17.384 [2024-10-01 20:35:12.591080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:17.642 [2024-10-01 20:35:12.623419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:17.642 [2024-10-01 20:35:12.623452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:37:17.642 
[2024-10-01 20:35:12.623465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.286 ms 00:37:17.642 [2024-10-01 20:35:12.623473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:17.642 [2024-10-01 20:35:12.623506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:17.642 [2024-10-01 20:35:12.623514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:37:17.642 [2024-10-01 20:35:12.623522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:37:17.642 [2024-10-01 20:35:12.623529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:17.642 [2024-10-01 20:35:12.623971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:17.642 [2024-10-01 20:35:12.623992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:37:17.642 [2024-10-01 20:35:12.624001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.394 ms 00:37:17.642 [2024-10-01 20:35:12.624013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:17.642 [2024-10-01 20:35:12.624132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:17.642 [2024-10-01 20:35:12.624140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:37:17.642 [2024-10-01 20:35:12.624148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:37:17.642 [2024-10-01 20:35:12.624156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:17.642 [2024-10-01 20:35:12.637373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:17.642 [2024-10-01 20:35:12.637402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:37:17.642 [2024-10-01 20:35:12.637411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.198 ms 00:37:17.642 [2024-10-01 20:35:12.637419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:17.642 [2024-10-01 20:35:12.649846] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:37:17.642 [2024-10-01 20:35:12.649877] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:37:17.642 [2024-10-01 20:35:12.649888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:17.642 [2024-10-01 20:35:12.649896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:37:17.642 [2024-10-01 20:35:12.649904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.361 ms 00:37:17.642 [2024-10-01 20:35:12.649911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:17.642 [2024-10-01 20:35:12.679160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:17.642 [2024-10-01 20:35:12.679196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:37:17.642 [2024-10-01 20:35:12.679207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.212 ms 00:37:17.642 [2024-10-01 20:35:12.679214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:17.642 [2024-10-01 20:35:12.688605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:17.642 [2024-10-01 20:35:12.688636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:37:17.642 [2024-10-01 20:35:12.688645] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 9.353 ms 00:37:17.642 [2024-10-01 20:35:12.688650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:17.642 [2024-10-01 20:35:12.697547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:17.642 [2024-10-01 20:35:12.697575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:37:17.642 [2024-10-01 20:35:12.697583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.845 ms 00:37:17.642 [2024-10-01 20:35:12.697589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:17.642 [2024-10-01 20:35:12.698087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:17.642 [2024-10-01 20:35:12.698106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:37:17.642 [2024-10-01 20:35:12.698114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.434 ms 00:37:17.642 [2024-10-01 20:35:12.698120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:17.642 [2024-10-01 20:35:12.743880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:17.642 [2024-10-01 20:35:12.743923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:37:17.642 [2024-10-01 20:35:12.743933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.745 ms 00:37:17.642 [2024-10-01 20:35:12.743940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:17.642 [2024-10-01 20:35:12.753116] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:37:17.642 [2024-10-01 20:35:12.755737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:17.642 [2024-10-01 20:35:12.755762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:37:17.642 [2024-10-01 20:35:12.755772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.754 ms 00:37:17.642 [2024-10-01 20:35:12.755783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:17.642 [2024-10-01 20:35:12.755868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:17.642 [2024-10-01 20:35:12.755876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:37:17.642 [2024-10-01 20:35:12.755884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:37:17.642 [2024-10-01 20:35:12.755890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:17.642 [2024-10-01 20:35:12.756515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:17.642 [2024-10-01 20:35:12.756542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:37:17.642 [2024-10-01 20:35:12.756550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.594 ms 00:37:17.643 [2024-10-01 20:35:12.756558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:17.643 [2024-10-01 20:35:12.756582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:17.643 [2024-10-01 20:35:12.756589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:37:17.643 [2024-10-01 20:35:12.756595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:37:17.643 [2024-10-01 20:35:12.756601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:17.643 [2024-10-01 20:35:12.756628] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: 
[FTL][ftl0] Self test skipped 00:37:17.643 [2024-10-01 20:35:12.756636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:17.643 [2024-10-01 20:35:12.756643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:37:17.643 [2024-10-01 20:35:12.756652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:37:17.643 [2024-10-01 20:35:12.756658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:17.643 [2024-10-01 20:35:12.775138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:17.643 [2024-10-01 20:35:12.775167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:37:17.643 [2024-10-01 20:35:12.775176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.466 ms 00:37:17.643 [2024-10-01 20:35:12.775182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:17.643 [2024-10-01 20:35:12.775243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:17.643 [2024-10-01 20:35:12.775251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:37:17.643 [2024-10-01 20:35:12.775258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:37:17.643 [2024-10-01 20:35:12.775264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:17.643 [2024-10-01 20:35:12.776107] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 238.271 ms, result 0 00:37:39.060  Copying: 1024/1024 [MB] (average 48 MBps)[2024-10-01 20:35:34.262572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:39.061 [2024-10-01 20:35:34.262628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:37:39.061 [2024-10-01 20:35:34.262641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:37:39.061 [2024-10-01 20:35:34.262659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:39.061 [2024-10-01 20:35:34.262680] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:37:39.061 [2024-10-01 20:35:34.265277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:39.061 [2024-10-01 20:35:34.265305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0]
name: Stop core poller 00:37:39.061 [2024-10-01 20:35:34.265564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.196 ms 00:37:39.061 [2024-10-01 20:35:34.265571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:39.061 [2024-10-01 20:35:34.270977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:39.061 [2024-10-01 20:35:34.271000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:37:39.061 [2024-10-01 20:35:34.271010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.389 ms 00:37:39.061 [2024-10-01 20:35:34.271018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:39.319 [2024-10-01 20:35:34.277846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:39.319 [2024-10-01 20:35:34.277874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:37:39.319 [2024-10-01 20:35:34.277884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.810 ms 00:37:39.319 [2024-10-01 20:35:34.277893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:39.319 [2024-10-01 20:35:34.304171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:39.319 [2024-10-01 20:35:34.304205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:37:39.319 [2024-10-01 20:35:34.304218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.214 ms 00:37:39.319 [2024-10-01 20:35:34.304226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:39.319 [2024-10-01 20:35:34.317826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:39.319 [2024-10-01 20:35:34.317861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:37:39.319 [2024-10-01 20:35:34.317873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.578 ms 00:37:39.319 [2024-10-01 20:35:34.317881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:39.319 [2024-10-01 20:35:34.319595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:39.319 [2024-10-01 20:35:34.319622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:37:39.319 [2024-10-01 20:35:34.319630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.693 ms 00:37:39.319 [2024-10-01 20:35:34.319638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:39.319 [2024-10-01 20:35:34.341908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:39.319 [2024-10-01 20:35:34.341934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:37:39.319 [2024-10-01 20:35:34.341944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.256 ms 00:37:39.319 [2024-10-01 20:35:34.341953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:39.319 [2024-10-01 20:35:34.364552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:39.319 [2024-10-01 20:35:34.364577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:37:39.319 [2024-10-01 20:35:34.364588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.581 ms 00:37:39.319 [2024-10-01 20:35:34.364596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:39.319 [2024-10-01 20:35:34.386388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:39.319 [2024-10-01 
20:35:34.386414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:37:39.319 [2024-10-01 20:35:34.386424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.776 ms 00:37:39.319 [2024-10-01 20:35:34.386431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:39.319 [2024-10-01 20:35:34.408144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:39.319 [2024-10-01 20:35:34.408169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:37:39.319 [2024-10-01 20:35:34.408179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.674 ms 00:37:39.319 [2024-10-01 20:35:34.408186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:39.319 [2024-10-01 20:35:34.408201] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:37:39.319 [2024-10-01 20:35:34.408215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:37:39.319 [2024-10-01 20:35:34.408226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:37:39.319 [2024-10-01 20:35:34.408234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:37:39.319 [2024-10-01 20:35:34.408242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:37:39.319 [2024-10-01 20:35:34.408250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:37:39.319 [2024-10-01 20:35:34.408257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:37:39.319 [2024-10-01 20:35:34.408265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:37:39.319 [2024-10-01 20:35:34.408272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:37:39.319 [2024-10-01 20:35:34.408279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:37:39.319 [2024-10-01 20:35:34.408287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:37:39.319 [2024-10-01 20:35:34.408294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:37:39.319 [2024-10-01 20:35:34.408302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:37:39.319 [2024-10-01 20:35:34.408309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:37:39.319 [2024-10-01 20:35:34.408316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:37:39.319 [2024-10-01 20:35:34.408323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:37:39.319 [2024-10-01 20:35:34.408330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:37:39.319 [2024-10-01 20:35:34.408338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:37:39.319 [2024-10-01 20:35:34.408345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:37:39.319 [2024-10-01 20:35:34.408353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:37:39.319 [2024-10-01 20:35:34.408360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:37:39.319 [2024-10-01 20:35:34.408367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:37:39.319 [2024-10-01 20:35:34.408375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:37:39.319 [2024-10-01 20:35:34.408382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:37:39.319 [2024-10-01 20:35:34.408389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:37:39.319 [2024-10-01 20:35:34.408397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:37:39.319 [2024-10-01 20:35:34.408404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:37:39.319 [2024-10-01 20:35:34.408412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:37:39.319 [2024-10-01 20:35:34.408419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:37:39.319 [2024-10-01 20:35:34.408427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:37:39.319 [2024-10-01 20:35:34.408435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:37:39.319 [2024-10-01 20:35:34.408442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:37:39.319 [2024-10-01 20:35:34.408450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:37:39.319 [2024-10-01 20:35:34.408457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:37:39.319 [2024-10-01 20:35:34.408464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:37:39.319 [2024-10-01 20:35:34.408472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:37:39.319 [2024-10-01 20:35:34.408479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:37:39.319 [2024-10-01 20:35:34.408486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:37:39.319 [2024-10-01 20:35:34.408494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:37:39.319 [2024-10-01 20:35:34.408501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:37:39.319 [2024-10-01 20:35:34.408508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:37:39.319 [2024-10-01 20:35:34.408516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:37:39.319 [2024-10-01 20:35:34.408523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:37:39.319 [2024-10-01 20:35:34.408531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:37:39.319 [2024-10-01 20:35:34.408538] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:37:39.319 [2024-10-01 20:35:34.408545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:37:39.319 [2024-10-01 20:35:34.408552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:37:39.319 [2024-10-01 20:35:34.408560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:37:39.319 [2024-10-01 20:35:34.408567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:37:39.319 [2024-10-01 20:35:34.408575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:37:39.319 [2024-10-01 20:35:34.408582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:37:39.319 [2024-10-01 20:35:34.408590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:37:39.319 [2024-10-01 20:35:34.408597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:37:39.320 [2024-10-01 20:35:34.408604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:37:39.320 [2024-10-01 20:35:34.408611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:37:39.320 [2024-10-01 20:35:34.408618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:37:39.320 [2024-10-01 20:35:34.408626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:37:39.320 [2024-10-01 20:35:34.408633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:37:39.320 [2024-10-01 20:35:34.408641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:37:39.320 [2024-10-01 20:35:34.408648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:37:39.320 [2024-10-01 20:35:34.408655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:37:39.320 [2024-10-01 20:35:34.408663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:37:39.320 [2024-10-01 20:35:34.408671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:37:39.320 [2024-10-01 20:35:34.408678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:37:39.320 [2024-10-01 20:35:34.408686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:37:39.320 [2024-10-01 20:35:34.408702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:37:39.320 [2024-10-01 20:35:34.408710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:37:39.320 [2024-10-01 20:35:34.408717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:37:39.320 [2024-10-01 20:35:34.408725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:37:39.320 [2024-10-01 
20:35:34.408732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:37:39.320 [2024-10-01 20:35:34.408740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:37:39.320 [2024-10-01 20:35:34.408747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:37:39.320 [2024-10-01 20:35:34.408754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:37:39.320 [2024-10-01 20:35:34.408762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:37:39.320 [2024-10-01 20:35:34.408770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:37:39.320 [2024-10-01 20:35:34.408777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:37:39.320 [2024-10-01 20:35:34.408784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:37:39.320 [2024-10-01 20:35:34.408791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:37:39.320 [2024-10-01 20:35:34.408798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:37:39.320 [2024-10-01 20:35:34.408806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:37:39.320 [2024-10-01 20:35:34.408813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:37:39.320 [2024-10-01 20:35:34.408820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:37:39.320 [2024-10-01 20:35:34.408828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:37:39.320 [2024-10-01 20:35:34.408835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:37:39.320 [2024-10-01 20:35:34.408842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:37:39.320 [2024-10-01 20:35:34.408851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:37:39.320 [2024-10-01 20:35:34.408858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:37:39.320 [2024-10-01 20:35:34.408865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:37:39.320 [2024-10-01 20:35:34.408873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:37:39.320 [2024-10-01 20:35:34.408880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:37:39.320 [2024-10-01 20:35:34.408887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:37:39.320 [2024-10-01 20:35:34.408895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:37:39.320 [2024-10-01 20:35:34.408902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:37:39.320 [2024-10-01 20:35:34.408909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 
00:37:39.320 [2024-10-01 20:35:34.408916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:37:39.320 [2024-10-01 20:35:34.408924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:37:39.320 [2024-10-01 20:35:34.408932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:37:39.320 [2024-10-01 20:35:34.408940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:37:39.320 [2024-10-01 20:35:34.408947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:37:39.320 [2024-10-01 20:35:34.408954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:37:39.320 [2024-10-01 20:35:34.408962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:37:39.320 [2024-10-01 20:35:34.408977] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:37:39.320 [2024-10-01 20:35:34.408984] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 2cbc39b4-488a-417a-8a44-dbbd61f528fe 00:37:39.320 [2024-10-01 20:35:34.408992] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:37:39.320 [2024-10-01 20:35:34.408999] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:37:39.320 [2024-10-01 20:35:34.409006] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:37:39.320 [2024-10-01 20:35:34.409013] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:37:39.320 [2024-10-01 20:35:34.409020] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:37:39.320 [2024-10-01 20:35:34.409031] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:37:39.320 [2024-10-01 20:35:34.409038] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:37:39.320 [2024-10-01 20:35:34.409045] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:37:39.320 [2024-10-01 20:35:34.409051] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:37:39.320 [2024-10-01 20:35:34.409058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:39.320 [2024-10-01 20:35:34.409071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:37:39.320 [2024-10-01 20:35:34.409079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.857 ms 00:37:39.320 [2024-10-01 20:35:34.409086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:39.320 [2024-10-01 20:35:34.421355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:39.320 [2024-10-01 20:35:34.421377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:37:39.320 [2024-10-01 20:35:34.421388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.254 ms 00:37:39.320 [2024-10-01 20:35:34.421401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:39.320 [2024-10-01 20:35:34.421751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:39.320 [2024-10-01 20:35:34.421764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:37:39.320 [2024-10-01 20:35:34.421772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.334 ms 00:37:39.320 [2024-10-01 
20:35:34.421779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:39.320 [2024-10-01 20:35:34.450328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:37:39.320 [2024-10-01 20:35:34.450360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:37:39.320 [2024-10-01 20:35:34.450370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:37:39.320 [2024-10-01 20:35:34.450377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:39.320 [2024-10-01 20:35:34.450427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:37:39.320 [2024-10-01 20:35:34.450435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:37:39.320 [2024-10-01 20:35:34.450444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:37:39.320 [2024-10-01 20:35:34.450451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:39.320 [2024-10-01 20:35:34.450501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:37:39.320 [2024-10-01 20:35:34.450510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:37:39.320 [2024-10-01 20:35:34.450518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:37:39.320 [2024-10-01 20:35:34.450529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:39.320 [2024-10-01 20:35:34.450544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:37:39.320 [2024-10-01 20:35:34.450551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:37:39.320 [2024-10-01 20:35:34.450559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:37:39.320 [2024-10-01 20:35:34.450566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:39.578 [2024-10-01 20:35:34.528299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:37:39.578 [2024-10-01 20:35:34.528342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:37:39.578 [2024-10-01 20:35:34.528359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:37:39.578 [2024-10-01 20:35:34.528366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:39.578 [2024-10-01 20:35:34.592596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:37:39.578 [2024-10-01 20:35:34.592644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:37:39.578 [2024-10-01 20:35:34.592655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:37:39.578 [2024-10-01 20:35:34.592664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:39.578 [2024-10-01 20:35:34.592751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:37:39.578 [2024-10-01 20:35:34.592761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:37:39.578 [2024-10-01 20:35:34.592770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:37:39.578 [2024-10-01 20:35:34.592777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:39.578 [2024-10-01 20:35:34.592813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:37:39.578 [2024-10-01 20:35:34.592821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:37:39.578 [2024-10-01 20:35:34.592829] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:37:39.578 [2024-10-01 20:35:34.592837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:39.578 [2024-10-01 20:35:34.592923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:37:39.578 [2024-10-01 20:35:34.592932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:37:39.578 [2024-10-01 20:35:34.592940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:37:39.578 [2024-10-01 20:35:34.592947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:39.578 [2024-10-01 20:35:34.592977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:37:39.578 [2024-10-01 20:35:34.592986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:37:39.578 [2024-10-01 20:35:34.592993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:37:39.578 [2024-10-01 20:35:34.593000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:39.578 [2024-10-01 20:35:34.593034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:37:39.578 [2024-10-01 20:35:34.593042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:37:39.578 [2024-10-01 20:35:34.593051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:37:39.578 [2024-10-01 20:35:34.593058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:39.578 [2024-10-01 20:35:34.593099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:37:39.578 [2024-10-01 20:35:34.593108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:37:39.578 [2024-10-01 20:35:34.593116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:37:39.578 [2024-10-01 20:35:34.593123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:39.578 [2024-10-01 20:35:34.593234] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 330.637 ms, result 0 00:37:40.952 00:37:40.952 00:37:40.952 20:35:35 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:37:42.852 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:37:42.852 20:35:37 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:37:42.852 20:35:37 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:37:42.852 20:35:37 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:37:42.852 20:35:37 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:37:42.852 20:35:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:37:43.110 20:35:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:37:43.110 20:35:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:37:43.110 Process with pid 76641 is not found 00:37:43.110 20:35:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 76641 00:37:43.110 20:35:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@950 -- # '[' -z 76641 ']' 00:37:43.110 20:35:38 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@954 -- # kill -0 76641 00:37:43.110 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (76641) - No such process 00:37:43.110 20:35:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@977 -- # echo 'Process with pid 76641 is not found' 00:37:43.110 20:35:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:37:43.369 Remove shared memory files 00:37:43.369 20:35:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:37:43.369 20:35:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:37:43.369 20:35:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:37:43.369 20:35:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:37:43.369 20:35:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f 00:37:43.369 20:35:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:37:43.369 20:35:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:37:43.369 00:37:43.369 real 2m23.150s 00:37:43.369 user 2m41.655s 00:37:43.369 sys 0m23.405s 00:37:43.369 20:35:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:43.369 20:35:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:37:43.369 ************************************ 00:37:43.369 END TEST ftl_dirty_shutdown 00:37:43.369 ************************************ 00:37:43.369 20:35:38 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:37:43.369 20:35:38 ftl -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:37:43.369 20:35:38 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:43.369 20:35:38 ftl -- common/autotest_common.sh@10 -- # set +x 00:37:43.369 ************************************ 00:37:43.369 START TEST ftl_upgrade_shutdown 00:37:43.369 ************************************ 00:37:43.369 20:35:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:37:43.369 * Looking for test storage... 
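For reference, the restore_kill teardown traced just above boils down to the following shell flow. This is a minimal sketch reconstructed from the xtrace lines; $testdir and $svcpid are illustrative stand-ins for the literal paths and the pid 76641 seen in the log, and the kill/wait pair in killprocess is an assumption about the helper's happy path, which this run never reaches.

    restore_kill() {
        rm -f "$testdir/config/ftl.json"                  # saved FTL bdev config from the run
        rm -f "$testdir/testfile" "$testdir/testfile2"    # payload files
        rm -f "$testdir/testfile.md5" "$testdir/testfile2.md5"
        killprocess "$svcpid"                             # tolerates an already-dead target
        rmmod nbd || true                                 # nbd may not be loaded
        remove_shm                                        # shared-memory cleanup (ftl/common.sh)
    }

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1
        if ! kill -0 "$pid" 2>/dev/null; then             # signal 0 only probes for existence
            echo "Process with pid $pid is not found"
            return 0
        fi
        kill "$pid" && wait "$pid"                        # assumed happy path; not exercised here
    }

The kill -0 probe is exactly what the trace shows failing here: the target had already exited, so the helper only prints the not-found notice and returns success, letting cleanup continue.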
00:37:43.369 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:37:43.369 20:35:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:37:43.369 20:35:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1681 -- # lcov --version 00:37:43.369 20:35:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:37:43.633 20:35:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:37:43.633 20:35:38 ftl.ftl_upgrade_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:43.633 20:35:38 ftl.ftl_upgrade_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:43.633 20:35:38 ftl.ftl_upgrade_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:43.633 20:35:38 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:37:43.633 20:35:38 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:37:43.633 20:35:38 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:37:43.633 20:35:38 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:37:43.633 20:35:38 ftl.ftl_upgrade_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:37:43.633 20:35:38 ftl.ftl_upgrade_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:37:43.633 20:35:38 ftl.ftl_upgrade_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:37:43.633 20:35:38 ftl.ftl_upgrade_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:43.633 20:35:38 ftl.ftl_upgrade_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:37:43.633 20:35:38 ftl.ftl_upgrade_shutdown -- scripts/common.sh@345 -- # : 1 00:37:43.633 20:35:38 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:43.633 20:35:38 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:43.633 20:35:38 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # decimal 1 00:37:43.633 20:35:38 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=1 00:37:43.633 20:35:38 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:43.633 20:35:38 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 1 00:37:43.633 20:35:38 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:37:43.633 20:35:38 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # decimal 2 00:37:43.633 20:35:38 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=2 00:37:43.633 20:35:38 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:43.633 20:35:38 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 2 00:37:43.633 20:35:38 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:37:43.633 20:35:38 ftl.ftl_upgrade_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:43.633 20:35:38 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:43.633 20:35:38 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # return 0 00:37:43.633 20:35:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:43.633 20:35:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:37:43.633 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:43.633 --rc genhtml_branch_coverage=1 00:37:43.633 --rc genhtml_function_coverage=1 00:37:43.633 --rc genhtml_legend=1 00:37:43.633 --rc geninfo_all_blocks=1 00:37:43.633 --rc geninfo_unexecuted_blocks=1 00:37:43.633 00:37:43.633 ' 00:37:43.633 20:35:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:37:43.633 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:43.633 --rc genhtml_branch_coverage=1 00:37:43.633 --rc genhtml_function_coverage=1 00:37:43.633 --rc genhtml_legend=1 00:37:43.633 --rc geninfo_all_blocks=1 00:37:43.633 --rc geninfo_unexecuted_blocks=1 00:37:43.633 00:37:43.633 ' 00:37:43.633 20:35:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:37:43.633 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:43.634 --rc genhtml_branch_coverage=1 00:37:43.634 --rc genhtml_function_coverage=1 00:37:43.634 --rc genhtml_legend=1 00:37:43.634 --rc geninfo_all_blocks=1 00:37:43.634 --rc geninfo_unexecuted_blocks=1 00:37:43.634 00:37:43.634 ' 00:37:43.634 20:35:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:37:43.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:43.634 --rc genhtml_branch_coverage=1 00:37:43.634 --rc genhtml_function_coverage=1 00:37:43.634 --rc genhtml_legend=1 00:37:43.634 --rc geninfo_all_blocks=1 00:37:43.634 --rc geninfo_unexecuted_blocks=1 00:37:43.634 00:37:43.634 ' 00:37:43.634 20:35:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:37:43.634 20:35:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:37:43.634 20:35:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:37:43.634 20:35:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:37:43.634 20:35:38 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:37:43.634 20:35:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:37:43.634 20:35:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:37:43.634 20:35:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:37:43.634 20:35:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:37:43.634 20:35:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:37:43.634 20:35:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:37:43.634 20:35:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:37:43.634 20:35:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:37:43.634 20:35:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:37:43.634 20:35:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:37:43.634 20:35:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:37:43.634 20:35:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:37:43.634 20:35:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:37:43.634 20:35:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:37:43.634 20:35:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:37:43.634 20:35:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:37:43.634 20:35:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:37:43.634 20:35:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:37:43.634 20:35:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:37:43.634 20:35:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:37:43.634 20:35:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:37:43.634 20:35:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:37:43.634 20:35:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:37:43.634 20:35:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:37:43.634 20:35:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:37:43.634 20:35:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:37:43.634 20:35:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:37:43.634 20:35:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:37:43.634 20:35:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:37:43.634 20:35:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:37:43.634 20:35:38 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:37:43.634 20:35:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:37:43.634 20:35:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:37:43.634 20:35:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:37:43.634 20:35:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:37:43.634 20:35:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:37:43.634 20:35:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:37:43.634 20:35:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:37:43.634 20:35:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:37:43.634 20:35:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:37:43.634 20:35:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:37:43.634 20:35:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=78262 00:37:43.634 20:35:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:37:43.634 20:35:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 78262 00:37:43.634 20:35:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 78262 ']' 00:37:43.634 20:35:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:43.634 20:35:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:43.634 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:43.634 20:35:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:43.634 20:35:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:43.634 20:35:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:37:43.634 20:35:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:37:43.634 [2024-10-01 20:35:38.711619] Starting SPDK v25.01-pre git sha1 0c2005fb5 / DPDK 24.03.0 initialization... 
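Before the first FTL management traces appear below, the test assembles its device stack over RPC. Collapsed into one place, the sequence traced in the following lines looks roughly like this; it is a sketch that uses only rpc.py calls visible in the trace, with $RPC, $lvs_uuid and $base_uuid as placeholders for the run's literal paths and UUIDs (97fb620c-... and 83f36d6e-...), and it assumes the create calls print the UUID of the object they make:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Base side: QEMU NVMe namespace at 0000:00:11.0 becomes bdev basen1
    $RPC bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0

    # Clear any lvstore left from a previous run, then build a fresh one
    # on basen1 and carve a thin-provisioned 20480 MiB lvol for FTL's base
    for lvs in $($RPC bdev_lvol_get_lvstores | jq -r '.[] | .uuid'); do
        $RPC bdev_lvol_delete_lvstore -u "$lvs"
    done
    lvs_uuid=$($RPC bdev_lvol_create_lvstore basen1 lvs)
    base_uuid=$($RPC bdev_lvol_create basen1p0 20480 -t -u "$lvs_uuid")

    # Cache side: controller at 0000:00:10.0 (cachen1), split off one 5120 MiB part
    $RPC bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0
    $RPC bdev_split_create cachen1 -s 5120 1

    # Glue the base lvol and cache partition into the FTL bdev under test
    $RPC -t 60 bdev_ftl_create -b ftl -d "$base_uuid" -c cachen1p0 --l2p_dram_limit 2

The sizes (20480 MiB base, 5120 MiB cache, L2P DRAM limit 2) come straight from the FTL_* environment variables exported earlier in this trace.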
00:37:43.634 [2024-10-01 20:35:38.711750] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78262 ] 00:37:43.933 [2024-10-01 20:35:38.862590] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:43.933 [2024-10-01 20:35:39.051612] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:37:44.867 20:35:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:44.867 20:35:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:37:44.867 20:35:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:37:44.867 20:35:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:37:44.867 20:35:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:37:44.867 20:35:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:37:44.867 20:35:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:37:44.867 20:35:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:37:44.868 20:35:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:37:44.868 20:35:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:37:44.868 20:35:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:37:44.868 20:35:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:37:44.868 20:35:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:37:44.868 20:35:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:37:44.868 20:35:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:37:44.868 20:35:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:37:44.868 20:35:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:37:44.868 20:35:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:37:44.868 20:35:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:37:44.868 20:35:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:37:44.868 20:35:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:37:44.868 20:35:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:37:44.868 20:35:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:37:45.126 20:35:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:37:45.126 20:35:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:37:45.126 20:35:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:37:45.126 20:35:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=basen1 00:37:45.126 20:35:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:37:45.126 20:35:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:37:45.126 20:35:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1381 
-- # local nb 00:37:45.126 20:35:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:37:45.384 20:35:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:37:45.384 { 00:37:45.384 "name": "basen1", 00:37:45.384 "aliases": [ 00:37:45.384 "1d9133f1-27a6-46e8-8bcb-5f3e22e3ff0d" 00:37:45.384 ], 00:37:45.384 "product_name": "NVMe disk", 00:37:45.384 "block_size": 4096, 00:37:45.384 "num_blocks": 1310720, 00:37:45.384 "uuid": "1d9133f1-27a6-46e8-8bcb-5f3e22e3ff0d", 00:37:45.384 "numa_id": -1, 00:37:45.384 "assigned_rate_limits": { 00:37:45.384 "rw_ios_per_sec": 0, 00:37:45.384 "rw_mbytes_per_sec": 0, 00:37:45.384 "r_mbytes_per_sec": 0, 00:37:45.384 "w_mbytes_per_sec": 0 00:37:45.384 }, 00:37:45.384 "claimed": true, 00:37:45.384 "claim_type": "read_many_write_one", 00:37:45.384 "zoned": false, 00:37:45.384 "supported_io_types": { 00:37:45.384 "read": true, 00:37:45.384 "write": true, 00:37:45.384 "unmap": true, 00:37:45.384 "flush": true, 00:37:45.384 "reset": true, 00:37:45.384 "nvme_admin": true, 00:37:45.384 "nvme_io": true, 00:37:45.384 "nvme_io_md": false, 00:37:45.384 "write_zeroes": true, 00:37:45.384 "zcopy": false, 00:37:45.384 "get_zone_info": false, 00:37:45.384 "zone_management": false, 00:37:45.384 "zone_append": false, 00:37:45.384 "compare": true, 00:37:45.384 "compare_and_write": false, 00:37:45.384 "abort": true, 00:37:45.384 "seek_hole": false, 00:37:45.384 "seek_data": false, 00:37:45.384 "copy": true, 00:37:45.384 "nvme_iov_md": false 00:37:45.384 }, 00:37:45.385 "driver_specific": { 00:37:45.385 "nvme": [ 00:37:45.385 { 00:37:45.385 "pci_address": "0000:00:11.0", 00:37:45.385 "trid": { 00:37:45.385 "trtype": "PCIe", 00:37:45.385 "traddr": "0000:00:11.0" 00:37:45.385 }, 00:37:45.385 "ctrlr_data": { 00:37:45.385 "cntlid": 0, 00:37:45.385 "vendor_id": "0x1b36", 00:37:45.385 "model_number": "QEMU NVMe Ctrl", 00:37:45.385 "serial_number": "12341", 00:37:45.385 "firmware_revision": "8.0.0", 00:37:45.385 "subnqn": "nqn.2019-08.org.qemu:12341", 00:37:45.385 "oacs": { 00:37:45.385 "security": 0, 00:37:45.385 "format": 1, 00:37:45.385 "firmware": 0, 00:37:45.385 "ns_manage": 1 00:37:45.385 }, 00:37:45.385 "multi_ctrlr": false, 00:37:45.385 "ana_reporting": false 00:37:45.385 }, 00:37:45.385 "vs": { 00:37:45.385 "nvme_version": "1.4" 00:37:45.385 }, 00:37:45.385 "ns_data": { 00:37:45.385 "id": 1, 00:37:45.385 "can_share": false 00:37:45.385 } 00:37:45.385 } 00:37:45.385 ], 00:37:45.385 "mp_policy": "active_passive" 00:37:45.385 } 00:37:45.385 } 00:37:45.385 ]' 00:37:45.385 20:35:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:37:45.385 20:35:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:37:45.385 20:35:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:37:45.385 20:35:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # nb=1310720 00:37:45.385 20:35:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:37:45.385 20:35:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # echo 5120 00:37:45.385 20:35:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:37:45.385 20:35:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:37:45.385 20:35:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:37:45.385 20:35:40 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:37:45.385 20:35:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:37:45.643 20:35:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=51cd809f-b657-49b6-9753-70061678c261 00:37:45.643 20:35:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:37:45.643 20:35:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 51cd809f-b657-49b6-9753-70061678c261 00:37:45.643 20:35:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:37:45.901 20:35:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=97fb620c-06f1-4324-9a98-1065644a1c90 00:37:45.901 20:35:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u 97fb620c-06f1-4324-9a98-1065644a1c90 00:37:46.158 20:35:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=83f36d6e-7bfb-40ee-ae75-6ef35a171e27 00:37:46.158 20:35:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z 83f36d6e-7bfb-40ee-ae75-6ef35a171e27 ]] 00:37:46.158 20:35:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 83f36d6e-7bfb-40ee-ae75-6ef35a171e27 5120 00:37:46.158 20:35:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:37:46.158 20:35:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:37:46.158 20:35:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=83f36d6e-7bfb-40ee-ae75-6ef35a171e27 00:37:46.158 20:35:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:37:46.158 20:35:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size 83f36d6e-7bfb-40ee-ae75-6ef35a171e27 00:37:46.158 20:35:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=83f36d6e-7bfb-40ee-ae75-6ef35a171e27 00:37:46.158 20:35:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:37:46.158 20:35:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:37:46.158 20:35:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:37:46.158 20:35:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 83f36d6e-7bfb-40ee-ae75-6ef35a171e27 00:37:46.415 20:35:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:37:46.415 { 00:37:46.415 "name": "83f36d6e-7bfb-40ee-ae75-6ef35a171e27", 00:37:46.415 "aliases": [ 00:37:46.415 "lvs/basen1p0" 00:37:46.415 ], 00:37:46.415 "product_name": "Logical Volume", 00:37:46.415 "block_size": 4096, 00:37:46.415 "num_blocks": 5242880, 00:37:46.415 "uuid": "83f36d6e-7bfb-40ee-ae75-6ef35a171e27", 00:37:46.415 "assigned_rate_limits": { 00:37:46.415 "rw_ios_per_sec": 0, 00:37:46.415 "rw_mbytes_per_sec": 0, 00:37:46.415 "r_mbytes_per_sec": 0, 00:37:46.415 "w_mbytes_per_sec": 0 00:37:46.415 }, 00:37:46.415 "claimed": false, 00:37:46.415 "zoned": false, 00:37:46.415 "supported_io_types": { 00:37:46.415 "read": true, 00:37:46.415 "write": true, 00:37:46.415 "unmap": true, 00:37:46.415 "flush": false, 00:37:46.415 "reset": true, 00:37:46.415 "nvme_admin": false, 00:37:46.415 "nvme_io": false, 00:37:46.415 "nvme_io_md": false, 00:37:46.415 "write_zeroes": 
true, 00:37:46.416 "zcopy": false, 00:37:46.416 "get_zone_info": false, 00:37:46.416 "zone_management": false, 00:37:46.416 "zone_append": false, 00:37:46.416 "compare": false, 00:37:46.416 "compare_and_write": false, 00:37:46.416 "abort": false, 00:37:46.416 "seek_hole": true, 00:37:46.416 "seek_data": true, 00:37:46.416 "copy": false, 00:37:46.416 "nvme_iov_md": false 00:37:46.416 }, 00:37:46.416 "driver_specific": { 00:37:46.416 "lvol": { 00:37:46.416 "lvol_store_uuid": "97fb620c-06f1-4324-9a98-1065644a1c90", 00:37:46.416 "base_bdev": "basen1", 00:37:46.416 "thin_provision": true, 00:37:46.416 "num_allocated_clusters": 0, 00:37:46.416 "snapshot": false, 00:37:46.416 "clone": false, 00:37:46.416 "esnap_clone": false 00:37:46.416 } 00:37:46.416 } 00:37:46.416 } 00:37:46.416 ]' 00:37:46.416 20:35:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:37:46.416 20:35:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:37:46.416 20:35:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:37:46.416 20:35:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # nb=5242880 00:37:46.416 20:35:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=20480 00:37:46.416 20:35:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # echo 20480 00:37:46.416 20:35:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:37:46.416 20:35:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:37:46.416 20:35:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:37:46.673 20:35:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:37:46.673 20:35:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:37:46.673 20:35:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:37:46.931 20:35:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:37:46.931 20:35:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:37:46.931 20:35:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d 83f36d6e-7bfb-40ee-ae75-6ef35a171e27 -c cachen1p0 --l2p_dram_limit 2 00:37:47.190 [2024-10-01 20:35:42.185799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:47.190 [2024-10-01 20:35:42.185850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:37:47.190 [2024-10-01 20:35:42.185867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:37:47.190 [2024-10-01 20:35:42.185876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:47.190 [2024-10-01 20:35:42.185934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:47.190 [2024-10-01 20:35:42.185944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:37:47.190 [2024-10-01 20:35:42.185954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.034 ms 00:37:47.190 [2024-10-01 20:35:42.185962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:47.190 [2024-10-01 20:35:42.185987] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:37:47.190 [2024-10-01 
20:35:42.186788] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:37:47.190 [2024-10-01 20:35:42.186813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:47.190 [2024-10-01 20:35:42.186825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:37:47.190 [2024-10-01 20:35:42.186835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.831 ms 00:37:47.190 [2024-10-01 20:35:42.186842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:47.190 [2024-10-01 20:35:42.187096] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID 9d10aaf9-ef6a-43fb-b863-7a4397ee56b5 00:37:47.190 [2024-10-01 20:35:42.188392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:47.191 [2024-10-01 20:35:42.188424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:37:47.191 [2024-10-01 20:35:42.188434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.025 ms 00:37:47.191 [2024-10-01 20:35:42.188448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:47.191 [2024-10-01 20:35:42.193722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:47.191 [2024-10-01 20:35:42.193757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:37:47.191 [2024-10-01 20:35:42.193766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.232 ms 00:37:47.191 [2024-10-01 20:35:42.193776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:47.191 [2024-10-01 20:35:42.193813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:47.191 [2024-10-01 20:35:42.193824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:37:47.191 [2024-10-01 20:35:42.193834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:37:47.191 [2024-10-01 20:35:42.193847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:47.191 [2024-10-01 20:35:42.193893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:47.191 [2024-10-01 20:35:42.193905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:37:47.191 [2024-10-01 20:35:42.193912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:37:47.191 [2024-10-01 20:35:42.193921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:47.191 [2024-10-01 20:35:42.193941] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:37:47.191 [2024-10-01 20:35:42.197574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:47.191 [2024-10-01 20:35:42.197603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:37:47.191 [2024-10-01 20:35:42.197615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.636 ms 00:37:47.191 [2024-10-01 20:35:42.197623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:47.191 [2024-10-01 20:35:42.197649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:47.191 [2024-10-01 20:35:42.197657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:37:47.191 [2024-10-01 20:35:42.197669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:37:47.191 [2024-10-01 20:35:42.197676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:37:47.191 [2024-10-01 20:35:42.197711] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:37:47.191 [2024-10-01 20:35:42.197848] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:37:47.191 [2024-10-01 20:35:42.197869] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:37:47.191 [2024-10-01 20:35:42.197882] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:37:47.191 [2024-10-01 20:35:42.197893] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:37:47.191 [2024-10-01 20:35:42.197902] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:37:47.191 [2024-10-01 20:35:42.197912] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:37:47.191 [2024-10-01 20:35:42.197920] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:37:47.191 [2024-10-01 20:35:42.197928] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:37:47.191 [2024-10-01 20:35:42.197935] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:37:47.191 [2024-10-01 20:35:42.197944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:47.191 [2024-10-01 20:35:42.197951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:37:47.191 [2024-10-01 20:35:42.197960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.234 ms 00:37:47.191 [2024-10-01 20:35:42.197973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:47.191 [2024-10-01 20:35:42.198058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:47.191 [2024-10-01 20:35:42.198076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:37:47.191 [2024-10-01 20:35:42.198085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.067 ms 00:37:47.191 [2024-10-01 20:35:42.198092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:47.191 [2024-10-01 20:35:42.198210] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:37:47.191 [2024-10-01 20:35:42.198225] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:37:47.191 [2024-10-01 20:35:42.198235] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:37:47.191 [2024-10-01 20:35:42.198242] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:37:47.191 [2024-10-01 20:35:42.198251] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:37:47.191 [2024-10-01 20:35:42.198258] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:37:47.191 [2024-10-01 20:35:42.198267] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:37:47.191 [2024-10-01 20:35:42.198274] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:37:47.191 [2024-10-01 20:35:42.198282] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:37:47.191 [2024-10-01 20:35:42.198288] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:37:47.191 [2024-10-01 20:35:42.198296] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:37:47.191 [2024-10-01 20:35:42.198303] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl] offset: 14.75 MiB 00:37:47.191 [2024-10-01 20:35:42.198311] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:37:47.191 [2024-10-01 20:35:42.198322] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:37:47.191 [2024-10-01 20:35:42.198331] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:37:47.191 [2024-10-01 20:35:42.198337] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:37:47.191 [2024-10-01 20:35:42.198346] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:37:47.191 [2024-10-01 20:35:42.198353] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:37:47.191 [2024-10-01 20:35:42.198361] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:37:47.191 [2024-10-01 20:35:42.198368] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:37:47.191 [2024-10-01 20:35:42.198381] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:37:47.191 [2024-10-01 20:35:42.198388] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:37:47.191 [2024-10-01 20:35:42.198397] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:37:47.191 [2024-10-01 20:35:42.198404] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:37:47.191 [2024-10-01 20:35:42.198412] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:37:47.191 [2024-10-01 20:35:42.198418] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:37:47.191 [2024-10-01 20:35:42.198426] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:37:47.191 [2024-10-01 20:35:42.198433] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:37:47.191 [2024-10-01 20:35:42.198441] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:37:47.191 [2024-10-01 20:35:42.198447] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:37:47.191 [2024-10-01 20:35:42.198455] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:37:47.191 [2024-10-01 20:35:42.198464] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:37:47.191 [2024-10-01 20:35:42.198474] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:37:47.191 [2024-10-01 20:35:42.198481] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:37:47.191 [2024-10-01 20:35:42.198489] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:37:47.191 [2024-10-01 20:35:42.198499] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:37:47.191 [2024-10-01 20:35:42.198507] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:37:47.191 [2024-10-01 20:35:42.198513] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:37:47.191 [2024-10-01 20:35:42.198521] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:37:47.191 [2024-10-01 20:35:42.198528] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:37:47.191 [2024-10-01 20:35:42.198536] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:37:47.191 [2024-10-01 20:35:42.198542] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:37:47.191 [2024-10-01 20:35:42.198550] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:37:47.191 [2024-10-01 20:35:42.198555] ftl_layout.c: 775:ftl_layout_dump: 
*NOTICE*: [FTL][ftl] Base device layout: 00:37:47.191 [2024-10-01 20:35:42.198566] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:37:47.191 [2024-10-01 20:35:42.198573] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:37:47.191 [2024-10-01 20:35:42.198582] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:37:47.191 [2024-10-01 20:35:42.198590] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:37:47.191 [2024-10-01 20:35:42.198599] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:37:47.191 [2024-10-01 20:35:42.198606] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:37:47.191 [2024-10-01 20:35:42.198615] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:37:47.191 [2024-10-01 20:35:42.198621] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:37:47.191 [2024-10-01 20:35:42.198629] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:37:47.191 [2024-10-01 20:35:42.198639] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:37:47.191 [2024-10-01 20:35:42.198650] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:37:47.191 [2024-10-01 20:35:42.198659] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:37:47.191 [2024-10-01 20:35:42.198668] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:37:47.191 [2024-10-01 20:35:42.198675] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:37:47.191 [2024-10-01 20:35:42.198684] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:37:47.191 [2024-10-01 20:35:42.198702] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:37:47.191 [2024-10-01 20:35:42.198711] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:37:47.191 [2024-10-01 20:35:42.198718] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:37:47.192 [2024-10-01 20:35:42.198726] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:37:47.192 [2024-10-01 20:35:42.198734] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:37:47.192 [2024-10-01 20:35:42.198744] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:37:47.192 [2024-10-01 20:35:42.198751] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:37:47.192 [2024-10-01 20:35:42.198760] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:37:47.192 [2024-10-01 20:35:42.198771] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:37:47.192 [2024-10-01 20:35:42.198780] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:37:47.192 [2024-10-01 20:35:42.198787] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:37:47.192 [2024-10-01 20:35:42.198797] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:37:47.192 [2024-10-01 20:35:42.198805] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:37:47.192 [2024-10-01 20:35:42.198814] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:37:47.192 [2024-10-01 20:35:42.198821] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:37:47.192 [2024-10-01 20:35:42.198830] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:37:47.192 [2024-10-01 20:35:42.198841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:47.192 [2024-10-01 20:35:42.198850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:37:47.192 [2024-10-01 20:35:42.198860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.701 ms 00:37:47.192 [2024-10-01 20:35:42.198870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:47.192 [2024-10-01 20:35:42.198909] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 
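The whole bring-up above is a chain of rpc.py calls from ftl/common.sh. Condensed into one place it reads as the sketch below; it is assembled from the xtrace, not a standalone script. The <lvstore-uuid> and <lvol-uuid> placeholders stand for the UUIDs printed by the earlier calls (97fb620c-06f1-4324-9a98-1065644a1c90 and 83f36d6e-7bfb-40ee-ae75-6ef35a171e27 in this run), and the PCI addresses are this VM's QEMU NVMe devices.

# FTL bring-up, condensed from the xtrace above (values from this run).
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0    # 20 GiB base -> basen1
$rpc bdev_lvol_create_lvstore basen1 lvs                            # prints <lvstore-uuid>
$rpc bdev_lvol_create basen1p0 20480 -t -u <lvstore-uuid>           # thin 20 GiB lvol
$rpc bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0   # 5 GiB cache -> cachen1
$rpc bdev_split_create cachen1 -s 5120 1                            # first 5 GiB -> cachen1p0
$rpc -t 60 bdev_ftl_create -b ftl -d <lvol-uuid> -c cachen1p0 --l2p_dram_limit 2

Stale lvol stores are cleared first (bdev_lvol_get_lvstores piped through jq -r '.[] | .uuid', then bdev_lvol_delete_lvstore -u per store), which is the 51cd809f-b657-49b6-9753-70061678c261 deletion visible earlier in the trace.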
00:37:47.192 [2024-10-01 20:35:42.198921] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:37:49.089 [2024-10-01 20:35:44.208760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:49.089 [2024-10-01 20:35:44.208828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:37:49.089 [2024-10-01 20:35:44.208843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2009.842 ms 00:37:49.090 [2024-10-01 20:35:44.208854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:49.090 [2024-10-01 20:35:44.234353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:49.090 [2024-10-01 20:35:44.234405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:37:49.090 [2024-10-01 20:35:44.234420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.253 ms 00:37:49.090 [2024-10-01 20:35:44.234429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:49.090 [2024-10-01 20:35:44.234521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:49.090 [2024-10-01 20:35:44.234533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:37:49.090 [2024-10-01 20:35:44.234544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:37:49.090 [2024-10-01 20:35:44.234555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:49.090 [2024-10-01 20:35:44.265256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:49.090 [2024-10-01 20:35:44.265304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:37:49.090 [2024-10-01 20:35:44.265316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 30.653 ms 00:37:49.090 [2024-10-01 20:35:44.265325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:49.090 [2024-10-01 20:35:44.265364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:49.090 [2024-10-01 20:35:44.265373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:37:49.090 [2024-10-01 20:35:44.265381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:37:49.090 [2024-10-01 20:35:44.265389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:49.090 [2024-10-01 20:35:44.265762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:49.090 [2024-10-01 20:35:44.265790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:37:49.090 [2024-10-01 20:35:44.265805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.318 ms 00:37:49.090 [2024-10-01 20:35:44.265814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:49.090 [2024-10-01 20:35:44.265857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:49.090 [2024-10-01 20:35:44.265867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:37:49.090 [2024-10-01 20:35:44.265874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:37:49.090 [2024-10-01 20:35:44.265885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:49.090 [2024-10-01 20:35:44.279756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:49.090 [2024-10-01 20:35:44.279796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:37:49.090 [2024-10-01 20:35:44.279807] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.854 ms 00:37:49.090 [2024-10-01 20:35:44.279816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:49.090 [2024-10-01 20:35:44.291170] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:37:49.090 [2024-10-01 20:35:44.292168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:49.090 [2024-10-01 20:35:44.292199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:37:49.090 [2024-10-01 20:35:44.292211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.263 ms 00:37:49.090 [2024-10-01 20:35:44.292218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:49.348 [2024-10-01 20:35:44.313832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:49.348 [2024-10-01 20:35:44.313887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:37:49.348 [2024-10-01 20:35:44.313901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 21.579 ms 00:37:49.348 [2024-10-01 20:35:44.313910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:49.349 [2024-10-01 20:35:44.313995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:49.349 [2024-10-01 20:35:44.314005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:37:49.349 [2024-10-01 20:35:44.314018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.044 ms 00:37:49.349 [2024-10-01 20:35:44.314025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:49.349 [2024-10-01 20:35:44.337303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:49.349 [2024-10-01 20:35:44.337351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:37:49.349 [2024-10-01 20:35:44.337365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 23.227 ms 00:37:49.349 [2024-10-01 20:35:44.337373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:49.349 [2024-10-01 20:35:44.360215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:49.349 [2024-10-01 20:35:44.360262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:37:49.349 [2024-10-01 20:35:44.360275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 22.795 ms 00:37:49.349 [2024-10-01 20:35:44.360283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:49.349 [2024-10-01 20:35:44.360874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:49.349 [2024-10-01 20:35:44.360896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:37:49.349 [2024-10-01 20:35:44.360907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.549 ms 00:37:49.349 [2024-10-01 20:35:44.360914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:49.349 [2024-10-01 20:35:44.448247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:49.349 [2024-10-01 20:35:44.448325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:37:49.349 [2024-10-01 20:35:44.448346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 87.284 ms 00:37:49.349 [2024-10-01 20:35:44.448355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:49.349 [2024-10-01 20:35:44.473601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:37:49.349 [2024-10-01 20:35:44.473653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:37:49.349 [2024-10-01 20:35:44.473667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.149 ms 00:37:49.349 [2024-10-01 20:35:44.473676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:49.349 [2024-10-01 20:35:44.498172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:49.349 [2024-10-01 20:35:44.498227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:37:49.349 [2024-10-01 20:35:44.498240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.437 ms 00:37:49.349 [2024-10-01 20:35:44.498247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:49.349 [2024-10-01 20:35:44.522834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:49.349 [2024-10-01 20:35:44.522889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:37:49.349 [2024-10-01 20:35:44.522903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.534 ms 00:37:49.349 [2024-10-01 20:35:44.522911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:49.349 [2024-10-01 20:35:44.522970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:49.349 [2024-10-01 20:35:44.522981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:37:49.349 [2024-10-01 20:35:44.522995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:37:49.349 [2024-10-01 20:35:44.523002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:49.349 [2024-10-01 20:35:44.523090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:49.349 [2024-10-01 20:35:44.523100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:37:49.349 [2024-10-01 20:35:44.523110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.036 ms 00:37:49.349 [2024-10-01 20:35:44.523117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:49.349 [2024-10-01 20:35:44.524067] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 2337.855 ms, result 0 00:37:49.349 { 00:37:49.349 "name": "ftl", 00:37:49.349 "uuid": "9d10aaf9-ef6a-43fb-b863-7a4397ee56b5" 00:37:49.349 } 00:37:49.349 20:35:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:37:49.607 [2024-10-01 20:35:44.779505] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:49.607 20:35:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:37:49.865 20:35:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:37:50.124 [2024-10-01 20:35:45.143885] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:37:50.124 20:35:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:37:50.380 [2024-10-01 20:35:45.340514] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:37:50.380 20:35:45 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:37:50.637 20:35:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:37:50.637 20:35:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:37:50.637 20:35:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:37:50.637 20:35:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:37:50.637 20:35:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:37:50.637 20:35:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:37:50.637 20:35:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:37:50.637 20:35:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:37:50.637 20:35:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:37:50.637 Fill FTL, iteration 1 00:37:50.637 20:35:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:37:50.637 20:35:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:37:50.637 20:35:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:37:50.637 20:35:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:37:50.637 20:35:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:37:50.637 20:35:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:37:50.637 20:35:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:37:50.637 20:35:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=78373 00:37:50.637 20:35:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:37:50.637 20:35:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 78373 /var/tmp/spdk.tgt.sock 00:37:50.637 20:35:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:37:50.637 20:35:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 78373 ']' 00:37:50.637 20:35:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:37:50.637 20:35:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:50.637 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:37:50.637 20:35:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:37:50.637 20:35:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:50.637 20:35:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:37:50.637 [2024-10-01 20:35:45.754586] Starting SPDK v25.01-pre git sha1 0c2005fb5 / DPDK 24.03.0 initialization... 
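Before any data is pushed, ftl/common.sh@121-124 export the new FTL bdev over NVMe/TCP on loopback and upgrade_shutdown.sh@28-34 fix the I/O geometry. A condensed sketch of those steps, reusing the $rpc shorthand from the bring-up sketch above:

# NVMe/TCP export of the ftl bdev (loopback target), per common.sh@121-124.
$rpc nvmf_create_transport --trtype TCP
$rpc nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1
$rpc nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl
$rpc nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1

# Test geometry from upgrade_shutdown.sh@28-34: two passes of 1024 x 1 MiB
# (1 GiB each) at queue depth 2, starting at offset 0.
size=1073741824 bs=1048576 count=1024 iterations=2 qd=2 seek=0 skip=0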
00:37:50.637 [2024-10-01 20:35:45.754715] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78373 ] 00:37:50.895 [2024-10-01 20:35:45.905353] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:50.895 [2024-10-01 20:35:46.091244] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:37:51.849 20:35:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:51.849 20:35:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:37:51.849 20:35:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:37:52.106 ftln1 00:37:52.106 20:35:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:37:52.106 20:35:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:37:52.106 20:35:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:37:52.106 20:35:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 78373 00:37:52.106 20:35:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@950 -- # '[' -z 78373 ']' 00:37:52.106 20:35:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # kill -0 78373 00:37:52.106 20:35:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # uname 00:37:52.106 20:35:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:37:52.106 20:35:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 78373 00:37:52.106 killing process with pid 78373 00:37:52.106 20:35:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:37:52.106 20:35:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:37:52.106 20:35:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@968 -- # echo 'killing process with pid 78373' 00:37:52.106 20:35:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@969 -- # kill 78373 00:37:52.106 20:35:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@974 -- # wait 78373 00:37:54.638 20:35:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:37:54.638 20:35:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:37:54.638 [2024-10-01 20:35:49.314795] Starting SPDK v25.01-pre git sha1 0c2005fb5 / DPDK 24.03.0 initialization... 
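The tcp_dd helper visible above (common.sh@151-199) runs a short-lived initiator spdk_tgt on core 1, attaches the exported namespace so it appears as ftln1, snapshots the bdev subsystem config to ini.json, kills the initiator, and then drives spdk_dd with that JSON so the dd app can recreate ftln1 for itself. A sketch of that sequence as it appears in the xtrace:

# tcp_dd pattern, sketched from common.sh@151-199 above.
rpc_ini="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock"
ini=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
$rpc_ini bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 \
         -f ipv4 -n nqn.2018-09.io.spdk:cnode0            # initiator sees ftln1
{ echo '{"subsystems": ['; $rpc_ini save_subsystem_config -n bdev; echo ']}'; } > "$ini"
# initiator spdk_tgt is killed here; spdk_dd rebuilds ftln1 from $ini
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' \
    --rpc-socket=/var/tmp/spdk.tgt.sock --json="$ini" \
    --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0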
00:37:54.638 [2024-10-01 20:35:49.315096] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78426 ] 00:37:54.638 [2024-10-01 20:35:49.466844] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:54.638 [2024-10-01 20:35:49.648586] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:38:00.298  Copying: 219/1024 [MB] (219 MBps) Copying: 485/1024 [MB] (266 MBps) Copying: 748/1024 [MB] (263 MBps) Copying: 1021/1024 [MB] (273 MBps) Copying: 1024/1024 [MB] (average 255 MBps) 00:38:00.298 00:38:00.298 Calculate MD5 checksum, iteration 1 00:38:00.298 20:35:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:38:00.298 20:35:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:38:00.298 20:35:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:38:00.298 20:35:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:38:00.298 20:35:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:38:00.298 20:35:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:38:00.298 20:35:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:38:00.298 20:35:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:38:00.298 [2024-10-01 20:35:55.278410] Starting SPDK v25.01-pre git sha1 0c2005fb5 / DPDK 24.03.0 initialization... 
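Each fill is immediately fingerprinted: the same 1 GiB window is read back out of ftln1 into a scratch file whose MD5 is stored for comparison after the shutdown/upgrade cycle. A sketch of the step, reusing $ini from above (the resulting sum is echoed just below):

# Read back the window just written and record its MD5 (sketch).
file=/home/vagrant/spdk_repo/spdk/test/ftl/file
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' \
    --rpc-socket=/var/tmp/spdk.tgt.sock --json="$ini" \
    --ib=ftln1 --of="$file" --bs=1048576 --count=1024 --qd=2 --skip=0
sums[i]=$(md5sum "$file" | cut -f1 -d' ')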
00:38:00.298 [2024-10-01 20:35:55.278559] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78484 ] 00:38:00.298 [2024-10-01 20:35:55.431904] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:00.555 [2024-10-01 20:35:55.591753] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:38:03.476  Copying: 671/1024 [MB] (671 MBps) Copying: 1024/1024 [MB] (average 669 MBps) 00:38:03.476 00:38:03.477 20:35:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:38:03.477 20:35:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:38:06.001 20:36:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:38:06.001 Fill FTL, iteration 2 00:38:06.001 20:36:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=a70a822c781c067a5d333bd36b083463 00:38:06.001 20:36:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:38:06.001 20:36:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:38:06.001 20:36:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:38:06.001 20:36:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:38:06.001 20:36:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:38:06.001 20:36:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:38:06.001 20:36:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:38:06.001 20:36:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:38:06.001 20:36:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:38:06.001 [2024-10-01 20:36:00.690541] Starting SPDK v25.01-pre git sha1 0c2005fb5 / DPDK 24.03.0 initialization... 
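Iteration 2 starting at --seek=1024 shows the pattern: upgrade_shutdown.sh@38-48 advance seek and skip by one window ($count blocks) per pass. A reconstruction of that loop from the xtrace, assuming tcp_dd is the helper sketched earlier and $bs/$count/$qd/$file are the values set above:

# Fill/fingerprint loop, reconstructed from upgrade_shutdown.sh@38-48.
for (( i = 0; i < iterations; i++ )); do
    echo "Fill FTL, iteration $(( i + 1 ))"
    tcp_dd --if=/dev/urandom --ob=ftln1 --bs=$bs --count=$count --qd=$qd --seek=$seek
    seek=$(( seek + count ))                           # 0 -> 1024 -> 2048
    echo "Calculate MD5 checksum, iteration $(( i + 1 ))"
    tcp_dd --ib=ftln1 --of="$file" --bs=$bs --count=$count --qd=$qd --skip=$skip
    skip=$(( skip + count ))
    sums[i]=$(md5sum "$file" | cut -f1 -d' ')
done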
00:38:06.001 [2024-10-01 20:36:00.690849] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78553 ] 00:38:06.001 [2024-10-01 20:36:00.839490] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:06.001 [2024-10-01 20:36:01.026090] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:38:11.757  Copying: 216/1024 [MB] (216 MBps) Copying: 470/1024 [MB] (254 MBps) Copying: 738/1024 [MB] (268 MBps) Copying: 1009/1024 [MB] (271 MBps) Copying: 1024/1024 [MB] (average 252 MBps) 00:38:11.757 00:38:11.757 20:36:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:38:11.757 20:36:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:38:11.757 Calculate MD5 checksum, iteration 2 00:38:11.757 20:36:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:38:11.757 20:36:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:38:11.757 20:36:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:38:11.757 20:36:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:38:11.757 20:36:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:38:11.757 20:36:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:38:11.757 [2024-10-01 20:36:06.767604] Starting SPDK v25.01-pre git sha1 0c2005fb5 / DPDK 24.03.0 initialization... 
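Once the second window is fingerprinted, the steps below arm the upgrade path through FTL properties (upgrade_shutdown.sh@52-71): verbose_mode unlocks the advanced property set, prep_upgrade_on_shutdown schedules the upgrade work, and a jq filter counts used cache chunks. A condensed sketch in log order; whether a zero count aborts the run is not visible in this excerpt:

# Property handshake, condensed from upgrade_shutdown.sh@52-71 below.
$rpc bdev_ftl_set_property -b ftl -p verbose_mode -v true
$rpc bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true
used=$($rpc bdev_ftl_get_properties -b ftl |
       jq '[.properties[] | select(.name == "cache_device")
            | .chunks[] | select(.utilization != 0.0)] | length')
[[ $used -eq 0 ]]          # guard: used chunks must exist (used=3 in this run)
# tcp_target_shutdown (common.sh@130-131) then kills the target, which triggers
# the prepared upgrade actions during FTL shutdown.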
00:38:11.757 [2024-10-01 20:36:06.768085] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78617 ] 00:38:11.757 [2024-10-01 20:36:06.936576] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:12.014 [2024-10-01 20:36:07.094990] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:38:15.543  Copying: 664/1024 [MB] (664 MBps) Copying: 1024/1024 [MB] (average 648 MBps) 00:38:15.543 00:38:15.543 20:36:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:38:15.543 20:36:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:38:18.085 20:36:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:38:18.085 20:36:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=0ab44fbef506c44feb64521a186e33ad 00:38:18.085 20:36:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:38:18.085 20:36:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:38:18.085 20:36:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:38:18.085 [2024-10-01 20:36:12.931747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:18.085 [2024-10-01 20:36:12.931975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:38:18.085 [2024-10-01 20:36:12.931994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:38:18.085 [2024-10-01 20:36:12.932001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:18.085 [2024-10-01 20:36:12.932030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:18.085 [2024-10-01 20:36:12.932038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:38:18.085 [2024-10-01 20:36:12.932045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:38:18.085 [2024-10-01 20:36:12.932051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:18.085 [2024-10-01 20:36:12.932067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:18.085 [2024-10-01 20:36:12.932073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:38:18.085 [2024-10-01 20:36:12.932080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:38:18.085 [2024-10-01 20:36:12.932090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:18.085 [2024-10-01 20:36:12.932144] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.390 ms, result 0 00:38:18.085 true 00:38:18.085 20:36:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:38:18.085 { 00:38:18.085 "name": "ftl", 00:38:18.085 "properties": [ 00:38:18.085 { 00:38:18.085 "name": "superblock_version", 00:38:18.085 "value": 5, 00:38:18.085 "read-only": true 00:38:18.085 }, 00:38:18.085 { 00:38:18.085 "name": "base_device", 00:38:18.085 "bands": [ 00:38:18.085 { 00:38:18.085 "id": 0, 00:38:18.085 "state": "FREE", 00:38:18.085 "validity": 0.0 00:38:18.085 }, 00:38:18.085 { 00:38:18.085 "id": 1, 
00:38:18.085 "state": "FREE", 00:38:18.085 "validity": 0.0 00:38:18.085 }, 00:38:18.085 { 00:38:18.085 "id": 2, 00:38:18.085 "state": "FREE", 00:38:18.085 "validity": 0.0 00:38:18.085 }, 00:38:18.085 { 00:38:18.085 "id": 3, 00:38:18.085 "state": "FREE", 00:38:18.085 "validity": 0.0 00:38:18.085 }, 00:38:18.085 { 00:38:18.085 "id": 4, 00:38:18.085 "state": "FREE", 00:38:18.085 "validity": 0.0 00:38:18.085 }, 00:38:18.085 { 00:38:18.085 "id": 5, 00:38:18.085 "state": "FREE", 00:38:18.085 "validity": 0.0 00:38:18.085 }, 00:38:18.085 { 00:38:18.085 "id": 6, 00:38:18.085 "state": "FREE", 00:38:18.085 "validity": 0.0 00:38:18.085 }, 00:38:18.085 { 00:38:18.085 "id": 7, 00:38:18.085 "state": "FREE", 00:38:18.085 "validity": 0.0 00:38:18.085 }, 00:38:18.085 { 00:38:18.085 "id": 8, 00:38:18.085 "state": "FREE", 00:38:18.085 "validity": 0.0 00:38:18.085 }, 00:38:18.085 { 00:38:18.085 "id": 9, 00:38:18.085 "state": "FREE", 00:38:18.085 "validity": 0.0 00:38:18.085 }, 00:38:18.085 { 00:38:18.085 "id": 10, 00:38:18.085 "state": "FREE", 00:38:18.085 "validity": 0.0 00:38:18.085 }, 00:38:18.085 { 00:38:18.085 "id": 11, 00:38:18.085 "state": "FREE", 00:38:18.085 "validity": 0.0 00:38:18.085 }, 00:38:18.085 { 00:38:18.085 "id": 12, 00:38:18.085 "state": "FREE", 00:38:18.085 "validity": 0.0 00:38:18.085 }, 00:38:18.085 { 00:38:18.085 "id": 13, 00:38:18.085 "state": "FREE", 00:38:18.085 "validity": 0.0 00:38:18.085 }, 00:38:18.085 { 00:38:18.085 "id": 14, 00:38:18.085 "state": "FREE", 00:38:18.085 "validity": 0.0 00:38:18.085 }, 00:38:18.085 { 00:38:18.085 "id": 15, 00:38:18.085 "state": "FREE", 00:38:18.085 "validity": 0.0 00:38:18.085 }, 00:38:18.085 { 00:38:18.085 "id": 16, 00:38:18.085 "state": "FREE", 00:38:18.085 "validity": 0.0 00:38:18.085 }, 00:38:18.085 { 00:38:18.085 "id": 17, 00:38:18.085 "state": "FREE", 00:38:18.085 "validity": 0.0 00:38:18.085 } 00:38:18.085 ], 00:38:18.085 "read-only": true 00:38:18.085 }, 00:38:18.085 { 00:38:18.085 "name": "cache_device", 00:38:18.085 "type": "bdev", 00:38:18.085 "chunks": [ 00:38:18.085 { 00:38:18.085 "id": 0, 00:38:18.085 "state": "INACTIVE", 00:38:18.085 "utilization": 0.0 00:38:18.085 }, 00:38:18.085 { 00:38:18.085 "id": 1, 00:38:18.085 "state": "CLOSED", 00:38:18.085 "utilization": 1.0 00:38:18.085 }, 00:38:18.085 { 00:38:18.085 "id": 2, 00:38:18.085 "state": "CLOSED", 00:38:18.085 "utilization": 1.0 00:38:18.085 }, 00:38:18.085 { 00:38:18.085 "id": 3, 00:38:18.085 "state": "OPEN", 00:38:18.085 "utilization": 0.001953125 00:38:18.085 }, 00:38:18.085 { 00:38:18.085 "id": 4, 00:38:18.085 "state": "OPEN", 00:38:18.085 "utilization": 0.0 00:38:18.085 } 00:38:18.085 ], 00:38:18.085 "read-only": true 00:38:18.085 }, 00:38:18.085 { 00:38:18.085 "name": "verbose_mode", 00:38:18.085 "value": true, 00:38:18.085 "unit": "", 00:38:18.085 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:38:18.085 }, 00:38:18.085 { 00:38:18.085 "name": "prep_upgrade_on_shutdown", 00:38:18.085 "value": false, 00:38:18.085 "unit": "", 00:38:18.085 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:38:18.085 } 00:38:18.085 ] 00:38:18.085 } 00:38:18.085 20:36:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:38:18.343 [2024-10-01 20:36:13.344085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:18.343 [2024-10-01 20:36:13.344137] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:38:18.343 [2024-10-01 20:36:13.344148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:38:18.343 [2024-10-01 20:36:13.344155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:18.343 [2024-10-01 20:36:13.344175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:18.343 [2024-10-01 20:36:13.344182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:38:18.343 [2024-10-01 20:36:13.344188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:38:18.343 [2024-10-01 20:36:13.344194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:18.343 [2024-10-01 20:36:13.344209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:18.343 [2024-10-01 20:36:13.344216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:38:18.343 [2024-10-01 20:36:13.344222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:38:18.343 [2024-10-01 20:36:13.344228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:18.343 [2024-10-01 20:36:13.344274] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.185 ms, result 0 00:38:18.343 true 00:38:18.343 20:36:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:38:18.344 20:36:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:38:18.344 20:36:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:38:18.602 20:36:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:38:18.602 20:36:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:38:18.602 20:36:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:38:18.860 [2024-10-01 20:36:13.815382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:18.861 [2024-10-01 20:36:13.815432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:38:18.861 [2024-10-01 20:36:13.815443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:38:18.861 [2024-10-01 20:36:13.815450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:18.861 [2024-10-01 20:36:13.815471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:18.861 [2024-10-01 20:36:13.815478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:38:18.861 [2024-10-01 20:36:13.815485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:38:18.861 [2024-10-01 20:36:13.815491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:18.861 [2024-10-01 20:36:13.815506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:18.861 [2024-10-01 20:36:13.815513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:38:18.861 [2024-10-01 20:36:13.815519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:38:18.861 [2024-10-01 20:36:13.815524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:18.861 [2024-10-01 20:36:13.815571] 
mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.183 ms, result 0 00:38:18.861 true 00:38:18.861 20:36:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:38:18.861 { 00:38:18.861 "name": "ftl", 00:38:18.861 "properties": [ 00:38:18.861 { 00:38:18.861 "name": "superblock_version", 00:38:18.861 "value": 5, 00:38:18.861 "read-only": true 00:38:18.861 }, 00:38:18.861 { 00:38:18.861 "name": "base_device", 00:38:18.861 "bands": [ 00:38:18.861 { 00:38:18.861 "id": 0, 00:38:18.861 "state": "FREE", 00:38:18.861 "validity": 0.0 00:38:18.861 }, 00:38:18.861 { 00:38:18.861 "id": 1, 00:38:18.861 "state": "FREE", 00:38:18.861 "validity": 0.0 00:38:18.861 }, 00:38:18.861 { 00:38:18.861 "id": 2, 00:38:18.861 "state": "FREE", 00:38:18.861 "validity": 0.0 00:38:18.861 }, 00:38:18.861 { 00:38:18.861 "id": 3, 00:38:18.861 "state": "FREE", 00:38:18.861 "validity": 0.0 00:38:18.861 }, 00:38:18.861 { 00:38:18.861 "id": 4, 00:38:18.861 "state": "FREE", 00:38:18.861 "validity": 0.0 00:38:18.861 }, 00:38:18.861 { 00:38:18.861 "id": 5, 00:38:18.861 "state": "FREE", 00:38:18.861 "validity": 0.0 00:38:18.861 }, 00:38:18.861 { 00:38:18.861 "id": 6, 00:38:18.861 "state": "FREE", 00:38:18.861 "validity": 0.0 00:38:18.861 }, 00:38:18.861 { 00:38:18.861 "id": 7, 00:38:18.861 "state": "FREE", 00:38:18.861 "validity": 0.0 00:38:18.861 }, 00:38:18.861 { 00:38:18.861 "id": 8, 00:38:18.861 "state": "FREE", 00:38:18.861 "validity": 0.0 00:38:18.861 }, 00:38:18.861 { 00:38:18.861 "id": 9, 00:38:18.861 "state": "FREE", 00:38:18.861 "validity": 0.0 00:38:18.861 }, 00:38:18.861 { 00:38:18.861 "id": 10, 00:38:18.861 "state": "FREE", 00:38:18.861 "validity": 0.0 00:38:18.861 }, 00:38:18.861 { 00:38:18.861 "id": 11, 00:38:18.861 "state": "FREE", 00:38:18.861 "validity": 0.0 00:38:18.861 }, 00:38:18.861 { 00:38:18.861 "id": 12, 00:38:18.861 "state": "FREE", 00:38:18.861 "validity": 0.0 00:38:18.861 }, 00:38:18.861 { 00:38:18.861 "id": 13, 00:38:18.861 "state": "FREE", 00:38:18.861 "validity": 0.0 00:38:18.861 }, 00:38:18.861 { 00:38:18.861 "id": 14, 00:38:18.861 "state": "FREE", 00:38:18.861 "validity": 0.0 00:38:18.861 }, 00:38:18.861 { 00:38:18.861 "id": 15, 00:38:18.861 "state": "FREE", 00:38:18.861 "validity": 0.0 00:38:18.861 }, 00:38:18.861 { 00:38:18.861 "id": 16, 00:38:18.861 "state": "FREE", 00:38:18.861 "validity": 0.0 00:38:18.861 }, 00:38:18.861 { 00:38:18.861 "id": 17, 00:38:18.861 "state": "FREE", 00:38:18.861 "validity": 0.0 00:38:18.861 } 00:38:18.861 ], 00:38:18.861 "read-only": true 00:38:18.861 }, 00:38:18.861 { 00:38:18.861 "name": "cache_device", 00:38:18.861 "type": "bdev", 00:38:18.861 "chunks": [ 00:38:18.861 { 00:38:18.861 "id": 0, 00:38:18.861 "state": "INACTIVE", 00:38:18.861 "utilization": 0.0 00:38:18.861 }, 00:38:18.861 { 00:38:18.861 "id": 1, 00:38:18.861 "state": "CLOSED", 00:38:18.861 "utilization": 1.0 00:38:18.861 }, 00:38:18.861 { 00:38:18.861 "id": 2, 00:38:18.861 "state": "CLOSED", 00:38:18.861 "utilization": 1.0 00:38:18.861 }, 00:38:18.861 { 00:38:18.861 "id": 3, 00:38:18.861 "state": "OPEN", 00:38:18.861 "utilization": 0.001953125 00:38:18.861 }, 00:38:18.861 { 00:38:18.861 "id": 4, 00:38:18.861 "state": "OPEN", 00:38:18.861 "utilization": 0.0 00:38:18.861 } 00:38:18.861 ], 00:38:18.861 "read-only": true 00:38:18.861 }, 00:38:18.861 { 00:38:18.861 "name": "verbose_mode", 00:38:18.861 "value": true, 00:38:18.861 "unit": "", 00:38:18.861 
"desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:38:18.861 }, 00:38:18.861 { 00:38:18.861 "name": "prep_upgrade_on_shutdown", 00:38:18.861 "value": true, 00:38:18.861 "unit": "", 00:38:18.861 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:38:18.861 } 00:38:18.861 ] 00:38:18.861 } 00:38:18.861 20:36:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:38:18.861 20:36:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 78262 ]] 00:38:18.861 20:36:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 78262 00:38:18.861 20:36:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@950 -- # '[' -z 78262 ']' 00:38:18.861 20:36:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # kill -0 78262 00:38:18.861 20:36:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # uname 00:38:18.861 20:36:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:38:18.861 20:36:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 78262 00:38:19.119 killing process with pid 78262 00:38:19.119 20:36:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:38:19.119 20:36:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:38:19.119 20:36:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@968 -- # echo 'killing process with pid 78262' 00:38:19.119 20:36:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@969 -- # kill 78262 00:38:19.119 20:36:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@974 -- # wait 78262 00:38:19.685 [2024-10-01 20:36:14.649798] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:38:19.685 [2024-10-01 20:36:14.661006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:19.685 [2024-10-01 20:36:14.661056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:38:19.685 [2024-10-01 20:36:14.661067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:38:19.685 [2024-10-01 20:36:14.661074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:19.685 [2024-10-01 20:36:14.661094] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:38:19.685 [2024-10-01 20:36:14.663248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:19.685 [2024-10-01 20:36:14.663281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:38:19.685 [2024-10-01 20:36:14.663290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.142 ms 00:38:19.685 [2024-10-01 20:36:14.663297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:27.789 [2024-10-01 20:36:22.199746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:27.789 [2024-10-01 20:36:22.199814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:38:27.789 [2024-10-01 20:36:22.199829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7536.405 ms 00:38:27.789 [2024-10-01 20:36:22.199837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:27.789 [2024-10-01 20:36:22.201166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:27.789 [2024-10-01 20:36:22.201187] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:38:27.789 [2024-10-01 20:36:22.201196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.314 ms 00:38:27.789 [2024-10-01 20:36:22.201204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:27.789 [2024-10-01 20:36:22.202345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:27.789 [2024-10-01 20:36:22.202370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:38:27.789 [2024-10-01 20:36:22.202378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.111 ms 00:38:27.789 [2024-10-01 20:36:22.202386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:27.789 [2024-10-01 20:36:22.211989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:27.789 [2024-10-01 20:36:22.212033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:38:27.789 [2024-10-01 20:36:22.212044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.568 ms 00:38:27.789 [2024-10-01 20:36:22.212052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:27.789 [2024-10-01 20:36:22.218143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:27.789 [2024-10-01 20:36:22.218330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:38:27.789 [2024-10-01 20:36:22.218353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.056 ms 00:38:27.789 [2024-10-01 20:36:22.218362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:27.789 [2024-10-01 20:36:22.218449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:27.789 [2024-10-01 20:36:22.218459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:38:27.789 [2024-10-01 20:36:22.218468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.054 ms 00:38:27.789 [2024-10-01 20:36:22.218475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:27.789 [2024-10-01 20:36:22.227381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:27.789 [2024-10-01 20:36:22.227419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:38:27.789 [2024-10-01 20:36:22.227430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.889 ms 00:38:27.789 [2024-10-01 20:36:22.227437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:27.789 [2024-10-01 20:36:22.236410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:27.789 [2024-10-01 20:36:22.236451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:38:27.789 [2024-10-01 20:36:22.236461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.941 ms 00:38:27.789 [2024-10-01 20:36:22.236468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:27.789 [2024-10-01 20:36:22.245267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:27.789 [2024-10-01 20:36:22.245441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:38:27.789 [2024-10-01 20:36:22.245456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.764 ms 00:38:27.789 [2024-10-01 20:36:22.245463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:27.789 [2024-10-01 20:36:22.254444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:27.789 
[2024-10-01 20:36:22.254495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:38:27.789 [2024-10-01 20:36:22.254507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.889 ms 00:38:27.789 [2024-10-01 20:36:22.254514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:27.789 [2024-10-01 20:36:22.254550] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:38:27.789 [2024-10-01 20:36:22.254565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:38:27.789 [2024-10-01 20:36:22.254575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:38:27.789 [2024-10-01 20:36:22.254584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:38:27.789 [2024-10-01 20:36:22.254592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:38:27.789 [2024-10-01 20:36:22.254600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:38:27.790 [2024-10-01 20:36:22.254608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:38:27.790 [2024-10-01 20:36:22.254628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:38:27.790 [2024-10-01 20:36:22.254637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:38:27.790 [2024-10-01 20:36:22.254644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:38:27.790 [2024-10-01 20:36:22.254652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:38:27.790 [2024-10-01 20:36:22.254659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:38:27.790 [2024-10-01 20:36:22.254667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:38:27.790 [2024-10-01 20:36:22.254674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:38:27.790 [2024-10-01 20:36:22.254682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:38:27.790 [2024-10-01 20:36:22.254710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:38:27.790 [2024-10-01 20:36:22.254719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:38:27.790 [2024-10-01 20:36:22.254726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:38:27.790 [2024-10-01 20:36:22.254734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:38:27.790 [2024-10-01 20:36:22.254743] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:38:27.790 [2024-10-01 20:36:22.254751] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 9d10aaf9-ef6a-43fb-b863-7a4397ee56b5 00:38:27.790 [2024-10-01 20:36:22.254762] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:38:27.790 [2024-10-01 20:36:22.254769] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 786752 00:38:27.790 [2024-10-01 20:36:22.254776] 
ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:38:27.790 [2024-10-01 20:36:22.254787] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:38:27.790 [2024-10-01 20:36:22.254794] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:38:27.790 [2024-10-01 20:36:22.254801] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:38:27.790 [2024-10-01 20:36:22.254808] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:38:27.790 [2024-10-01 20:36:22.254815] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:38:27.790 [2024-10-01 20:36:22.254821] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:38:27.790 [2024-10-01 20:36:22.254830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:27.790 [2024-10-01 20:36:22.254837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:38:27.790 [2024-10-01 20:36:22.254845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.281 ms 00:38:27.790 [2024-10-01 20:36:22.254854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:27.790 [2024-10-01 20:36:22.267470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:27.790 [2024-10-01 20:36:22.267518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:38:27.790 [2024-10-01 20:36:22.267529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.595 ms 00:38:27.790 [2024-10-01 20:36:22.267537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:27.790 [2024-10-01 20:36:22.267918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:27.790 [2024-10-01 20:36:22.267957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:38:27.790 [2024-10-01 20:36:22.267966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.352 ms 00:38:27.790 [2024-10-01 20:36:22.267973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:27.790 [2024-10-01 20:36:22.305219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:38:27.790 [2024-10-01 20:36:22.305260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:38:27.790 [2024-10-01 20:36:22.305272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:38:27.790 [2024-10-01 20:36:22.305280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:27.790 [2024-10-01 20:36:22.305321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:38:27.790 [2024-10-01 20:36:22.305330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:38:27.790 [2024-10-01 20:36:22.305338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:38:27.790 [2024-10-01 20:36:22.305347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:27.790 [2024-10-01 20:36:22.305423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:38:27.790 [2024-10-01 20:36:22.305433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:38:27.790 [2024-10-01 20:36:22.305441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:38:27.790 [2024-10-01 20:36:22.305448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:27.790 [2024-10-01 20:36:22.305465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:38:27.790 [2024-10-01 
20:36:22.305472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:38:27.790 [2024-10-01 20:36:22.305479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:38:27.790 [2024-10-01 20:36:22.305486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:27.790 [2024-10-01 20:36:22.383076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:38:27.790 [2024-10-01 20:36:22.383121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:38:27.790 [2024-10-01 20:36:22.383133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:38:27.790 [2024-10-01 20:36:22.383141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:27.790 [2024-10-01 20:36:22.446812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:38:27.790 [2024-10-01 20:36:22.446863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:38:27.790 [2024-10-01 20:36:22.446875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:38:27.790 [2024-10-01 20:36:22.446883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:27.790 [2024-10-01 20:36:22.446974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:38:27.790 [2024-10-01 20:36:22.446983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:38:27.790 [2024-10-01 20:36:22.446991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:38:27.790 [2024-10-01 20:36:22.446998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:27.790 [2024-10-01 20:36:22.447037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:38:27.790 [2024-10-01 20:36:22.447046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:38:27.790 [2024-10-01 20:36:22.447053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:38:27.790 [2024-10-01 20:36:22.447061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:27.790 [2024-10-01 20:36:22.447145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:38:27.790 [2024-10-01 20:36:22.447157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:38:27.790 [2024-10-01 20:36:22.447165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:38:27.790 [2024-10-01 20:36:22.447172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:27.790 [2024-10-01 20:36:22.447200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:38:27.790 [2024-10-01 20:36:22.447209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:38:27.790 [2024-10-01 20:36:22.447216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:38:27.790 [2024-10-01 20:36:22.447223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:27.790 [2024-10-01 20:36:22.447257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:38:27.790 [2024-10-01 20:36:22.447266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:38:27.790 [2024-10-01 20:36:22.447275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:38:27.790 [2024-10-01 20:36:22.447283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:27.790 [2024-10-01 20:36:22.447324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl] Rollback 00:38:27.790 [2024-10-01 20:36:22.447334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:38:27.790 [2024-10-01 20:36:22.447341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:38:27.790 [2024-10-01 20:36:22.447348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:27.790 [2024-10-01 20:36:22.447459] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 7786.411 ms, result 0 00:38:35.889 20:36:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:38:35.889 20:36:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:38:35.889 20:36:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:38:35.889 20:36:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:38:35.889 20:36:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:38:35.889 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:35.889 20:36:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=78820 00:38:35.889 20:36:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:38:35.889 20:36:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 78820 00:38:35.889 20:36:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 78820 ']' 00:38:35.889 20:36:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:35.889 20:36:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:38:35.889 20:36:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:35.889 20:36:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:38:35.889 20:36:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:38:35.889 20:36:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:38:35.889 [2024-10-01 20:36:30.967193] Starting SPDK v25.01-pre git sha1 0c2005fb5 / DPDK 24.03.0 initialization... 
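The 'FTL shutdown' process above finishes with result 0 and the suite immediately relaunches the target from the JSON config captured before shutdown, so the next startup has to replay the prepared upgrade state. A minimal sketch of that restart sequence, assuming only the paths printed in the trace (spdk_tgt, tgt.json, /var/tmp/spdk.sock); the poll loop below is a simplified stand-in for the waitforlisten helper, not the helper itself:

    #!/usr/bin/env bash
    # Relaunch the SPDK target from the pre-shutdown config and wait for RPC.
    SPDK_BIN=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    TGT_JSON=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    "$SPDK_BIN" '--cpumask=[0]' --config="$TGT_JSON" &
    spdk_tgt_pid=$!

    # Poll the default RPC socket until the target answers; rpc_get_methods
    # is a cheap query that succeeds once the app is up and listening.
    for _ in $(seq 1 100); do
        "$RPC" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
        sleep 0.5
    done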
00:38:35.889 [2024-10-01 20:36:30.968014] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78820 ] 00:38:36.146 [2024-10-01 20:36:31.122806] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:36.146 [2024-10-01 20:36:31.314743] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:38:37.078 [2024-10-01 20:36:32.196382] bdev.c:8310:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:38:37.078 [2024-10-01 20:36:32.196456] bdev.c:8310:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:38:37.337 [2024-10-01 20:36:32.340939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:37.337 [2024-10-01 20:36:32.341169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:38:37.337 [2024-10-01 20:36:32.341188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:38:37.337 [2024-10-01 20:36:32.341197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:37.337 [2024-10-01 20:36:32.341264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:37.337 [2024-10-01 20:36:32.341275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:38:37.337 [2024-10-01 20:36:32.341284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.043 ms 00:38:37.337 [2024-10-01 20:36:32.341291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:37.337 [2024-10-01 20:36:32.341321] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:38:37.337 [2024-10-01 20:36:32.342113] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:38:37.337 [2024-10-01 20:36:32.342136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:37.337 [2024-10-01 20:36:32.342147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:38:37.337 [2024-10-01 20:36:32.342156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.824 ms 00:38:37.337 [2024-10-01 20:36:32.342163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:37.337 [2024-10-01 20:36:32.343480] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:38:37.337 [2024-10-01 20:36:32.356278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:37.337 [2024-10-01 20:36:32.356470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:38:37.337 [2024-10-01 20:36:32.356489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.798 ms 00:38:37.337 [2024-10-01 20:36:32.356498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:37.337 [2024-10-01 20:36:32.356578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:37.337 [2024-10-01 20:36:32.356588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:38:37.337 [2024-10-01 20:36:32.356600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.031 ms 00:38:37.337 [2024-10-01 20:36:32.356607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:37.338 [2024-10-01 20:36:32.362424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:37.338 [2024-10-01 
20:36:32.362466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:38:37.338 [2024-10-01 20:36:32.362476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.751 ms 00:38:37.338 [2024-10-01 20:36:32.362484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:37.338 [2024-10-01 20:36:32.362553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:37.338 [2024-10-01 20:36:32.362565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:38:37.338 [2024-10-01 20:36:32.362573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.044 ms 00:38:37.338 [2024-10-01 20:36:32.362580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:37.338 [2024-10-01 20:36:32.362651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:37.338 [2024-10-01 20:36:32.362662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:38:37.338 [2024-10-01 20:36:32.362669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:38:37.338 [2024-10-01 20:36:32.362676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:37.338 [2024-10-01 20:36:32.362719] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:38:37.338 [2024-10-01 20:36:32.366032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:37.338 [2024-10-01 20:36:32.366065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:38:37.338 [2024-10-01 20:36:32.366084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.320 ms 00:38:37.338 [2024-10-01 20:36:32.366091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:37.338 [2024-10-01 20:36:32.366121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:37.338 [2024-10-01 20:36:32.366133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:38:37.338 [2024-10-01 20:36:32.366141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:38:37.338 [2024-10-01 20:36:32.366148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:37.338 [2024-10-01 20:36:32.366171] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:38:37.338 [2024-10-01 20:36:32.366190] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:38:37.338 [2024-10-01 20:36:32.366224] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:38:37.338 [2024-10-01 20:36:32.366239] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:38:37.338 [2024-10-01 20:36:32.366344] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:38:37.338 [2024-10-01 20:36:32.366355] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:38:37.338 [2024-10-01 20:36:32.366365] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:38:37.338 [2024-10-01 20:36:32.366375] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:38:37.338 [2024-10-01 20:36:32.366383] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device 
capacity: 5120.00 MiB 00:38:37.338 [2024-10-01 20:36:32.366391] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:38:37.338 [2024-10-01 20:36:32.366398] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:38:37.338 [2024-10-01 20:36:32.366406] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:38:37.338 [2024-10-01 20:36:32.366413] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:38:37.338 [2024-10-01 20:36:32.366420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:37.338 [2024-10-01 20:36:32.366427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:38:37.338 [2024-10-01 20:36:32.366438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.252 ms 00:38:37.338 [2024-10-01 20:36:32.366444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:37.338 [2024-10-01 20:36:32.366529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:37.338 [2024-10-01 20:36:32.366537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:38:37.338 [2024-10-01 20:36:32.366544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.069 ms 00:38:37.338 [2024-10-01 20:36:32.366552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:37.338 [2024-10-01 20:36:32.366669] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:38:37.338 [2024-10-01 20:36:32.366679] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:38:37.338 [2024-10-01 20:36:32.366687] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:38:37.338 [2024-10-01 20:36:32.366715] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:38:37.338 [2024-10-01 20:36:32.366722] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:38:37.338 [2024-10-01 20:36:32.366729] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:38:37.338 [2024-10-01 20:36:32.366736] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:38:37.338 [2024-10-01 20:36:32.366743] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:38:37.338 [2024-10-01 20:36:32.366751] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:38:37.338 [2024-10-01 20:36:32.366758] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:38:37.338 [2024-10-01 20:36:32.366764] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:38:37.338 [2024-10-01 20:36:32.366771] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:38:37.338 [2024-10-01 20:36:32.366780] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:38:37.338 [2024-10-01 20:36:32.366787] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:38:37.338 [2024-10-01 20:36:32.366794] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:38:37.338 [2024-10-01 20:36:32.366801] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:38:37.338 [2024-10-01 20:36:32.366808] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:38:37.338 [2024-10-01 20:36:32.366814] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:38:37.338 [2024-10-01 20:36:32.366821] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:38:37.338 [2024-10-01 20:36:32.366827] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:38:37.338 [2024-10-01 20:36:32.366834] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:38:37.338 [2024-10-01 20:36:32.366840] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:38:37.338 [2024-10-01 20:36:32.366847] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:38:37.338 [2024-10-01 20:36:32.366853] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:38:37.338 [2024-10-01 20:36:32.366866] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:38:37.338 [2024-10-01 20:36:32.366872] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:38:37.338 [2024-10-01 20:36:32.366879] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:38:37.338 [2024-10-01 20:36:32.366885] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:38:37.338 [2024-10-01 20:36:32.366892] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:38:37.338 [2024-10-01 20:36:32.366898] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:38:37.338 [2024-10-01 20:36:32.366904] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:38:37.338 [2024-10-01 20:36:32.366910] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:38:37.338 [2024-10-01 20:36:32.366917] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:38:37.338 [2024-10-01 20:36:32.366923] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:38:37.338 [2024-10-01 20:36:32.366929] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:38:37.338 [2024-10-01 20:36:32.366936] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:38:37.338 [2024-10-01 20:36:32.366942] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:38:37.338 [2024-10-01 20:36:32.366949] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:38:37.338 [2024-10-01 20:36:32.366955] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:38:37.338 [2024-10-01 20:36:32.366962] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:38:37.338 [2024-10-01 20:36:32.366968] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:38:37.338 [2024-10-01 20:36:32.366974] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:38:37.338 [2024-10-01 20:36:32.366981] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:38:37.339 [2024-10-01 20:36:32.366987] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:38:37.339 [2024-10-01 20:36:32.366995] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:38:37.339 [2024-10-01 20:36:32.367003] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:38:37.339 [2024-10-01 20:36:32.367010] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:38:37.339 [2024-10-01 20:36:32.367018] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:38:37.339 [2024-10-01 20:36:32.367025] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:38:37.339 [2024-10-01 20:36:32.367032] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:38:37.339 [2024-10-01 20:36:32.367039] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:38:37.339 [2024-10-01 20:36:32.367045] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:38:37.339 [2024-10-01 20:36:32.367052] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:38:37.339 [2024-10-01 20:36:32.367060] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:38:37.339 [2024-10-01 20:36:32.367070] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:38:37.339 [2024-10-01 20:36:32.367078] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:38:37.339 [2024-10-01 20:36:32.367085] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:38:37.339 [2024-10-01 20:36:32.367092] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:38:37.339 [2024-10-01 20:36:32.367098] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:38:37.339 [2024-10-01 20:36:32.367105] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:38:37.339 [2024-10-01 20:36:32.367112] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:38:37.339 [2024-10-01 20:36:32.367119] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:38:37.339 [2024-10-01 20:36:32.367126] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:38:37.339 [2024-10-01 20:36:32.367134] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:38:37.339 [2024-10-01 20:36:32.367140] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:38:37.339 [2024-10-01 20:36:32.367147] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:38:37.339 [2024-10-01 20:36:32.367154] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:38:37.339 [2024-10-01 20:36:32.367161] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:38:37.339 [2024-10-01 20:36:32.367168] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:38:37.339 [2024-10-01 20:36:32.367175] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:38:37.339 [2024-10-01 20:36:32.367183] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:38:37.339 [2024-10-01 20:36:32.367191] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:38:37.339 [2024-10-01 20:36:32.367198] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:38:37.339 [2024-10-01 20:36:32.367205] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:38:37.339 [2024-10-01 20:36:32.367212] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:38:37.339 [2024-10-01 20:36:32.367220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:37.339 [2024-10-01 20:36:32.367229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:38:37.339 [2024-10-01 20:36:32.367237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.620 ms 00:38:37.339 [2024-10-01 20:36:32.367244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:37.339 [2024-10-01 20:36:32.367286] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:38:37.339 [2024-10-01 20:36:32.367301] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:38:39.864 [2024-10-01 20:36:34.684241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:39.864 [2024-10-01 20:36:34.684304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:38:39.864 [2024-10-01 20:36:34.684328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2316.947 ms 00:38:39.864 [2024-10-01 20:36:34.684337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:39.864 [2024-10-01 20:36:34.711837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:39.864 [2024-10-01 20:36:34.711893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:38:39.864 [2024-10-01 20:36:34.711907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 27.283 ms 00:38:39.864 [2024-10-01 20:36:34.711916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:39.864 [2024-10-01 20:36:34.712020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:39.864 [2024-10-01 20:36:34.712031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:38:39.864 [2024-10-01 20:36:34.712040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:38:39.864 [2024-10-01 20:36:34.712048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:39.864 [2024-10-01 20:36:34.747094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:39.864 [2024-10-01 20:36:34.747149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:38:39.864 [2024-10-01 20:36:34.747161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 35.007 ms 00:38:39.864 [2024-10-01 20:36:34.747169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:39.864 [2024-10-01 20:36:34.747224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:39.864 [2024-10-01 20:36:34.747239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:38:39.864 [2024-10-01 20:36:34.747253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:38:39.864 [2024-10-01 20:36:34.747267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:39.864 [2024-10-01 20:36:34.747725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:39.864 [2024-10-01 20:36:34.747743] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:38:39.864 [2024-10-01 20:36:34.747753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.351 ms 00:38:39.864 [2024-10-01 20:36:34.747764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:39.864 [2024-10-01 20:36:34.747829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:39.864 [2024-10-01 20:36:34.747844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:38:39.864 [2024-10-01 20:36:34.747857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.028 ms 00:38:39.864 [2024-10-01 20:36:34.747870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:39.864 [2024-10-01 20:36:34.761970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:39.865 [2024-10-01 20:36:34.762251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:38:39.865 [2024-10-01 20:36:34.762271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.066 ms 00:38:39.865 [2024-10-01 20:36:34.762280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:39.865 [2024-10-01 20:36:34.774940] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:38:39.865 [2024-10-01 20:36:34.774989] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:38:39.865 [2024-10-01 20:36:34.775005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:39.865 [2024-10-01 20:36:34.775013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:38:39.865 [2024-10-01 20:36:34.775023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.589 ms 00:38:39.865 [2024-10-01 20:36:34.775030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:39.865 [2024-10-01 20:36:34.789208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:39.865 [2024-10-01 20:36:34.789384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:38:39.865 [2024-10-01 20:36:34.789402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.122 ms 00:38:39.865 [2024-10-01 20:36:34.789415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:39.865 [2024-10-01 20:36:34.800924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:39.865 [2024-10-01 20:36:34.800964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:38:39.865 [2024-10-01 20:36:34.800975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.463 ms 00:38:39.865 [2024-10-01 20:36:34.800983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:39.865 [2024-10-01 20:36:34.812347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:39.865 [2024-10-01 20:36:34.812500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:38:39.865 [2024-10-01 20:36:34.812516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.322 ms 00:38:39.865 [2024-10-01 20:36:34.812524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:39.865 [2024-10-01 20:36:34.813196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:39.865 [2024-10-01 20:36:34.813215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:38:39.865 [2024-10-01 
20:36:34.813224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.568 ms 00:38:39.865 [2024-10-01 20:36:34.813231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:39.865 [2024-10-01 20:36:34.870310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:39.865 [2024-10-01 20:36:34.870367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:38:39.865 [2024-10-01 20:36:34.870380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 57.057 ms 00:38:39.865 [2024-10-01 20:36:34.870389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:39.865 [2024-10-01 20:36:34.881617] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:38:39.865 [2024-10-01 20:36:34.882566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:39.865 [2024-10-01 20:36:34.882597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:38:39.865 [2024-10-01 20:36:34.882610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.116 ms 00:38:39.865 [2024-10-01 20:36:34.882617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:39.865 [2024-10-01 20:36:34.882740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:39.865 [2024-10-01 20:36:34.882752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:38:39.865 [2024-10-01 20:36:34.882761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:38:39.865 [2024-10-01 20:36:34.882768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:39.865 [2024-10-01 20:36:34.882823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:39.865 [2024-10-01 20:36:34.882833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:38:39.865 [2024-10-01 20:36:34.882843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:38:39.865 [2024-10-01 20:36:34.882851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:39.865 [2024-10-01 20:36:34.882872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:39.865 [2024-10-01 20:36:34.882880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:38:39.865 [2024-10-01 20:36:34.882888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:38:39.865 [2024-10-01 20:36:34.882895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:39.865 [2024-10-01 20:36:34.882927] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:38:39.865 [2024-10-01 20:36:34.882937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:39.865 [2024-10-01 20:36:34.882944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:38:39.865 [2024-10-01 20:36:34.882952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:38:39.865 [2024-10-01 20:36:34.882962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:39.865 [2024-10-01 20:36:34.907300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:39.865 [2024-10-01 20:36:34.907489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:38:39.865 [2024-10-01 20:36:34.907543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.318 ms 00:38:39.865 [2024-10-01 20:36:34.907566] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:39.865 [2024-10-01 20:36:34.907673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:39.865 [2024-10-01 20:36:34.907722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:38:39.865 [2024-10-01 20:36:34.907828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.040 ms 00:38:39.865 [2024-10-01 20:36:34.907848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:39.865 [2024-10-01 20:36:34.909056] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 2567.683 ms, result 0 00:38:39.865 [2024-10-01 20:36:34.923906] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:39.865 [2024-10-01 20:36:34.939898] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:38:39.865 [2024-10-01 20:36:34.948336] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:38:40.122 20:36:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:38:40.122 20:36:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:38:40.122 20:36:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:38:40.122 20:36:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:38:40.122 20:36:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:38:40.381 [2024-10-01 20:36:35.424718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:40.381 [2024-10-01 20:36:35.424767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:38:40.381 [2024-10-01 20:36:35.424779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:38:40.381 [2024-10-01 20:36:35.424788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:40.381 [2024-10-01 20:36:35.424810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:40.381 [2024-10-01 20:36:35.424820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:38:40.381 [2024-10-01 20:36:35.424828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:38:40.381 [2024-10-01 20:36:35.424835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:40.381 [2024-10-01 20:36:35.424858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:40.381 [2024-10-01 20:36:35.424866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:38:40.381 [2024-10-01 20:36:35.424873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:38:40.381 [2024-10-01 20:36:35.424881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:40.381 [2024-10-01 20:36:35.424938] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.221 ms, result 0 00:38:40.381 true 00:38:40.381 20:36:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:38:40.640 { 00:38:40.640 "name": "ftl", 00:38:40.640 "properties": [ 00:38:40.640 { 00:38:40.640 "name": "superblock_version", 00:38:40.640 "value": 5, 00:38:40.640 "read-only": true 00:38:40.640 }, 
00:38:40.640 { 00:38:40.640 "name": "base_device", 00:38:40.640 "bands": [ 00:38:40.640 { 00:38:40.640 "id": 0, 00:38:40.640 "state": "CLOSED", 00:38:40.640 "validity": 1.0 00:38:40.640 }, 00:38:40.640 { 00:38:40.640 "id": 1, 00:38:40.640 "state": "CLOSED", 00:38:40.640 "validity": 1.0 00:38:40.640 }, 00:38:40.640 { 00:38:40.640 "id": 2, 00:38:40.640 "state": "CLOSED", 00:38:40.640 "validity": 0.007843137254901933 00:38:40.640 }, 00:38:40.640 { 00:38:40.640 "id": 3, 00:38:40.640 "state": "FREE", 00:38:40.640 "validity": 0.0 00:38:40.640 }, 00:38:40.640 { 00:38:40.640 "id": 4, 00:38:40.640 "state": "FREE", 00:38:40.640 "validity": 0.0 00:38:40.640 }, 00:38:40.640 { 00:38:40.640 "id": 5, 00:38:40.640 "state": "FREE", 00:38:40.640 "validity": 0.0 00:38:40.640 }, 00:38:40.640 { 00:38:40.640 "id": 6, 00:38:40.640 "state": "FREE", 00:38:40.640 "validity": 0.0 00:38:40.640 }, 00:38:40.640 { 00:38:40.640 "id": 7, 00:38:40.640 "state": "FREE", 00:38:40.640 "validity": 0.0 00:38:40.640 }, 00:38:40.640 { 00:38:40.640 "id": 8, 00:38:40.640 "state": "FREE", 00:38:40.640 "validity": 0.0 00:38:40.640 }, 00:38:40.640 { 00:38:40.640 "id": 9, 00:38:40.640 "state": "FREE", 00:38:40.640 "validity": 0.0 00:38:40.640 }, 00:38:40.640 { 00:38:40.640 "id": 10, 00:38:40.640 "state": "FREE", 00:38:40.640 "validity": 0.0 00:38:40.640 }, 00:38:40.640 { 00:38:40.640 "id": 11, 00:38:40.640 "state": "FREE", 00:38:40.640 "validity": 0.0 00:38:40.640 }, 00:38:40.640 { 00:38:40.640 "id": 12, 00:38:40.640 "state": "FREE", 00:38:40.640 "validity": 0.0 00:38:40.640 }, 00:38:40.640 { 00:38:40.640 "id": 13, 00:38:40.640 "state": "FREE", 00:38:40.640 "validity": 0.0 00:38:40.640 }, 00:38:40.640 { 00:38:40.640 "id": 14, 00:38:40.640 "state": "FREE", 00:38:40.640 "validity": 0.0 00:38:40.640 }, 00:38:40.640 { 00:38:40.640 "id": 15, 00:38:40.640 "state": "FREE", 00:38:40.640 "validity": 0.0 00:38:40.640 }, 00:38:40.640 { 00:38:40.640 "id": 16, 00:38:40.640 "state": "FREE", 00:38:40.640 "validity": 0.0 00:38:40.640 }, 00:38:40.640 { 00:38:40.640 "id": 17, 00:38:40.640 "state": "FREE", 00:38:40.640 "validity": 0.0 00:38:40.640 } 00:38:40.640 ], 00:38:40.640 "read-only": true 00:38:40.640 }, 00:38:40.640 { 00:38:40.640 "name": "cache_device", 00:38:40.640 "type": "bdev", 00:38:40.640 "chunks": [ 00:38:40.640 { 00:38:40.640 "id": 0, 00:38:40.640 "state": "INACTIVE", 00:38:40.640 "utilization": 0.0 00:38:40.640 }, 00:38:40.640 { 00:38:40.640 "id": 1, 00:38:40.640 "state": "OPEN", 00:38:40.640 "utilization": 0.0 00:38:40.640 }, 00:38:40.640 { 00:38:40.640 "id": 2, 00:38:40.640 "state": "OPEN", 00:38:40.640 "utilization": 0.0 00:38:40.640 }, 00:38:40.640 { 00:38:40.640 "id": 3, 00:38:40.640 "state": "FREE", 00:38:40.640 "utilization": 0.0 00:38:40.640 }, 00:38:40.640 { 00:38:40.640 "id": 4, 00:38:40.640 "state": "FREE", 00:38:40.640 "utilization": 0.0 00:38:40.640 } 00:38:40.640 ], 00:38:40.640 "read-only": true 00:38:40.640 }, 00:38:40.640 { 00:38:40.640 "name": "verbose_mode", 00:38:40.640 "value": true, 00:38:40.641 "unit": "", 00:38:40.641 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:38:40.641 }, 00:38:40.641 { 00:38:40.641 "name": "prep_upgrade_on_shutdown", 00:38:40.641 "value": false, 00:38:40.641 "unit": "", 00:38:40.641 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:38:40.641 } 00:38:40.641 ] 00:38:40.641 } 00:38:40.641 20:36:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 00:38:40.641 20:36:35 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:38:40.641 20:36:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:38:40.641 20:36:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:38:40.641 20:36:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:38:40.899 20:36:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:38:40.899 20:36:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:38:40.899 20:36:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:38:40.899 Validate MD5 checksum, iteration 1 00:38:40.899 20:36:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:38:40.899 20:36:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:38:40.899 20:36:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:38:40.899 20:36:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:38:40.899 20:36:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:38:40.899 20:36:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:38:40.899 20:36:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:38:40.899 20:36:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:38:40.899 20:36:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:38:40.899 20:36:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:38:40.899 20:36:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:38:40.899 20:36:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:38:40.899 20:36:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:38:41.158 [2024-10-01 20:36:36.126096] Starting SPDK v25.01-pre git sha1 0c2005fb5 / DPDK 24.03.0 initialization... 
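
The two jq assertions traced above reduce the properties dump to a pair of counters: utilized cache chunks and still-open bands, both of which must be zero after a clean shutdown. A minimal bash sketch of that check, reconstructed from the xtrace rather than taken verbatim from upgrade_shutdown.sh ($rpc_py is shorthand for the traced scripts/rpc.py invocation; the jq programs are copied from the trace):

    # Expected 0 after a clean shutdown: no cache chunk still holds data...
    used=$($rpc_py bdev_ftl_get_properties -b ftl \
        | jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length')
    [[ $used -ne 0 ]] && exit 1

    # ...and no band was left open mid-write.
    opened=$($rpc_py bdev_ftl_get_properties -b ftl \
        | jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length')
    [[ $opened -ne 0 ]] && exit 1
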
00:38:41.158 [2024-10-01 20:36:36.126407] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78889 ] 00:38:41.158 [2024-10-01 20:36:36.268118] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:41.416 [2024-10-01 20:36:36.459094] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:38:45.039  Copying: 710/1024 [MB] (710 MBps) Copying: 1024/1024 [MB] (average 688 MBps) 00:38:45.039 00:38:45.039 20:36:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:38:45.039 20:36:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:38:46.937 20:36:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:38:46.937 Validate MD5 checksum, iteration 2 00:38:46.937 20:36:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=a70a822c781c067a5d333bd36b083463 00:38:46.937 20:36:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ a70a822c781c067a5d333bd36b083463 != \a\7\0\a\8\2\2\c\7\8\1\c\0\6\7\a\5\d\3\3\3\b\d\3\6\b\0\8\3\4\6\3 ]] 00:38:46.937 20:36:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:38:46.937 20:36:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:38:46.937 20:36:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:38:46.937 20:36:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:38:46.937 20:36:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:38:46.937 20:36:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:38:46.937 20:36:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:38:46.937 20:36:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:38:46.937 20:36:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:38:47.195 [2024-10-01 20:36:42.163269] Starting SPDK v25.01-pre git sha1 0c2005fb5 / DPDK 24.03.0 initialization... 
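
Each "Validate MD5 checksum" iteration follows the pattern visible in the trace: read 1 GiB from ftln1 over NVMe/TCP with spdk_dd, advance the offset by the amount read, and compare the file's md5 against a recorded reference. A condensed sketch of the loop, assuming ref_md5 as the name for the array of expected sums and $testfile for the traced test/ftl/file path (both assumptions; the real helper is test_validate_checksum):

    skip=0
    for ((i = 0; i < iterations; i++)); do
        echo "Validate MD5 checksum, iteration $((i + 1))"
        # tcp_dd wraps spdk_dd with the NVMe/TCP initiator config seen above
        tcp_dd --ib=ftln1 --of="$testfile" --bs=1048576 --count=1024 --qd=2 --skip=$skip
        skip=$((skip + 1024))
        sum=$(md5sum "$testfile" | cut -f1 -d' ')
        [[ $sum == "${ref_md5[i]}" ]] || return 1   # ref_md5: assumed name for the stored sums
    done
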
00:38:47.195 [2024-10-01 20:36:42.163387] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78956 ] 00:38:47.196 [2024-10-01 20:36:42.311813] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:47.453 [2024-10-01 20:36:42.526641] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:38:53.859  Copying: 699/1024 [MB] (699 MBps) Copying: 1024/1024 [MB] (average 708 MBps) 00:38:53.859 00:38:53.859 20:36:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:38:53.859 20:36:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:38:55.760 20:36:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:38:55.760 20:36:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=0ab44fbef506c44feb64521a186e33ad 00:38:55.760 20:36:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 0ab44fbef506c44feb64521a186e33ad != \0\a\b\4\4\f\b\e\f\5\0\6\c\4\4\f\e\b\6\4\5\2\1\a\1\8\6\e\3\3\a\d ]] 00:38:55.760 20:36:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:38:55.760 20:36:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:38:55.760 20:36:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:38:55.760 20:36:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 78820 ]] 00:38:55.760 20:36:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 78820 00:38:55.760 20:36:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:38:55.760 20:36:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:38:55.760 20:36:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:38:55.760 20:36:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:38:55.760 20:36:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:38:55.760 20:36:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=79052 00:38:55.760 20:36:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:38:55.760 20:36:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:38:55.760 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:55.760 20:36:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 79052 00:38:55.760 20:36:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 79052 ']' 00:38:55.760 20:36:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:55.760 20:36:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:38:55.760 20:36:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
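
This is the pivotal step of the test: tcp_target_shutdown_dirty sends SIGKILL, so FTL gets no chance to flush metadata or write its clean-shutdown marker, and tcp_target_setup immediately brings up a fresh target (pid 79052) against the same tgt.json. A sketch of the two helpers as the xtrace shows them; the variable names come from the "Killed" notice in the log, but the bodies are reconstructed, not verbatim:

    # Dirty shutdown: no bdev_ftl unload, no clean-state marker gets written.
    kill -9 "$spdk_tgt_pid"
    unset spdk_tgt_pid

    # Restart against the same config; FTL must come up from a dirty state.
    $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" &
    spdk_tgt_pid=$!
    waitforlisten "$spdk_tgt_pid"   # blocks until /var/tmp/spdk.sock answers
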
00:38:55.760 20:36:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:38:55.760 20:36:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:38:55.760 [2024-10-01 20:36:50.884620] Starting SPDK v25.01-pre git sha1 0c2005fb5 / DPDK 24.03.0 initialization... 00:38:55.760 [2024-10-01 20:36:50.885045] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79052 ] 00:38:56.018 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 830: 78820 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:38:56.019 [2024-10-01 20:36:51.019430] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:56.019 [2024-10-01 20:36:51.180440] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:38:56.955 [2024-10-01 20:36:51.929611] bdev.c:8310:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:38:56.955 [2024-10-01 20:36:51.929674] bdev.c:8310:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:38:56.955 [2024-10-01 20:36:52.073220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:56.955 [2024-10-01 20:36:52.073279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:38:56.955 [2024-10-01 20:36:52.073290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:38:56.955 [2024-10-01 20:36:52.073296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:56.955 [2024-10-01 20:36:52.073346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:56.955 [2024-10-01 20:36:52.073354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:38:56.955 [2024-10-01 20:36:52.073362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.034 ms 00:38:56.955 [2024-10-01 20:36:52.073368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:56.955 [2024-10-01 20:36:52.073392] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:38:56.955 [2024-10-01 20:36:52.074045] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:38:56.955 [2024-10-01 20:36:52.074081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:56.955 [2024-10-01 20:36:52.074090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:38:56.955 [2024-10-01 20:36:52.074097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.697 ms 00:38:56.955 [2024-10-01 20:36:52.074103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:56.955 [2024-10-01 20:36:52.074390] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:38:56.955 [2024-10-01 20:36:52.087847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:56.955 [2024-10-01 20:36:52.087897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:38:56.955 [2024-10-01 20:36:52.087908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.457 ms 00:38:56.955 [2024-10-01 20:36:52.087915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:56.955 [2024-10-01 20:36:52.095407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] 
Action 00:38:56.955 [2024-10-01 20:36:52.095594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:38:56.955 [2024-10-01 20:36:52.095609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.033 ms 00:38:56.955 [2024-10-01 20:36:52.095616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:56.955 [2024-10-01 20:36:52.095965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:56.955 [2024-10-01 20:36:52.095983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:38:56.955 [2024-10-01 20:36:52.095991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.234 ms 00:38:56.955 [2024-10-01 20:36:52.095997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:56.955 [2024-10-01 20:36:52.096043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:56.955 [2024-10-01 20:36:52.096051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:38:56.955 [2024-10-01 20:36:52.096058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.032 ms 00:38:56.955 [2024-10-01 20:36:52.096065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:56.955 [2024-10-01 20:36:52.096088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:56.955 [2024-10-01 20:36:52.096097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:38:56.955 [2024-10-01 20:36:52.096103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:38:56.955 [2024-10-01 20:36:52.096109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:56.955 [2024-10-01 20:36:52.096128] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:38:56.955 [2024-10-01 20:36:52.099138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:56.955 [2024-10-01 20:36:52.099168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:38:56.955 [2024-10-01 20:36:52.099176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.016 ms 00:38:56.955 [2024-10-01 20:36:52.099182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:56.955 [2024-10-01 20:36:52.099212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:56.955 [2024-10-01 20:36:52.099219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:38:56.955 [2024-10-01 20:36:52.099226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:38:56.955 [2024-10-01 20:36:52.099232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:56.955 [2024-10-01 20:36:52.099249] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:38:56.955 [2024-10-01 20:36:52.099267] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:38:56.955 [2024-10-01 20:36:52.099295] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:38:56.955 [2024-10-01 20:36:52.099307] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:38:56.955 [2024-10-01 20:36:52.099390] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:38:56.955 [2024-10-01 20:36:52.099398] upgrade/ftl_sb_v5.c: 
101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:38:56.955 [2024-10-01 20:36:52.099406] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:38:56.955 [2024-10-01 20:36:52.099414] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:38:56.955 [2024-10-01 20:36:52.099423] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:38:56.955 [2024-10-01 20:36:52.099429] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:38:56.955 [2024-10-01 20:36:52.099435] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:38:56.955 [2024-10-01 20:36:52.099441] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:38:56.955 [2024-10-01 20:36:52.099447] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:38:56.955 [2024-10-01 20:36:52.099453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:56.955 [2024-10-01 20:36:52.099459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:38:56.955 [2024-10-01 20:36:52.099465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.206 ms 00:38:56.955 [2024-10-01 20:36:52.099471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:56.955 [2024-10-01 20:36:52.099538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:56.955 [2024-10-01 20:36:52.099544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:38:56.955 [2024-10-01 20:36:52.099552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.055 ms 00:38:56.955 [2024-10-01 20:36:52.099558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:56.955 [2024-10-01 20:36:52.099656] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:38:56.955 [2024-10-01 20:36:52.099664] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:38:56.955 [2024-10-01 20:36:52.099670] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:38:56.955 [2024-10-01 20:36:52.099676] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:38:56.955 [2024-10-01 20:36:52.099683] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:38:56.955 [2024-10-01 20:36:52.099688] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:38:56.955 [2024-10-01 20:36:52.099713] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:38:56.955 [2024-10-01 20:36:52.099721] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:38:56.955 [2024-10-01 20:36:52.099731] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:38:56.955 [2024-10-01 20:36:52.099739] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:38:56.955 [2024-10-01 20:36:52.099745] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:38:56.955 [2024-10-01 20:36:52.099751] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:38:56.955 [2024-10-01 20:36:52.099757] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:38:56.955 [2024-10-01 20:36:52.099763] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:38:56.956 [2024-10-01 20:36:52.099769] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 
00:38:56.956 [2024-10-01 20:36:52.099774] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:38:56.956 [2024-10-01 20:36:52.099780] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:38:56.956 [2024-10-01 20:36:52.099785] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:38:56.956 [2024-10-01 20:36:52.099790] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:38:56.956 [2024-10-01 20:36:52.099795] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:38:56.956 [2024-10-01 20:36:52.099800] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:38:56.956 [2024-10-01 20:36:52.099805] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:38:56.956 [2024-10-01 20:36:52.099816] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:38:56.956 [2024-10-01 20:36:52.099822] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:38:56.956 [2024-10-01 20:36:52.099827] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:38:56.956 [2024-10-01 20:36:52.099831] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:38:56.956 [2024-10-01 20:36:52.099837] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:38:56.956 [2024-10-01 20:36:52.099842] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:38:56.956 [2024-10-01 20:36:52.099847] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:38:56.956 [2024-10-01 20:36:52.099852] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:38:56.956 [2024-10-01 20:36:52.099857] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:38:56.956 [2024-10-01 20:36:52.099862] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:38:56.956 [2024-10-01 20:36:52.099867] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:38:56.956 [2024-10-01 20:36:52.099872] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:38:56.956 [2024-10-01 20:36:52.099877] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:38:56.956 [2024-10-01 20:36:52.099882] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:38:56.956 [2024-10-01 20:36:52.099887] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:38:56.956 [2024-10-01 20:36:52.099892] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:38:56.956 [2024-10-01 20:36:52.099898] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:38:56.956 [2024-10-01 20:36:52.099903] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:38:56.956 [2024-10-01 20:36:52.099912] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:38:56.956 [2024-10-01 20:36:52.099917] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:38:56.956 [2024-10-01 20:36:52.099923] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:38:56.956 [2024-10-01 20:36:52.099928] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:38:56.956 [2024-10-01 20:36:52.099934] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:38:56.956 [2024-10-01 20:36:52.099941] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:38:56.956 [2024-10-01 20:36:52.099948] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 
0.12 MiB 00:38:56.956 [2024-10-01 20:36:52.099954] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:38:56.956 [2024-10-01 20:36:52.099960] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:38:56.956 [2024-10-01 20:36:52.099965] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:38:56.956 [2024-10-01 20:36:52.099970] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:38:56.956 [2024-10-01 20:36:52.099975] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:38:56.956 [2024-10-01 20:36:52.099980] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:38:56.956 [2024-10-01 20:36:52.099987] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:38:56.956 [2024-10-01 20:36:52.099994] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:38:56.956 [2024-10-01 20:36:52.100000] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:38:56.956 [2024-10-01 20:36:52.100006] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:38:56.956 [2024-10-01 20:36:52.100012] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:38:56.956 [2024-10-01 20:36:52.100018] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:38:56.956 [2024-10-01 20:36:52.100023] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:38:56.956 [2024-10-01 20:36:52.100029] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:38:56.956 [2024-10-01 20:36:52.100034] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:38:56.956 [2024-10-01 20:36:52.100039] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:38:56.956 [2024-10-01 20:36:52.100045] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:38:56.956 [2024-10-01 20:36:52.100050] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:38:56.956 [2024-10-01 20:36:52.100055] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:38:56.956 [2024-10-01 20:36:52.100061] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:38:56.956 [2024-10-01 20:36:52.100066] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:38:56.956 [2024-10-01 20:36:52.100072] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:38:56.956 [2024-10-01 20:36:52.100078] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata 
layout - base dev: 00:38:56.956 [2024-10-01 20:36:52.100084] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:38:56.956 [2024-10-01 20:36:52.100091] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:38:56.956 [2024-10-01 20:36:52.100100] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:38:56.956 [2024-10-01 20:36:52.100106] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:38:56.956 [2024-10-01 20:36:52.100111] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:38:56.956 [2024-10-01 20:36:52.100117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:56.956 [2024-10-01 20:36:52.100123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:38:56.956 [2024-10-01 20:36:52.100129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.515 ms 00:38:56.956 [2024-10-01 20:36:52.100135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:56.956 [2024-10-01 20:36:52.121806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:56.956 [2024-10-01 20:36:52.121985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:38:56.956 [2024-10-01 20:36:52.121999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 21.627 ms 00:38:56.956 [2024-10-01 20:36:52.122006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:56.956 [2024-10-01 20:36:52.122056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:56.956 [2024-10-01 20:36:52.122067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:38:56.956 [2024-10-01 20:36:52.122073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:38:56.956 [2024-10-01 20:36:52.122079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:56.956 [2024-10-01 20:36:52.147779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:56.956 [2024-10-01 20:36:52.147821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:38:56.956 [2024-10-01 20:36:52.147831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.642 ms 00:38:56.956 [2024-10-01 20:36:52.147840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:56.956 [2024-10-01 20:36:52.147873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:56.956 [2024-10-01 20:36:52.147880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:38:56.956 [2024-10-01 20:36:52.147887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:38:56.956 [2024-10-01 20:36:52.147893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:56.956 [2024-10-01 20:36:52.147989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:56.956 [2024-10-01 20:36:52.147997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:38:56.956 [2024-10-01 20:36:52.148004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.040 ms 00:38:56.956 [2024-10-01 20:36:52.148010] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:38:56.956 [2024-10-01 20:36:52.148046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:56.956 [2024-10-01 20:36:52.148052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:38:56.956 [2024-10-01 20:36:52.148058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:38:56.956 [2024-10-01 20:36:52.148064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:56.956 [2024-10-01 20:36:52.159757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:56.956 [2024-10-01 20:36:52.159796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:38:56.956 [2024-10-01 20:36:52.159805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.674 ms 00:38:56.956 [2024-10-01 20:36:52.159812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:56.956 [2024-10-01 20:36:52.159919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:56.956 [2024-10-01 20:36:52.159928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:38:56.956 [2024-10-01 20:36:52.159935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:38:56.956 [2024-10-01 20:36:52.159943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:57.216 [2024-10-01 20:36:52.173144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:57.216 [2024-10-01 20:36:52.173278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:38:57.216 [2024-10-01 20:36:52.173297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.185 ms 00:38:57.216 [2024-10-01 20:36:52.173306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:57.216 [2024-10-01 20:36:52.180975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:57.216 [2024-10-01 20:36:52.181062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:38:57.216 [2024-10-01 20:36:52.181109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.447 ms 00:38:57.216 [2024-10-01 20:36:52.181127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:57.216 [2024-10-01 20:36:52.228133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:57.216 [2024-10-01 20:36:52.228304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:38:57.216 [2024-10-01 20:36:52.228349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 46.933 ms 00:38:57.216 [2024-10-01 20:36:52.228367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:57.216 [2024-10-01 20:36:52.228515] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:38:57.216 [2024-10-01 20:36:52.228633] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:38:57.216 [2024-10-01 20:36:52.228781] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:38:57.216 [2024-10-01 20:36:52.228898] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:38:57.216 [2024-10-01 20:36:52.228929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:57.216 [2024-10-01 20:36:52.228972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:38:57.216 [2024-10-01 
20:36:52.228991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.512 ms 00:38:57.216 [2024-10-01 20:36:52.229005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:57.216 [2024-10-01 20:36:52.229078] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:38:57.216 [2024-10-01 20:36:52.229482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:57.216 [2024-10-01 20:36:52.229535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:38:57.216 [2024-10-01 20:36:52.229556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.402 ms 00:38:57.216 [2024-10-01 20:36:52.229593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:57.216 [2024-10-01 20:36:52.242216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:57.216 [2024-10-01 20:36:52.242334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:38:57.216 [2024-10-01 20:36:52.242376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.577 ms 00:38:57.216 [2024-10-01 20:36:52.242395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:57.216 [2024-10-01 20:36:52.249438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:57.216 [2024-10-01 20:36:52.249534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:38:57.216 [2024-10-01 20:36:52.249578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:38:57.216 [2024-10-01 20:36:52.249595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:57.216 [2024-10-01 20:36:52.249688] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:38:57.216 [2024-10-01 20:36:52.249838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:57.216 [2024-10-01 20:36:52.249862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:38:57.216 [2024-10-01 20:36:52.249912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.151 ms 00:38:57.216 [2024-10-01 20:36:52.249929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:57.784 [2024-10-01 20:36:52.688246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:57.784 [2024-10-01 20:36:52.688454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:38:57.784 [2024-10-01 20:36:52.688559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 437.577 ms 00:38:57.784 [2024-10-01 20:36:52.688586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:57.784 [2024-10-01 20:36:52.692621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:57.784 [2024-10-01 20:36:52.692764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:38:57.784 [2024-10-01 20:36:52.692828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.945 ms 00:38:57.784 [2024-10-01 20:36:52.692887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:57.784 [2024-10-01 20:36:52.693335] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 262144, seq id 14 00:38:57.784 [2024-10-01 20:36:52.693442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:57.784 [2024-10-01 20:36:52.693545] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:38:57.784 [2024-10-01 20:36:52.693570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.511 ms 00:38:57.784 [2024-10-01 20:36:52.693594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:57.784 [2024-10-01 20:36:52.693636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:57.784 [2024-10-01 20:36:52.693702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:38:57.784 [2024-10-01 20:36:52.693728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:38:57.784 [2024-10-01 20:36:52.693747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:57.784 [2024-10-01 20:36:52.693799] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 444.102 ms, result 0 00:38:57.784 [2024-10-01 20:36:52.693903] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:38:57.784 [2024-10-01 20:36:52.694017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:57.784 [2024-10-01 20:36:52.694043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:38:57.784 [2024-10-01 20:36:52.694079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.115 ms 00:38:57.784 [2024-10-01 20:36:52.694139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:58.043 [2024-10-01 20:36:53.113978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:58.043 [2024-10-01 20:36:53.114158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:38:58.043 [2024-10-01 20:36:53.114225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 418.889 ms 00:38:58.043 [2024-10-01 20:36:53.114248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:58.043 [2024-10-01 20:36:53.118171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:58.043 [2024-10-01 20:36:53.118284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:38:58.043 [2024-10-01 20:36:53.118342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.856 ms 00:38:58.043 [2024-10-01 20:36:53.118365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:58.043 [2024-10-01 20:36:53.118662] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:38:58.043 [2024-10-01 20:36:53.118804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:58.043 [2024-10-01 20:36:53.118870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:38:58.043 [2024-10-01 20:36:53.118894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.400 ms 00:38:58.043 [2024-10-01 20:36:53.118948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:58.043 [2024-10-01 20:36:53.118992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:58.043 [2024-10-01 20:36:53.119015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:38:58.043 [2024-10-01 20:36:53.119062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:38:58.043 [2024-10-01 20:36:53.119083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:58.043 [2024-10-01 
20:36:53.119164] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 425.258 ms, result 0 00:38:58.043 [2024-10-01 20:36:53.119230] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:38:58.043 [2024-10-01 20:36:53.119302] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:38:58.043 [2024-10-01 20:36:53.119334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:58.043 [2024-10-01 20:36:53.119357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:38:58.043 [2024-10-01 20:36:53.119406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 869.660 ms 00:38:58.043 [2024-10-01 20:36:53.119453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:58.043 [2024-10-01 20:36:53.119499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:58.043 [2024-10-01 20:36:53.119548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:38:58.043 [2024-10-01 20:36:53.119570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:38:58.043 [2024-10-01 20:36:53.119589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:58.043 [2024-10-01 20:36:53.130676] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:38:58.043 [2024-10-01 20:36:53.130814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:58.043 [2024-10-01 20:36:53.130825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:38:58.043 [2024-10-01 20:36:53.130834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.197 ms 00:38:58.043 [2024-10-01 20:36:53.130845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:58.043 [2024-10-01 20:36:53.131507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:58.043 [2024-10-01 20:36:53.131532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:38:58.043 [2024-10-01 20:36:53.131541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.599 ms 00:38:58.043 [2024-10-01 20:36:53.131548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:58.043 [2024-10-01 20:36:53.133806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:58.043 [2024-10-01 20:36:53.133901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:38:58.043 [2024-10-01 20:36:53.133914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.242 ms 00:38:58.043 [2024-10-01 20:36:53.133925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:58.043 [2024-10-01 20:36:53.133963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:58.043 [2024-10-01 20:36:53.133971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:38:58.043 [2024-10-01 20:36:53.133979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:38:58.043 [2024-10-01 20:36:53.133987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:58.043 [2024-10-01 20:36:53.134095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:58.043 [2024-10-01 20:36:53.134105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:38:58.043 
[2024-10-01 20:36:53.134113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:38:58.043 [2024-10-01 20:36:53.134120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:58.044 [2024-10-01 20:36:53.134141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:58.044 [2024-10-01 20:36:53.134149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:38:58.044 [2024-10-01 20:36:53.134156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:38:58.044 [2024-10-01 20:36:53.134164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:58.044 [2024-10-01 20:36:53.134189] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:38:58.044 [2024-10-01 20:36:53.134199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:58.044 [2024-10-01 20:36:53.134206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:38:58.044 [2024-10-01 20:36:53.134214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:38:58.044 [2024-10-01 20:36:53.134222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:58.044 [2024-10-01 20:36:53.134276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:58.044 [2024-10-01 20:36:53.134285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:38:58.044 [2024-10-01 20:36:53.134292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.033 ms 00:38:58.044 [2024-10-01 20:36:53.134299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:58.044 [2024-10-01 20:36:53.135151] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1061.503 ms, result 0 00:38:58.044 [2024-10-01 20:36:53.147492] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:58.044 [2024-10-01 20:36:53.163472] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:38:58.044 [2024-10-01 20:36:53.171579] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:38:58.302 Validate MD5 checksum, iteration 1 00:38:58.302 20:36:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:38:58.302 20:36:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:38:58.302 20:36:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:38:58.302 20:36:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:38:58.302 20:36:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:38:58.302 20:36:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:38:58.302 20:36:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:38:58.302 20:36:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:38:58.302 20:36:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:38:58.302 20:36:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:38:58.302 20:36:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:38:58.302 20:36:53 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:38:58.302 20:36:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:38:58.302 20:36:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:38:58.302 20:36:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:38:58.302 [2024-10-01 20:36:53.359380] Starting SPDK v25.01-pre git sha1 0c2005fb5 / DPDK 24.03.0 initialization... 00:38:58.302 [2024-10-01 20:36:53.359584] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79087 ] 00:38:58.302 [2024-10-01 20:36:53.500749] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:58.560 [2024-10-01 20:36:53.648440] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:39:02.093  Copying: 688/1024 [MB] (688 MBps) Copying: 1024/1024 [MB] (average 676 MBps) 00:39:02.093 00:39:02.093 20:36:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:39:02.093 20:36:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:39:03.986 20:36:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:39:03.986 20:36:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=a70a822c781c067a5d333bd36b083463 00:39:03.986 20:36:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ a70a822c781c067a5d333bd36b083463 != \a\7\0\a\8\2\2\c\7\8\1\c\0\6\7\a\5\d\3\3\3\b\d\3\6\b\0\8\3\4\6\3 ]] 00:39:03.986 20:36:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:39:03.986 20:36:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:39:03.986 20:36:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:39:03.986 Validate MD5 checksum, iteration 2 00:39:03.986 20:36:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:39:03.986 20:36:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:39:03.986 20:36:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:39:03.986 20:36:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:39:03.986 20:36:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:39:03.986 20:36:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:39:03.986 [2024-10-01 20:36:58.863792] Starting SPDK v25.01-pre git sha1 0c2005fb5 / DPDK 24.03.0 initialization... 
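
Everything from "SHM: clean 0, shm_clean 0" through the two "Recover open chunk" management processes above is the dirty-start path, and this second test_validate_checksum pass is the actual verdict: the sums computed after kill -9 must match the ones recorded before it. Condensed, with the sums verbatim from this log (iteration 2 of the re-check completes just below; the pre_kill/post_kill arrays are illustrative, not script variables):

    pre_kill=(a70a822c781c067a5d333bd36b083463 0ab44fbef506c44feb64521a186e33ad)
    post_kill=(a70a822c781c067a5d333bd36b083463 0ab44fbef506c44feb64521a186e33ad)
    [[ ${pre_kill[0]} == "${post_kill[0]}" && ${pre_kill[1]} == "${post_kill[1]}" ]]   # data survived the dirty shutdown
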
00:39:03.986 [2024-10-01 20:36:58.864070] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79148 ] 00:39:03.986 [2024-10-01 20:36:59.008811] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:03.986 [2024-10-01 20:36:59.148989] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:39:07.091  Copying: 694/1024 [MB] (694 MBps) Copying: 1024/1024 [MB] (average 691 MBps) 00:39:07.091 00:39:07.349 20:37:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:39:07.349 20:37:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:39:09.248 20:37:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:39:09.248 20:37:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=0ab44fbef506c44feb64521a186e33ad 00:39:09.248 20:37:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 0ab44fbef506c44feb64521a186e33ad != \0\a\b\4\4\f\b\e\f\5\0\6\c\4\4\f\e\b\6\4\5\2\1\a\1\8\6\e\3\3\a\d ]] 00:39:09.248 20:37:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:39:09.248 20:37:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:39:09.248 20:37:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:39:09.248 20:37:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:39:09.248 20:37:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:39:09.248 20:37:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:39:09.248 20:37:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:39:09.248 20:37:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:39:09.248 20:37:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:39:09.248 20:37:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:39:09.248 20:37:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 79052 ]] 00:39:09.248 20:37:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 79052 00:39:09.248 20:37:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@950 -- # '[' -z 79052 ']' 00:39:09.248 20:37:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # kill -0 79052 00:39:09.248 20:37:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # uname 00:39:09.248 20:37:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:39:09.248 20:37:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79052 00:39:09.248 killing process with pid 79052 00:39:09.248 20:37:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:39:09.248 20:37:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:39:09.248 20:37:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79052' 00:39:09.248 20:37:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@969 -- # kill 79052 00:39:09.248 20:37:04 ftl.ftl_upgrade_shutdown -- 
common/autotest_common.sh@974 -- # wait 79052 00:39:09.814 [2024-10-01 20:37:04.983382] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:39:09.814 [2024-10-01 20:37:04.993982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:39:09.814 [2024-10-01 20:37:04.994020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:39:09.814 [2024-10-01 20:37:04.994039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:39:09.814 [2024-10-01 20:37:04.994045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:09.814 [2024-10-01 20:37:04.994062] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:39:09.814 [2024-10-01 20:37:04.996226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:39:09.814 [2024-10-01 20:37:04.996248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:39:09.814 [2024-10-01 20:37:04.996257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.153 ms 00:39:09.814 [2024-10-01 20:37:04.996263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:09.814 [2024-10-01 20:37:04.996438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:39:09.814 [2024-10-01 20:37:04.996446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:39:09.814 [2024-10-01 20:37:04.996453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.159 ms 00:39:09.814 [2024-10-01 20:37:04.996462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:09.814 [2024-10-01 20:37:04.997734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:39:09.814 [2024-10-01 20:37:04.997759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:39:09.814 [2024-10-01 20:37:04.997766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.260 ms 00:39:09.814 [2024-10-01 20:37:04.997773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:09.814 [2024-10-01 20:37:04.998665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:39:09.814 [2024-10-01 20:37:04.998682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:39:09.814 [2024-10-01 20:37:04.998702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.870 ms 00:39:09.814 [2024-10-01 20:37:04.998709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:09.814 [2024-10-01 20:37:05.006011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:39:09.814 [2024-10-01 20:37:05.006049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:39:09.814 [2024-10-01 20:37:05.006057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.273 ms 00:39:09.814 [2024-10-01 20:37:05.006063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:09.814 [2024-10-01 20:37:05.010126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:39:09.814 [2024-10-01 20:37:05.010151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:39:09.814 [2024-10-01 20:37:05.010159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.037 ms 00:39:09.814 [2024-10-01 20:37:05.010169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:09.814 [2024-10-01 20:37:05.010235] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Action 00:39:09.814 [2024-10-01 20:37:05.010243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:39:09.814 [2024-10-01 20:37:05.010250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.039 ms 00:39:09.814 [2024-10-01 20:37:05.010256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:09.814 [2024-10-01 20:37:05.017541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:39:09.814 [2024-10-01 20:37:05.017653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:39:09.814 [2024-10-01 20:37:05.017664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.273 ms 00:39:09.814 [2024-10-01 20:37:05.017670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:10.075 [2024-10-01 20:37:05.025046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:39:10.075 [2024-10-01 20:37:05.025070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:39:10.075 [2024-10-01 20:37:05.025078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.333 ms 00:39:10.075 [2024-10-01 20:37:05.025084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:10.075 [2024-10-01 20:37:05.032381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:39:10.075 [2024-10-01 20:37:05.032476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:39:10.075 [2024-10-01 20:37:05.032487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.273 ms 00:39:10.075 [2024-10-01 20:37:05.032493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:10.075 [2024-10-01 20:37:05.039986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:39:10.075 [2024-10-01 20:37:05.040077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:39:10.075 [2024-10-01 20:37:05.040124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.452 ms 00:39:10.075 [2024-10-01 20:37:05.040141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:10.075 [2024-10-01 20:37:05.040179] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:39:10.075 [2024-10-01 20:37:05.040201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:39:10.075 [2024-10-01 20:37:05.040262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:39:10.075 [2024-10-01 20:37:05.040285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:39:10.075 [2024-10-01 20:37:05.040307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:39:10.076 [2024-10-01 20:37:05.040354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:39:10.076 [2024-10-01 20:37:05.040378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:39:10.076 [2024-10-01 20:37:05.040400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:39:10.076 [2024-10-01 20:37:05.040446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:39:10.076 [2024-10-01 20:37:05.040469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 
261120 wr_cnt: 0 state: free 00:39:10.076 [2024-10-01 20:37:05.040490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:39:10.076 [2024-10-01 20:37:05.040534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:39:10.076 [2024-10-01 20:37:05.040575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:39:10.076 [2024-10-01 20:37:05.040625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:39:10.076 [2024-10-01 20:37:05.040648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:39:10.076 [2024-10-01 20:37:05.040670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:39:10.076 [2024-10-01 20:37:05.040701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:39:10.076 [2024-10-01 20:37:05.040724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:39:10.076 [2024-10-01 20:37:05.040770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:39:10.076 [2024-10-01 20:37:05.040796] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:39:10.076 [2024-10-01 20:37:05.040811] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 9d10aaf9-ef6a-43fb-b863-7a4397ee56b5 00:39:10.076 [2024-10-01 20:37:05.040833] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:39:10.076 [2024-10-01 20:37:05.040848] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:39:10.076 [2024-10-01 20:37:05.040866] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:39:10.076 [2024-10-01 20:37:05.040903] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:39:10.076 [2024-10-01 20:37:05.040920] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:39:10.076 [2024-10-01 20:37:05.040935] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:39:10.076 [2024-10-01 20:37:05.040949] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:39:10.076 [2024-10-01 20:37:05.040963] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:39:10.076 [2024-10-01 20:37:05.040976] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:39:10.076 [2024-10-01 20:37:05.040990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:39:10.076 [2024-10-01 20:37:05.041005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:39:10.076 [2024-10-01 20:37:05.041049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.812 ms 00:39:10.076 [2024-10-01 20:37:05.041065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:10.076 [2024-10-01 20:37:05.051022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:39:10.076 [2024-10-01 20:37:05.051105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:39:10.076 [2024-10-01 20:37:05.051143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.932 ms 00:39:10.076 [2024-10-01 20:37:05.051160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:10.076 [2024-10-01 20:37:05.051456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:39:10.076 [2024-10-01 20:37:05.051514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:39:10.076 [2024-10-01 20:37:05.051561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.272 ms 00:39:10.076 [2024-10-01 20:37:05.051578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:10.076 [2024-10-01 20:37:05.081729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:39:10.076 [2024-10-01 20:37:05.081820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:39:10.076 [2024-10-01 20:37:05.081860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:39:10.076 [2024-10-01 20:37:05.081877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:10.076 [2024-10-01 20:37:05.081911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:39:10.076 [2024-10-01 20:37:05.081927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:39:10.076 [2024-10-01 20:37:05.081941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:39:10.076 [2024-10-01 20:37:05.081955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:10.076 [2024-10-01 20:37:05.082020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:39:10.076 [2024-10-01 20:37:05.082379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:39:10.076 [2024-10-01 20:37:05.082432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:39:10.076 [2024-10-01 20:37:05.082450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:10.076 [2024-10-01 20:37:05.082474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:39:10.076 [2024-10-01 20:37:05.082490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:39:10.076 [2024-10-01 20:37:05.082504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:39:10.076 [2024-10-01 20:37:05.082519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:10.076 [2024-10-01 20:37:05.142445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:39:10.076 [2024-10-01 20:37:05.142577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:39:10.076 [2024-10-01 20:37:05.142614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:39:10.076 [2024-10-01 20:37:05.142632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:10.076 [2024-10-01 20:37:05.192124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:39:10.076 [2024-10-01 20:37:05.192247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:39:10.076 [2024-10-01 20:37:05.192285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:39:10.076 [2024-10-01 20:37:05.192302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:10.076 [2024-10-01 20:37:05.192367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:39:10.076 [2024-10-01 20:37:05.192390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:39:10.076 [2024-10-01 20:37:05.192405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:39:10.076 [2024-10-01 20:37:05.192419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:10.076 [2024-10-01 20:37:05.192474] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:39:10.076 [2024-10-01 20:37:05.192492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:39:10.076 [2024-10-01 20:37:05.192507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:39:10.076 [2024-10-01 20:37:05.192556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:10.076 [2024-10-01 20:37:05.192636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:39:10.076 [2024-10-01 20:37:05.192774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:39:10.076 [2024-10-01 20:37:05.192851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:39:10.076 [2024-10-01 20:37:05.192868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:10.076 [2024-10-01 20:37:05.192915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:39:10.076 [2024-10-01 20:37:05.192936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:39:10.076 [2024-10-01 20:37:05.192951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:39:10.076 [2024-10-01 20:37:05.192966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:10.076 [2024-10-01 20:37:05.193038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:39:10.076 [2024-10-01 20:37:05.193056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:39:10.076 [2024-10-01 20:37:05.193071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:39:10.076 [2024-10-01 20:37:05.193088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:10.076 [2024-10-01 20:37:05.193129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:39:10.076 [2024-10-01 20:37:05.193147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:39:10.076 [2024-10-01 20:37:05.193194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:39:10.076 [2024-10-01 20:37:05.193210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:10.076 [2024-10-01 20:37:05.193313] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 199.307 ms, result 0 00:39:11.451 20:37:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:39:11.451 20:37:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:39:11.451 20:37:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:39:11.451 20:37:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:39:11.451 20:37:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]] 00:39:11.452 20:37:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:39:11.452 20:37:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:39:11.452 20:37:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:39:11.452 Remove shared memory files 00:39:11.452 20:37:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:39:11.452 20:37:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:39:11.452 20:37:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid78820 00:39:11.452 20:37:06 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:39:11.452 20:37:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:39:11.452 ************************************ 00:39:11.452 END TEST ftl_upgrade_shutdown 00:39:11.452 ************************************ 00:39:11.452 00:39:11.452 real 1m27.795s 00:39:11.452 user 2m3.615s 00:39:11.452 sys 0m18.487s 00:39:11.452 20:37:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable 00:39:11.452 20:37:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:39:11.452 Process with pid 73179 is not found 00:39:11.452 20:37:06 ftl -- ftl/ftl.sh@80 -- # [[ 0 -eq 1 ]] 00:39:11.452 20:37:06 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit 00:39:11.452 20:37:06 ftl -- ftl/ftl.sh@14 -- # killprocess 73179 00:39:11.452 20:37:06 ftl -- common/autotest_common.sh@950 -- # '[' -z 73179 ']' 00:39:11.452 20:37:06 ftl -- common/autotest_common.sh@954 -- # kill -0 73179 00:39:11.452 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (73179) - No such process 00:39:11.452 20:37:06 ftl -- common/autotest_common.sh@977 -- # echo 'Process with pid 73179 is not found' 00:39:11.452 20:37:06 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]] 00:39:11.452 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:11.452 20:37:06 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=79268 00:39:11.452 20:37:06 ftl -- ftl/ftl.sh@20 -- # waitforlisten 79268 00:39:11.452 20:37:06 ftl -- common/autotest_common.sh@831 -- # '[' -z 79268 ']' 00:39:11.452 20:37:06 ftl -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:11.452 20:37:06 ftl -- common/autotest_common.sh@836 -- # local max_retries=100 00:39:11.452 20:37:06 ftl -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:11.452 20:37:06 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:39:11.452 20:37:06 ftl -- common/autotest_common.sh@840 -- # xtrace_disable 00:39:11.452 20:37:06 ftl -- common/autotest_common.sh@10 -- # set +x 00:39:11.452 [2024-10-01 20:37:06.377937] Starting SPDK v25.01-pre git sha1 0c2005fb5 / DPDK 24.03.0 initialization... 
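The test tail above follows the suite's standard shape: after the FTL management pipeline has walked its shutdown steps (persist L2P, NV-cache, band and trim metadata, write the clean-state superblock, then roll back each init action), the upgrade_shutdown script re-checksums the test file and compares it against the recorded sum — the backslash-escaped string in the [[ ]] test is just bash xtrace quoting the expected literal — before reaping the target with the killprocess helper. A condensed sketch of those two idioms, reconstructed from the xtrace (the paths, the kill -0 probe, and the messages appear in the log; the rest is an approximation of autotest_common.sh, not a verbatim copy — the real helper also special-cases processes started via sudo, which is what the ps comm= probe above is for):

    # verify: data written before the FTL shutdown must read back identically
    verify_file() {
        local file=$1 expected=$2
        local sum
        sum=$(md5sum "$file" | cut -f1 -d' ')
        [[ $sum == "$expected" ]]              # non-zero exit fails the iteration
    }

    # killprocess: probe liveness with kill -0, then signal and reap
    killprocess() {
        local pid=$1
        [[ -n $pid ]] || return 1
        if kill -0 "$pid"; then
            echo "killing process with pid $pid"
            kill "$pid"
            wait "$pid" || true                # reap so later stages see a clean exit
        else
            echo "Process with pid $pid is not found"
        fi
    }

The second branch is what fired for pid 73179 above: the process had already exited, so kill -0 failed with "No such process" and only the not-found message was printed.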
00:39:11.452 [2024-10-01 20:37:06.378088] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79268 ] 00:39:11.452 [2024-10-01 20:37:06.526618] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:11.710 [2024-10-01 20:37:06.685604] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:39:12.276 20:37:07 ftl -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:39:12.276 20:37:07 ftl -- common/autotest_common.sh@864 -- # return 0 00:39:12.276 20:37:07 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:39:12.533 nvme0n1 00:39:12.533 20:37:07 ftl -- ftl/ftl.sh@22 -- # clear_lvols 00:39:12.533 20:37:07 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:39:12.533 20:37:07 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:39:12.792 20:37:07 ftl -- ftl/common.sh@28 -- # stores=97fb620c-06f1-4324-9a98-1065644a1c90 00:39:12.792 20:37:07 ftl -- ftl/common.sh@29 -- # for lvs in $stores 00:39:12.792 20:37:07 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 97fb620c-06f1-4324-9a98-1065644a1c90 00:39:12.792 20:37:07 ftl -- ftl/ftl.sh@23 -- # killprocess 79268 00:39:12.792 20:37:07 ftl -- common/autotest_common.sh@950 -- # '[' -z 79268 ']' 00:39:12.792 20:37:07 ftl -- common/autotest_common.sh@954 -- # kill -0 79268 00:39:12.792 20:37:07 ftl -- common/autotest_common.sh@955 -- # uname 00:39:12.792 20:37:07 ftl -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:39:12.792 20:37:07 ftl -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79268 00:39:12.792 killing process with pid 79268 00:39:12.792 20:37:07 ftl -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:39:12.792 20:37:07 ftl -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:39:12.792 20:37:07 ftl -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79268' 00:39:12.792 20:37:07 ftl -- common/autotest_common.sh@969 -- # kill 79268 00:39:12.792 20:37:07 ftl -- common/autotest_common.sh@974 -- # wait 79268 00:39:14.692 20:37:09 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:39:14.692 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:39:14.692 Waiting for block devices as requested 00:39:14.692 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:39:14.692 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:39:14.950 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:39:14.950 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:39:20.212 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:39:20.212 Remove shared memory files 00:39:20.212 20:37:15 ftl -- ftl/ftl.sh@28 -- # remove_shm 00:39:20.212 20:37:15 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files 00:39:20.212 20:37:15 ftl -- ftl/common.sh@205 -- # rm -f rm -f 00:39:20.212 20:37:15 ftl -- ftl/common.sh@206 -- # rm -f rm -f 00:39:20.212 20:37:15 ftl -- ftl/common.sh@207 -- # rm -f rm -f 00:39:20.212 20:37:15 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:39:20.212 20:37:15 ftl -- ftl/common.sh@209 -- # rm -f rm -f 00:39:20.212 
************************************ 00:39:20.212 END TEST ftl 00:39:20.212 ************************************ 00:39:20.212 00:39:20.212 real 8m54.340s 00:39:20.212 user 11m14.135s 00:39:20.212 sys 1m19.805s 00:39:20.212 20:37:15 ftl -- common/autotest_common.sh@1126 -- # xtrace_disable 00:39:20.212 20:37:15 ftl -- common/autotest_common.sh@10 -- # set +x 00:39:20.212 20:37:15 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:39:20.212 20:37:15 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:39:20.212 20:37:15 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:39:20.212 20:37:15 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:39:20.212 20:37:15 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:39:20.212 20:37:15 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:39:20.212 20:37:15 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:39:20.212 20:37:15 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:39:20.212 20:37:15 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT 00:39:20.212 20:37:15 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:39:20.212 20:37:15 -- common/autotest_common.sh@724 -- # xtrace_disable 00:39:20.212 20:37:15 -- common/autotest_common.sh@10 -- # set +x 00:39:20.212 20:37:15 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:39:20.213 20:37:15 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:39:20.213 20:37:15 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:39:20.213 20:37:15 -- common/autotest_common.sh@10 -- # set +x 00:39:21.147 INFO: APP EXITING 00:39:21.147 INFO: killing all VMs 00:39:21.147 INFO: killing vhost app 00:39:21.147 INFO: EXIT DONE 00:39:21.405 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:39:21.663 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:39:21.663 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:39:21.663 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:39:21.663 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:39:21.921 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:39:22.179 Cleaning 00:39:22.179 Removing: /var/run/dpdk/spdk0/config 00:39:22.437 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:39:22.437 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:39:22.437 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:39:22.437 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:39:22.437 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:39:22.437 Removing: /var/run/dpdk/spdk0/hugepage_info 00:39:22.437 Removing: /var/run/dpdk/spdk0 00:39:22.437 Removing: /var/run/dpdk/spdk_pid57242 00:39:22.437 Removing: /var/run/dpdk/spdk_pid57444 00:39:22.437 Removing: /var/run/dpdk/spdk_pid57668 00:39:22.437 Removing: /var/run/dpdk/spdk_pid57766 00:39:22.437 Removing: /var/run/dpdk/spdk_pid57811 00:39:22.437 Removing: /var/run/dpdk/spdk_pid57939 00:39:22.437 Removing: /var/run/dpdk/spdk_pid57957 00:39:22.437 Removing: /var/run/dpdk/spdk_pid58156 00:39:22.437 Removing: /var/run/dpdk/spdk_pid58268 00:39:22.437 Removing: /var/run/dpdk/spdk_pid58369 00:39:22.437 Removing: /var/run/dpdk/spdk_pid58486 00:39:22.437 Removing: /var/run/dpdk/spdk_pid58594 00:39:22.437 Removing: /var/run/dpdk/spdk_pid58628 00:39:22.437 Removing: /var/run/dpdk/spdk_pid58670 00:39:22.437 Removing: /var/run/dpdk/spdk_pid58746 00:39:22.437 Removing: /var/run/dpdk/spdk_pid58830 00:39:22.437 Removing: /var/run/dpdk/spdk_pid59266 00:39:22.437 Removing: /var/run/dpdk/spdk_pid59330 
00:39:22.437 Removing: /var/run/dpdk/spdk_pid59399 00:39:22.437 Removing: /var/run/dpdk/spdk_pid59420 00:39:22.437 Removing: /var/run/dpdk/spdk_pid59546 00:39:22.437 Removing: /var/run/dpdk/spdk_pid59562 00:39:22.437 Removing: /var/run/dpdk/spdk_pid59681 00:39:22.437 Removing: /var/run/dpdk/spdk_pid59697 00:39:22.437 Removing: /var/run/dpdk/spdk_pid59761 00:39:22.437 Removing: /var/run/dpdk/spdk_pid59779 00:39:22.437 Removing: /var/run/dpdk/spdk_pid59843 00:39:22.437 Removing: /var/run/dpdk/spdk_pid59861 00:39:22.437 Removing: /var/run/dpdk/spdk_pid60032 00:39:22.437 Removing: /var/run/dpdk/spdk_pid60068 00:39:22.437 Removing: /var/run/dpdk/spdk_pid60152 00:39:22.437 Removing: /var/run/dpdk/spdk_pid60335 00:39:22.437 Removing: /var/run/dpdk/spdk_pid60430 00:39:22.437 Removing: /var/run/dpdk/spdk_pid60477 00:39:22.437 Removing: /var/run/dpdk/spdk_pid60927 00:39:22.437 Removing: /var/run/dpdk/spdk_pid61046 00:39:22.437 Removing: /var/run/dpdk/spdk_pid61167 00:39:22.437 Removing: /var/run/dpdk/spdk_pid61220 00:39:22.437 Removing: /var/run/dpdk/spdk_pid61251 00:39:22.437 Removing: /var/run/dpdk/spdk_pid61335 00:39:22.437 Removing: /var/run/dpdk/spdk_pid61981 00:39:22.437 Removing: /var/run/dpdk/spdk_pid62024 00:39:22.437 Removing: /var/run/dpdk/spdk_pid62510 00:39:22.437 Removing: /var/run/dpdk/spdk_pid62619 00:39:22.437 Removing: /var/run/dpdk/spdk_pid62738 00:39:22.437 Removing: /var/run/dpdk/spdk_pid62792 00:39:22.437 Removing: /var/run/dpdk/spdk_pid62823 00:39:22.437 Removing: /var/run/dpdk/spdk_pid62843 00:39:22.437 Removing: /var/run/dpdk/spdk_pid64706 00:39:22.437 Removing: /var/run/dpdk/spdk_pid64849 00:39:22.437 Removing: /var/run/dpdk/spdk_pid64853 00:39:22.437 Removing: /var/run/dpdk/spdk_pid64870 00:39:22.437 Removing: /var/run/dpdk/spdk_pid64919 00:39:22.437 Removing: /var/run/dpdk/spdk_pid64923 00:39:22.437 Removing: /var/run/dpdk/spdk_pid64935 00:39:22.437 Removing: /var/run/dpdk/spdk_pid64980 00:39:22.437 Removing: /var/run/dpdk/spdk_pid64984 00:39:22.437 Removing: /var/run/dpdk/spdk_pid64996 00:39:22.437 Removing: /var/run/dpdk/spdk_pid65042 00:39:22.437 Removing: /var/run/dpdk/spdk_pid65046 00:39:22.437 Removing: /var/run/dpdk/spdk_pid65058 00:39:22.437 Removing: /var/run/dpdk/spdk_pid66415 00:39:22.437 Removing: /var/run/dpdk/spdk_pid66523 00:39:22.438 Removing: /var/run/dpdk/spdk_pid67925 00:39:22.438 Removing: /var/run/dpdk/spdk_pid69300 00:39:22.438 Removing: /var/run/dpdk/spdk_pid69400 00:39:22.438 Removing: /var/run/dpdk/spdk_pid69498 00:39:22.438 Removing: /var/run/dpdk/spdk_pid69596 00:39:22.438 Removing: /var/run/dpdk/spdk_pid69706 00:39:22.438 Removing: /var/run/dpdk/spdk_pid69786 00:39:22.438 Removing: /var/run/dpdk/spdk_pid69935 00:39:22.438 Removing: /var/run/dpdk/spdk_pid70294 00:39:22.438 Removing: /var/run/dpdk/spdk_pid70336 00:39:22.438 Removing: /var/run/dpdk/spdk_pid70787 00:39:22.438 Removing: /var/run/dpdk/spdk_pid70975 00:39:22.438 Removing: /var/run/dpdk/spdk_pid71083 00:39:22.438 Removing: /var/run/dpdk/spdk_pid71204 00:39:22.438 Removing: /var/run/dpdk/spdk_pid71264 00:39:22.438 Removing: /var/run/dpdk/spdk_pid71284 00:39:22.438 Removing: /var/run/dpdk/spdk_pid71602 00:39:22.438 Removing: /var/run/dpdk/spdk_pid71732 00:39:22.438 Removing: /var/run/dpdk/spdk_pid71817 00:39:22.438 Removing: /var/run/dpdk/spdk_pid72215 00:39:22.438 Removing: /var/run/dpdk/spdk_pid72361 00:39:22.438 Removing: /var/run/dpdk/spdk_pid73179 00:39:22.438 Removing: /var/run/dpdk/spdk_pid73312 00:39:22.438 Removing: /var/run/dpdk/spdk_pid73489 00:39:22.438 Removing: 
/var/run/dpdk/spdk_pid73581 00:39:22.438 Removing: /var/run/dpdk/spdk_pid73893 00:39:22.438 Removing: /var/run/dpdk/spdk_pid74142 00:39:22.438 Removing: /var/run/dpdk/spdk_pid74485 00:39:22.438 Removing: /var/run/dpdk/spdk_pid74669 00:39:22.438 Removing: /var/run/dpdk/spdk_pid74766 00:39:22.438 Removing: /var/run/dpdk/spdk_pid74823 00:39:22.438 Removing: /var/run/dpdk/spdk_pid74924 00:39:22.438 Removing: /var/run/dpdk/spdk_pid74955 00:39:22.438 Removing: /var/run/dpdk/spdk_pid75013 00:39:22.438 Removing: /var/run/dpdk/spdk_pid75184 00:39:22.438 Removing: /var/run/dpdk/spdk_pid75410 00:39:22.438 Removing: /var/run/dpdk/spdk_pid75690 00:39:22.438 Removing: /var/run/dpdk/spdk_pid75977 00:39:22.438 Removing: /var/run/dpdk/spdk_pid76266 00:39:22.438 Removing: /var/run/dpdk/spdk_pid76641 00:39:22.438 Removing: /var/run/dpdk/spdk_pid76778 00:39:22.438 Removing: /var/run/dpdk/spdk_pid76865 00:39:22.438 Removing: /var/run/dpdk/spdk_pid77255 00:39:22.438 Removing: /var/run/dpdk/spdk_pid77319 00:39:22.438 Removing: /var/run/dpdk/spdk_pid77628 00:39:22.438 Removing: /var/run/dpdk/spdk_pid77914 00:39:22.438 Removing: /var/run/dpdk/spdk_pid78262 00:39:22.438 Removing: /var/run/dpdk/spdk_pid78373 00:39:22.438 Removing: /var/run/dpdk/spdk_pid78426 00:39:22.438 Removing: /var/run/dpdk/spdk_pid78484 00:39:22.438 Removing: /var/run/dpdk/spdk_pid78553 00:39:22.438 Removing: /var/run/dpdk/spdk_pid78617 00:39:22.438 Removing: /var/run/dpdk/spdk_pid78820 00:39:22.438 Removing: /var/run/dpdk/spdk_pid78889 00:39:22.696 Removing: /var/run/dpdk/spdk_pid78956 00:39:22.696 Removing: /var/run/dpdk/spdk_pid79052 00:39:22.696 Removing: /var/run/dpdk/spdk_pid79087 00:39:22.696 Removing: /var/run/dpdk/spdk_pid79148 00:39:22.696 Removing: /var/run/dpdk/spdk_pid79268 00:39:22.696 Clean 00:39:22.696 20:37:17 -- common/autotest_common.sh@1451 -- # return 0 00:39:22.696 20:37:17 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:39:22.696 20:37:17 -- common/autotest_common.sh@730 -- # xtrace_disable 00:39:22.696 20:37:17 -- common/autotest_common.sh@10 -- # set +x 00:39:22.696 20:37:17 -- spdk/autotest.sh@387 -- # timing_exit autotest 00:39:22.696 20:37:17 -- common/autotest_common.sh@730 -- # xtrace_disable 00:39:22.696 20:37:17 -- common/autotest_common.sh@10 -- # set +x 00:39:22.696 20:37:17 -- spdk/autotest.sh@388 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:39:22.696 20:37:17 -- spdk/autotest.sh@390 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:39:22.696 20:37:17 -- spdk/autotest.sh@390 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:39:22.696 20:37:17 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:39:22.696 20:37:17 -- spdk/autotest.sh@394 -- # hostname 00:39:22.696 20:37:17 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:39:22.955 geninfo: WARNING: invalid characters removed from testname! 
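What follows is the coverage post-processing: the baseline capture created earlier in the run (cov_base.info) and the test capture just taken are merged into cov_total.info, and then vendored or out-of-tree paths (DPDK, /usr, the example and bundled-app sources) are filtered out so the report counts only SPDK code. Condensed to its moving parts — the filenames, flags and filter globs are exactly as traced below; only the $LCOV shorthand is mine:

    LCOV="lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 -q"
    $LCOV -a cov_base.info -a cov_test.info -o cov_total.info     # union of both captures
    $LCOV -r cov_total.info '*/dpdk/*' -o cov_total.info          # strip vendored DPDK
    $LCOV -r cov_total.info --ignore-errors unused,unused '/usr/*' -o cov_total.info
    $LCOV -r cov_total.info '*/examples/vmd/*' -o cov_total.info  # drop example/bundled apps
    rm -f cov_base.info cov_test.info                             # keep only the merged report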
00:39:49.491 20:37:40 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:39:49.491 20:37:43 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:39:50.862 20:37:45 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:39:52.764 20:37:47 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:39:54.665 20:37:49 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:39:57.253 20:37:51 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:39:58.629 20:37:53 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:39:58.629 20:37:53 -- common/autotest_common.sh@1680 -- $ [[ y == y ]] 00:39:58.629 20:37:53 -- common/autotest_common.sh@1681 -- $ lcov --version 00:39:58.629 20:37:53 -- common/autotest_common.sh@1681 -- $ awk '{print $NF}' 00:39:58.629 20:37:53 -- common/autotest_common.sh@1681 -- $ lt 1.15 2 00:39:58.629 20:37:53 -- scripts/common.sh@373 -- $ cmp_versions 1.15 '<' 2 00:39:58.629 20:37:53 -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:39:58.629 20:37:53 -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:39:58.629 20:37:53 -- scripts/common.sh@336 -- $ IFS=.-: 00:39:58.629 20:37:53 -- scripts/common.sh@336 -- $ read -ra ver1 00:39:58.629 20:37:53 -- scripts/common.sh@337 -- $ IFS=.-: 00:39:58.629 20:37:53 -- scripts/common.sh@337 -- $ read -ra ver2 00:39:58.629 20:37:53 -- scripts/common.sh@338 -- $ local 'op=<' 00:39:58.629 20:37:53 -- scripts/common.sh@340 -- $ ver1_l=2 00:39:58.629 20:37:53 -- scripts/common.sh@341 -- $ ver2_l=1 00:39:58.629 20:37:53 -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 
v 00:39:58.629 20:37:53 -- scripts/common.sh@344 -- $ case "$op" in 00:39:58.629 20:37:53 -- scripts/common.sh@345 -- $ : 1 00:39:58.629 20:37:53 -- scripts/common.sh@364 -- $ (( v = 0 )) 00:39:58.629 20:37:53 -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:39:58.629 20:37:53 -- scripts/common.sh@365 -- $ decimal 1 00:39:58.629 20:37:53 -- scripts/common.sh@353 -- $ local d=1 00:39:58.629 20:37:53 -- scripts/common.sh@354 -- $ [[ 1 =~ ^[0-9]+$ ]] 00:39:58.629 20:37:53 -- scripts/common.sh@355 -- $ echo 1 00:39:58.629 20:37:53 -- scripts/common.sh@365 -- $ ver1[v]=1 00:39:58.629 20:37:53 -- scripts/common.sh@366 -- $ decimal 2 00:39:58.629 20:37:53 -- scripts/common.sh@353 -- $ local d=2 00:39:58.629 20:37:53 -- scripts/common.sh@354 -- $ [[ 2 =~ ^[0-9]+$ ]] 00:39:58.629 20:37:53 -- scripts/common.sh@355 -- $ echo 2 00:39:58.629 20:37:53 -- scripts/common.sh@366 -- $ ver2[v]=2 00:39:58.629 20:37:53 -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:39:58.629 20:37:53 -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:39:58.629 20:37:53 -- scripts/common.sh@368 -- $ return 0 00:39:58.629 20:37:53 -- common/autotest_common.sh@1682 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:58.629 20:37:53 -- common/autotest_common.sh@1694 -- $ export 'LCOV_OPTS= 00:39:58.629 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:58.629 --rc genhtml_branch_coverage=1 00:39:58.629 --rc genhtml_function_coverage=1 00:39:58.629 --rc genhtml_legend=1 00:39:58.629 --rc geninfo_all_blocks=1 00:39:58.629 --rc geninfo_unexecuted_blocks=1 00:39:58.629 00:39:58.629 ' 00:39:58.629 20:37:53 -- common/autotest_common.sh@1694 -- $ LCOV_OPTS=' 00:39:58.629 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:58.629 --rc genhtml_branch_coverage=1 00:39:58.629 --rc genhtml_function_coverage=1 00:39:58.629 --rc genhtml_legend=1 00:39:58.629 --rc geninfo_all_blocks=1 00:39:58.629 --rc geninfo_unexecuted_blocks=1 00:39:58.629 00:39:58.629 ' 00:39:58.629 20:37:53 -- common/autotest_common.sh@1695 -- $ export 'LCOV=lcov 00:39:58.629 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:58.629 --rc genhtml_branch_coverage=1 00:39:58.629 --rc genhtml_function_coverage=1 00:39:58.629 --rc genhtml_legend=1 00:39:58.629 --rc geninfo_all_blocks=1 00:39:58.629 --rc geninfo_unexecuted_blocks=1 00:39:58.629 00:39:58.629 ' 00:39:58.629 20:37:53 -- common/autotest_common.sh@1695 -- $ LCOV='lcov 00:39:58.629 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:58.629 --rc genhtml_branch_coverage=1 00:39:58.629 --rc genhtml_function_coverage=1 00:39:58.629 --rc genhtml_legend=1 00:39:58.629 --rc geninfo_all_blocks=1 00:39:58.629 --rc geninfo_unexecuted_blocks=1 00:39:58.629 00:39:58.630 ' 00:39:58.630 20:37:53 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:39:58.630 20:37:53 -- scripts/common.sh@15 -- $ shopt -s extglob 00:39:58.630 20:37:53 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:39:58.630 20:37:53 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:58.630 20:37:53 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:58.630 20:37:53 -- paths/export.sh@2 -- $ 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:58.630 20:37:53 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:58.630 20:37:53 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:58.630 20:37:53 -- paths/export.sh@5 -- $ export PATH 00:39:58.630 20:37:53 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:58.630 20:37:53 -- common/autobuild_common.sh@478 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:39:58.630 20:37:53 -- common/autobuild_common.sh@479 -- $ date +%s 00:39:58.630 20:37:53 -- common/autobuild_common.sh@479 -- $ mktemp -dt spdk_1727815073.XXXXXX 00:39:58.630 20:37:53 -- common/autobuild_common.sh@479 -- $ SPDK_WORKSPACE=/tmp/spdk_1727815073.FkgmLv 00:39:58.630 20:37:53 -- common/autobuild_common.sh@481 -- $ [[ -n '' ]] 00:39:58.630 20:37:53 -- common/autobuild_common.sh@485 -- $ '[' -n '' ']' 00:39:58.630 20:37:53 -- common/autobuild_common.sh@488 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:39:58.630 20:37:53 -- common/autobuild_common.sh@492 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:39:58.630 20:37:53 -- common/autobuild_common.sh@494 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:39:58.630 20:37:53 -- common/autobuild_common.sh@495 -- $ get_config_params 00:39:58.630 20:37:53 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:39:58.630 20:37:53 -- common/autotest_common.sh@10 -- $ set +x 00:39:58.630 20:37:53 -- common/autobuild_common.sh@495 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme' 00:39:58.630 20:37:53 -- common/autobuild_common.sh@497 -- $ start_monitor_resources 00:39:58.630 20:37:53 -- pm/common@17 -- $ local monitor 00:39:58.630 20:37:53 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:39:58.630 20:37:53 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 
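A few entries back, before LCOV_OPTS was exported, autopackage asked scripts/common.sh whether the installed lcov (1.15, per the lcov --version | awk probe) is older than 2, and kept the 1.x-style --rc lcov_* option names because it is. The cmp_versions trace shows the comparison is done in pure bash: split both version strings on ., - and : into arrays, then compare field by field as integers. Roughly — a sketch of the traced logic, with the decimal normalization step and the full operator table elided:

    cmp_versions() {              # usage: cmp_versions 1.15 '<' 2
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$3"
        local op=$2 v a b
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            a=${ver1[v]:-0} b=${ver2[v]:-0}    # pad missing fields with zeros
            ((a > b)) && { [[ $op == '>' || $op == '>=' ]]; return; }
            ((a < b)) && { [[ $op == '<' || $op == '<=' ]]; return; }
        done
        [[ $op == '=' || $op == '<=' || $op == '>=' ]]   # all fields compared equal
    }

For cmp_versions 1.15 '<' 2 the very first field decides: 1 < 2 with operator '<', so the function succeeds — the return 0 visible in the trace.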
00:39:58.630 20:37:53 -- pm/common@25 -- $ sleep 1 00:39:58.630 20:37:53 -- pm/common@21 -- $ date +%s 00:39:58.630 20:37:53 -- pm/common@21 -- $ date +%s 00:39:58.630 20:37:53 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1727815073 00:39:58.630 20:37:53 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1727815073 00:39:58.630 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1727815073_collect-cpu-load.pm.log 00:39:58.630 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1727815073_collect-vmstat.pm.log 00:39:59.565 20:37:54 -- common/autobuild_common.sh@498 -- $ trap stop_monitor_resources EXIT 00:39:59.565 20:37:54 -- spdk/autopackage.sh@10 -- $ [[ 0 -eq 1 ]] 00:39:59.565 20:37:54 -- spdk/autopackage.sh@14 -- $ timing_finish 00:39:59.565 20:37:54 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:39:59.565 20:37:54 -- common/autotest_common.sh@737 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:39:59.823 20:37:54 -- common/autotest_common.sh@740 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:39:59.823 20:37:54 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:39:59.823 20:37:54 -- pm/common@29 -- $ signal_monitor_resources TERM 00:39:59.823 20:37:54 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:39:59.823 20:37:54 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:39:59.823 20:37:54 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:39:59.823 20:37:54 -- pm/common@44 -- $ pid=80973 00:39:59.823 20:37:54 -- pm/common@50 -- $ kill -TERM 80973 00:39:59.823 20:37:54 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:39:59.823 20:37:54 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:39:59.823 20:37:54 -- pm/common@44 -- $ pid=80974 00:39:59.823 20:37:54 -- pm/common@50 -- $ kill -TERM 80974 00:39:59.823 + [[ -n 5027 ]] 00:39:59.823 + sudo kill 5027 00:39:59.833 [Pipeline] } 00:39:59.849 [Pipeline] // timeout 00:39:59.854 [Pipeline] } 00:39:59.868 [Pipeline] // stage 00:39:59.874 [Pipeline] } 00:39:59.888 [Pipeline] // catchError 00:39:59.897 [Pipeline] stage 00:39:59.900 [Pipeline] { (Stop VM) 00:39:59.912 [Pipeline] sh 00:40:00.190 + vagrant halt 00:40:02.718 ==> default: Halting domain... 00:40:08.081 [Pipeline] sh 00:40:08.358 + vagrant destroy -f 00:40:10.886 ==> default: Removing domain... 
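Bracketing the flamegraph step above is the pm/common monitor pair: start_monitor_resources launches the collect-cpu-load and collect-vmstat samplers into the background with a shared timestamped log prefix, and stop_monitor_resources (run from the EXIT trap) finds each one through its pidfile under ../output/power and stops it with SIGTERM — pids 80973 and 80974 in this run. A minimal sketch of that pidfile protocol; the script names, the -d/-l/-p flags and the .pid paths are from the trace, while $rootdir and $out stand in for the long absolute paths:

    start_monitor_resources() {
        local stamp m
        stamp=$(date +%s)
        for m in collect-cpu-load collect-vmstat; do
            "$rootdir/scripts/perf/pm/$m" -d "$out/power" -l -p "monitor.autopackage.sh.$stamp" &
        done
    }

    stop_monitor_resources() {
        local m pidfile pid
        for m in collect-cpu-load collect-vmstat; do
            pidfile=$out/power/$m.pid
            [[ -e $pidfile ]] || continue      # monitor may never have started
            pid=$(<"$pidfile")
            kill -TERM "$pid"                  # stop the sampler (TERM, as traced)
        done
    }

The .pid files are written at launch time (by the collector or its wrapper — the trace only shows them being read), which is what lets the trap tear the samplers down even though they were started much earlier in the run.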
00:40:11.457 [Pipeline] sh 00:40:11.734 + mv output /var/jenkins/workspace/nvme-vg-autotest_2/output 00:40:11.742 [Pipeline] } 00:40:11.755 [Pipeline] // stage 00:40:11.761 [Pipeline] } 00:40:11.778 [Pipeline] // dir 00:40:11.783 [Pipeline] } 00:40:11.796 [Pipeline] // wrap 00:40:11.802 [Pipeline] } 00:40:11.815 [Pipeline] // catchError 00:40:11.824 [Pipeline] stage 00:40:11.826 [Pipeline] { (Epilogue) 00:40:11.838 [Pipeline] sh 00:40:12.117 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:40:17.390 [Pipeline] catchError 00:40:17.392 [Pipeline] { 00:40:17.405 [Pipeline] sh 00:40:17.684 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:40:17.684 Artifacts sizes are good 00:40:17.692 [Pipeline] } 00:40:17.708 [Pipeline] // catchError 00:40:17.721 [Pipeline] archiveArtifacts 00:40:17.728 Archiving artifacts 00:40:17.879 [Pipeline] cleanWs 00:40:17.890 [WS-CLEANUP] Deleting project workspace... 00:40:17.890 [WS-CLEANUP] Deferred wipeout is used... 00:40:17.896 [WS-CLEANUP] done 00:40:17.897 [Pipeline] } 00:40:17.912 [Pipeline] // stage 00:40:17.917 [Pipeline] } 00:40:17.932 [Pipeline] // node 00:40:17.937 [Pipeline] End of Pipeline 00:40:17.977 Finished: SUCCESS