00:00:00.001 Started by upstream project "autotest-nightly" build number 4179
00:00:00.001 originally caused by:
00:00:00.001 Started by upstream project "nightly-trigger" build number 3541
00:00:00.001 originally caused by:
00:00:00.001 Started by timer
00:00:00.001 Started by timer
00:00:00.112 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.113 The recommended git tool is: git
00:00:00.113 using credential 00000000-0000-0000-0000-000000000002
00:00:00.115 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.143 Fetching changes from the remote Git repository
00:00:00.145 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.183 Using shallow fetch with depth 1
00:00:00.183 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.183 > git --version # timeout=10
00:00:00.220 > git --version # 'git version 2.39.2'
00:00:00.220 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.261 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.261 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:05.719 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:05.731 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:05.745 Checking out Revision 3f5fbcceba25866ebf7e22fd0e5d30548272f62c (FETCH_HEAD)
00:00:05.745 > git config core.sparsecheckout # timeout=10
00:00:05.757 > git read-tree -mu HEAD # timeout=10
00:00:05.771 > git checkout -f 3f5fbcceba25866ebf7e22fd0e5d30548272f62c # timeout=5
00:00:05.787 Commit message: "packer: Bump java's version"
00:00:05.787 > git rev-list --no-walk 3f5fbcceba25866ebf7e22fd0e5d30548272f62c # timeout=10
00:00:05.869 [Pipeline] Start of Pipeline
00:00:05.880 [Pipeline] library
00:00:05.882 Loading library shm_lib@master
00:00:05.882 Library shm_lib@master is cached. Copying from home.
00:00:05.897 [Pipeline] node
00:00:05.931 Running on VM-host-SM38 in /var/jenkins/workspace/nvme-vg-autotest
00:00:05.933 [Pipeline] {
00:00:05.945 [Pipeline] catchError
00:00:05.947 [Pipeline] {
00:00:05.960 [Pipeline] wrap
00:00:05.972 [Pipeline] {
00:00:05.980 [Pipeline] stage
00:00:05.982 [Pipeline] { (Prologue)
00:00:06.005 [Pipeline] echo
00:00:06.007 Node: VM-host-SM38
00:00:06.016 [Pipeline] cleanWs
00:00:06.028 [WS-CLEANUP] Deleting project workspace...
00:00:06.028 [WS-CLEANUP] Deferred wipeout is used...
00:00:06.036 [WS-CLEANUP] done
00:00:06.224 [Pipeline] setCustomBuildProperty
00:00:06.293 [Pipeline] httpRequest
00:00:06.629 [Pipeline] echo
00:00:06.631 Sorcerer 10.211.164.101 is alive
00:00:06.640 [Pipeline] retry
00:00:06.642 [Pipeline] {
00:00:06.652 [Pipeline] httpRequest
00:00:06.657 HttpMethod: GET
00:00:06.658 URL: http://10.211.164.101/packages/jbp_3f5fbcceba25866ebf7e22fd0e5d30548272f62c.tar.gz
00:00:06.659 Sending request to url: http://10.211.164.101/packages/jbp_3f5fbcceba25866ebf7e22fd0e5d30548272f62c.tar.gz
00:00:06.660 Response Code: HTTP/1.1 200 OK
00:00:06.661 Success: Status code 200 is in the accepted range: 200,404
00:00:06.662 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/jbp_3f5fbcceba25866ebf7e22fd0e5d30548272f62c.tar.gz
00:00:07.392 [Pipeline] }
00:00:07.407 [Pipeline] // retry
00:00:07.413 [Pipeline] sh
00:00:07.696 + tar --no-same-owner -xf jbp_3f5fbcceba25866ebf7e22fd0e5d30548272f62c.tar.gz
00:00:07.711 [Pipeline] httpRequest
00:00:08.328 [Pipeline] echo
00:00:08.330 Sorcerer 10.211.164.101 is alive
00:00:08.336 [Pipeline] retry
00:00:08.338 [Pipeline] {
00:00:08.346 [Pipeline] httpRequest
00:00:08.350 HttpMethod: GET
00:00:08.351 URL: http://10.211.164.101/packages/spdk_5a8c76d991809d2b09d0d68cf3a81951410d5bff.tar.gz
00:00:08.351 Sending request to url: http://10.211.164.101/packages/spdk_5a8c76d991809d2b09d0d68cf3a81951410d5bff.tar.gz
00:00:08.367 Response Code: HTTP/1.1 200 OK
00:00:08.368 Success: Status code 200 is in the accepted range: 200,404
00:00:08.369 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/spdk_5a8c76d991809d2b09d0d68cf3a81951410d5bff.tar.gz
00:01:20.974 [Pipeline] }
00:01:20.991 [Pipeline] // retry
00:01:20.999 [Pipeline] sh
00:01:21.287 + tar --no-same-owner -xf spdk_5a8c76d991809d2b09d0d68cf3a81951410d5bff.tar.gz
00:01:24.595 [Pipeline] sh
00:01:24.879 + git -C spdk log --oneline -n5
00:01:24.879 5a8c76d99 lib/nvmf: Add spdk_nvmf_send_discovery_log_notice API
00:01:24.879 a70c3a90b bdev/lvol: add allocated clusters num in bdev_lvol_get_lvols
00:01:24.879 c26697bf5 bdev_ut: Comparison operator and tests fixes
00:01:24.879 75a12cbf9 test: Comparison operator fixes
00:01:24.879 f999d8912 bdev_xnvme: add support for dataset management
00:01:24.900 [Pipeline] writeFile
00:01:24.915 [Pipeline] sh
00:01:25.200 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:01:25.213 [Pipeline] sh
00:01:25.500 + cat autorun-spdk.conf
00:01:25.500 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:25.500 SPDK_TEST_NVME=1
00:01:25.500 SPDK_TEST_FTL=1
00:01:25.500 SPDK_TEST_ISAL=1
00:01:25.500 SPDK_RUN_ASAN=1
00:01:25.500 SPDK_RUN_UBSAN=1
00:01:25.500 SPDK_TEST_XNVME=1
00:01:25.500 SPDK_TEST_NVME_FDP=1
00:01:25.500 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:25.506 RUN_NIGHTLY=1
00:01:25.508 [Pipeline] }
00:01:25.522 [Pipeline] // stage
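
The conf file printed above drives the rest of the run: it is a flat KEY=VALUE file, so every downstream script loads it with source and tests the 0/1 flags arithmetically (the ++ lines in the next trace are exactly that, shown under xtrace). A minimal sketch of the consuming pattern, assuming the same paths as this job; the guard and error message are illustrative, not the jbp code:

#!/usr/bin/env bash
# Load the job configuration written by the pipeline step above.
conf=/var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf
[[ -e $conf ]] || { echo "missing $conf" >&2; exit 1; }
source "$conf"
# Flags are plain 0/1 integers, so arithmetic evaluation works directly:
if (( SPDK_TEST_NVME_FDP == 1 )); then
    echo "FDP images and devices will be provisioned"
fi
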
00:01:25.537 [Pipeline] stage
00:01:25.539 [Pipeline] { (Run VM)
00:01:25.552 [Pipeline] sh
00:01:25.835 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:01:25.835 + echo 'Start stage prepare_nvme.sh'
00:01:25.835 Start stage prepare_nvme.sh
00:01:25.835 + [[ -n 8 ]]
00:01:25.835 + disk_prefix=ex8
00:01:25.835 + [[ -n /var/jenkins/workspace/nvme-vg-autotest ]]
00:01:25.835 + [[ -e /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf ]]
00:01:25.835 + source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf
00:01:25.835 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:25.835 ++ SPDK_TEST_NVME=1
00:01:25.835 ++ SPDK_TEST_FTL=1
00:01:25.835 ++ SPDK_TEST_ISAL=1
00:01:25.835 ++ SPDK_RUN_ASAN=1
00:01:25.835 ++ SPDK_RUN_UBSAN=1
00:01:25.835 ++ SPDK_TEST_XNVME=1
00:01:25.835 ++ SPDK_TEST_NVME_FDP=1
00:01:25.835 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:25.835 ++ RUN_NIGHTLY=1
00:01:25.835 + cd /var/jenkins/workspace/nvme-vg-autotest
00:01:25.835 + nvme_files=()
00:01:25.835 + declare -A nvme_files
00:01:25.835 + backend_dir=/var/lib/libvirt/images/backends
00:01:25.835 + nvme_files['nvme.img']=5G
00:01:25.835 + nvme_files['nvme-cmb.img']=5G
00:01:25.835 + nvme_files['nvme-multi0.img']=4G
00:01:25.835 + nvme_files['nvme-multi1.img']=4G
00:01:25.835 + nvme_files['nvme-multi2.img']=4G
00:01:25.835 + nvme_files['nvme-openstack.img']=8G
00:01:25.835 + nvme_files['nvme-zns.img']=5G
00:01:25.835 + (( SPDK_TEST_NVME_PMR == 1 ))
00:01:25.835 + (( SPDK_TEST_FTL == 1 ))
00:01:25.835 + nvme_files["nvme-ftl.img"]=6G
00:01:25.835 + (( SPDK_TEST_NVME_FDP == 1 ))
00:01:25.835 + nvme_files["nvme-fdp.img"]=1G
00:01:25.835 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:01:25.835 + for nvme in "${!nvme_files[@]}"
00:01:25.835 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme-multi2.img -s 4G
00:01:25.835 Formatting '/var/lib/libvirt/images/backends/ex8-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:01:25.835 + for nvme in "${!nvme_files[@]}"
00:01:25.835 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme-ftl.img -s 6G
00:01:26.096 Formatting '/var/lib/libvirt/images/backends/ex8-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc
00:01:26.096 + for nvme in "${!nvme_files[@]}"
00:01:26.096 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme-cmb.img -s 5G
00:01:26.096 Formatting '/var/lib/libvirt/images/backends/ex8-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:01:26.096 + for nvme in "${!nvme_files[@]}"
00:01:26.096 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme-openstack.img -s 8G
00:01:26.096 Formatting '/var/lib/libvirt/images/backends/ex8-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:01:26.096 + for nvme in "${!nvme_files[@]}"
00:01:26.096 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme-zns.img -s 5G
00:01:26.096 Formatting '/var/lib/libvirt/images/backends/ex8-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:01:26.096 + for nvme in "${!nvme_files[@]}"
00:01:26.096 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme-multi1.img -s 4G
00:01:26.096 Formatting '/var/lib/libvirt/images/backends/ex8-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:01:26.096 + for nvme in "${!nvme_files[@]}"
00:01:26.096 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme-multi0.img -s 4G
00:01:26.096 Formatting '/var/lib/libvirt/images/backends/ex8-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:01:26.096 + for nvme in "${!nvme_files[@]}"
00:01:26.096 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme-fdp.img -s 1G
00:01:26.356 Formatting '/var/lib/libvirt/images/backends/ex8-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc
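
The image-creation trace above is driven by a Bash associative array mapping image names to sizes; "${!nvme_files[@]}" iterates keys in no guaranteed order, which is why multi2 is created before ftl. Condensed into a sketch (two plain entries shown; the full set is in the trace above):

declare -A nvme_files
nvme_files['nvme.img']=5G
nvme_files['nvme-multi0.img']=4G
# FTL and FDP backing files are only added when the matching flag is set:
(( SPDK_TEST_FTL == 1 ))      && nvme_files['nvme-ftl.img']=6G
(( SPDK_TEST_NVME_FDP == 1 )) && nvme_files['nvme-fdp.img']=1G

backend_dir=/var/lib/libvirt/images/backends
for nvme in "${!nvme_files[@]}"; do
    # create_nvme_img.sh takes the target path (-n) and size (-s), as logged
    sudo -E spdk/scripts/vagrant/create_nvme_img.sh \
        -n "$backend_dir/ex8-$nvme" -s "${nvme_files[$nvme]}"
done
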
00:01:26.357 + for nvme in "${!nvme_files[@]}"
00:01:26.357 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme.img -s 5G
00:01:26.357 Formatting '/var/lib/libvirt/images/backends/ex8-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:01:26.357 ++ sudo grep -rl ex8-nvme.img /etc/libvirt/qemu
00:01:26.357 + echo 'End stage prepare_nvme.sh'
00:01:26.357 End stage prepare_nvme.sh
00:01:26.370 [Pipeline] sh
00:01:26.656 + DISTRO=fedora39
00:01:26.656 + CPUS=10
00:01:26.656 + RAM=12288
00:01:26.656 + jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:01:26.656 Setup: -n 10 -s 12288 -x -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex8-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex8-nvme.img -b /var/lib/libvirt/images/backends/ex8-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex8-nvme-multi1.img:/var/lib/libvirt/images/backends/ex8-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex8-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39
00:01:26.656
00:01:26.656 DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant
00:01:26.656 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk
00:01:26.656 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest
00:01:26.656 HELP=0
00:01:26.656 DRY_RUN=0
00:01:26.656 NVME_FILE=/var/lib/libvirt/images/backends/ex8-nvme-ftl.img,/var/lib/libvirt/images/backends/ex8-nvme.img,/var/lib/libvirt/images/backends/ex8-nvme-multi0.img,/var/lib/libvirt/images/backends/ex8-nvme-fdp.img,
00:01:26.656 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme,
00:01:26.656 NVME_AUTO_CREATE=0
00:01:26.656 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex8-nvme-multi1.img:/var/lib/libvirt/images/backends/ex8-nvme-multi2.img,,
00:01:26.656 NVME_CMB=,,,,
00:01:26.656 NVME_PMR=,,,,
00:01:26.656 NVME_ZNS=,,,,
00:01:26.656 NVME_MS=true,,,,
00:01:26.656 NVME_FDP=,,,on,
00:01:26.656 SPDK_VAGRANT_DISTRO=fedora39
00:01:26.656 SPDK_VAGRANT_VMCPU=10
00:01:26.656 SPDK_VAGRANT_VMRAM=12288
00:01:26.656 SPDK_VAGRANT_PROVIDER=libvirt
00:01:26.656 SPDK_VAGRANT_HTTP_PROXY=
00:01:26.656 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:01:26.656 SPDK_OPENSTACK_NETWORK=0
00:01:26.656 VAGRANT_PACKAGE_BOX=0
00:01:26.656 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant/Vagrantfile
00:01:26.656 FORCE_DISTRO=true
00:01:26.656 VAGRANT_BOX_VERSION=
00:01:26.656 EXTRA_VAGRANTFILES=
00:01:26.656 NIC_MODEL=e1000
00:01:26.656
00:01:26.656 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt'
00:01:26.656 /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest
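
Reading the Setup line against the NVME_* dump that follows it, each -b option appears to pack one backing file plus comma-separated per-controller fields; the positional layout below is this note's inference from how the values land in NVME_DISKS_TYPE, NVME_DISKS_NAMESPACES, NVME_MS and NVME_FDP, not a documented flag syntax. The same invocation, unpacked for readability:

# -b <file>[,<type>[,<extra-ns-images>[,<cmb>[,<pmr>[,<zns>[,<ms>[,<fdp>]]]]]]]   (inferred)
jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh \
  -n 10 -s 12288 -x -p libvirt \
  --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 \
  --nic-model=e1000 \
  -b /var/lib/libvirt/images/backends/ex8-nvme-ftl.img,nvme,,,,,true \
  -b /var/lib/libvirt/images/backends/ex8-nvme.img \
  -b /var/lib/libvirt/images/backends/ex8-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex8-nvme-multi1.img:/var/lib/libvirt/images/backends/ex8-nvme-multi2.img \
  -b /var/lib/libvirt/images/backends/ex8-nvme-fdp.img,nvme,,,,,,on \
  -H -a -v -f fedora39

The ",true" in the ftl spec lands in NVME_MS (the ms=64 metadata namespace in the QEMU args below), the colon-joined images land in NVME_DISKS_NAMESPACES (namespaces 2 and 3 of the third controller), and the trailing ",on" lands in NVME_FDP.
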
00:01:29.204 Bringing machine 'default' up with 'libvirt' provider...
00:01:29.463 ==> default: Creating image (snapshot of base box volume).
00:01:29.723 ==> default: Creating domain with the following settings...
00:01:29.723 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1728999343_3c4ec23376778fbe5eb5
00:01:29.723 ==> default: -- Domain type: kvm
00:01:29.723 ==> default: -- Cpus: 10
00:01:29.723 ==> default: -- Feature: acpi
00:01:29.723 ==> default: -- Feature: apic
00:01:29.723 ==> default: -- Feature: pae
00:01:29.723 ==> default: -- Memory: 12288M
00:01:29.723 ==> default: -- Memory Backing: hugepages:
00:01:29.723 ==> default: -- Management MAC:
00:01:29.723 ==> default: -- Loader:
00:01:29.723 ==> default: -- Nvram:
00:01:29.723 ==> default: -- Base box: spdk/fedora39
00:01:29.723 ==> default: -- Storage pool: default
00:01:29.723 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1728999343_3c4ec23376778fbe5eb5.img (20G)
00:01:29.723 ==> default: -- Volume Cache: default
00:01:29.723 ==> default: -- Kernel:
00:01:29.723 ==> default: -- Initrd:
00:01:29.723 ==> default: -- Graphics Type: vnc
00:01:29.723 ==> default: -- Graphics Port: -1
00:01:29.723 ==> default: -- Graphics IP: 127.0.0.1
00:01:29.723 ==> default: -- Graphics Password: Not defined
00:01:29.723 ==> default: -- Video Type: cirrus
00:01:29.723 ==> default: -- Video VRAM: 9216
00:01:29.723 ==> default: -- Sound Type:
00:01:29.723 ==> default: -- Keymap: en-us
00:01:29.723 ==> default: -- TPM Path:
00:01:29.723 ==> default: -- INPUT: type=mouse, bus=ps2
00:01:29.723 ==> default: -- Command line args:
00:01:29.723 ==> default: -> value=-device,
00:01:29.723 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:01:29.723 ==> default: -> value=-drive,
00:01:29.723 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex8-nvme-ftl.img,if=none,id=nvme-0-drive0,
00:01:29.723 ==> default: -> value=-device,
00:01:29.723 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64,
00:01:29.723 ==> default: -> value=-device,
00:01:29.723 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:01:29.723 ==> default: -> value=-drive,
00:01:29.723 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex8-nvme.img,if=none,id=nvme-1-drive0,
00:01:29.723 ==> default: -> value=-device,
00:01:29.723 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:29.723 ==> default: -> value=-device,
00:01:29.723 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12,
00:01:29.723 ==> default: -> value=-drive,
00:01:29.723 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex8-nvme-multi0.img,if=none,id=nvme-2-drive0,
00:01:29.723 ==> default: -> value=-device,
00:01:29.723 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:29.723 ==> default: -> value=-drive,
00:01:29.723 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex8-nvme-multi1.img,if=none,id=nvme-2-drive1,
00:01:29.723 ==> default: -> value=-device,
00:01:29.723 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:29.723 ==> default: -> value=-drive,
00:01:29.723 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex8-nvme-multi2.img,if=none,id=nvme-2-drive2,
00:01:29.723 ==> default: -> value=-device,
00:01:29.723 ==> default: -> value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:29.723 ==> default: -> value=-device,
00:01:29.723 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8,
00:01:29.723 ==> default: -> value=-device,
00:01:29.723 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3,
00:01:29.723 ==> default: -> value=-drive,
00:01:29.723 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex8-nvme-fdp.img,if=none,id=nvme-3-drive0,
00:01:29.723 ==> default: -> value=-device,
00:01:29.724 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
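
Stitched together, the arg pairs above follow one pattern per controller: a -drive with if=none supplies the raw backing file, an nvme-ns device attaches it as a namespace, and for the fourth controller an nvme-subsys device carries the Flexible Data Placement knobs, since FDP is a subsystem-level property. The FDP chain alone, as a plausible standalone fragment with values exactly as logged:

# fdp.runs / fdp.nrg / fdp.nruh size the reclaim units, reclaim groups,
# and reclaim unit handles of the FDP-enabled subsystem.
qemu-system-x86_64 \
  -device nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8 \
  -device nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3 \
  -drive format=raw,file=/var/lib/libvirt/images/backends/ex8-nvme-fdp.img,if=none,id=nvme-3-drive0 \
  -device nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,logical_block_size=4096,physical_block_size=4096
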
00:01:29.985 ==> default: Creating shared folders metadata...
00:01:29.985 ==> default: Starting domain.
00:01:31.371 ==> default: Waiting for domain to get an IP address...
00:01:49.521 ==> default: Waiting for SSH to become available...
00:01:49.521 ==> default: Configuring and enabling network interfaces...
00:01:53.732 default: SSH address: 192.168.121.70:22
00:01:53.732 default: SSH username: vagrant
00:01:53.732 default: SSH auth method: private key
00:01:55.119 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk
00:02:05.145 ==> default: Mounting SSHFS shared folder...
00:02:05.406 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:02:05.406 ==> default: Checking Mount..
00:02:06.787 ==> default: Folder Successfully Mounted!
00:02:06.787
00:02:06.787 SUCCESS!
00:02:06.787
00:02:06.787 cd to /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use.
00:02:06.787 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:02:06.788 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt" to destroy all trace of vm.
00:02:06.788
00:02:06.798 [Pipeline] }
00:02:06.814 [Pipeline] // stage
00:02:06.824 [Pipeline] dir
00:02:06.825 Running in /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt
00:02:06.826 [Pipeline] {
00:02:06.840 [Pipeline] catchError
00:02:06.842 [Pipeline] {
00:02:06.856 [Pipeline] sh
00:02:07.139 + vagrant ssh-config --host vagrant
00:02:07.139 + sed -ne '/^Host/,$p'
00:02:07.139 + tee ssh_conf
00:02:09.679 Host vagrant
00:02:09.679 HostName 192.168.121.70
00:02:09.679 User vagrant
00:02:09.679 Port 22
00:02:09.679 UserKnownHostsFile /dev/null
00:02:09.679 StrictHostKeyChecking no
00:02:09.679 PasswordAuthentication no
00:02:09.679 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:02:09.679 IdentitiesOnly yes
00:02:09.679 LogLevel FATAL
00:02:09.679 ForwardAgent yes
00:02:09.679 ForwardX11 yes
00:02:09.679
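
The three commands above are the whole trick for talking to the VM afterwards: vagrant ssh-config emits an OpenSSH stanza, sed -ne '/^Host/,$p' keeps everything from the first Host line on, and tee both echoes the result into the log and writes ssh_conf. Every later step then uses stock ssh/scp with -F, no vagrant wrapper needed:

# Capture the connection details once:
vagrant ssh-config --host vagrant | sed -ne '/^Host/,$p' | tee ssh_conf
# ...then reuse them for the rest of the job:
ssh -t -F ssh_conf vagrant@vagrant 'uname -a'
scp -F ssh_conf autorun-spdk.conf vagrant@vagrant:spdk_repo
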
00:02:09.693 [Pipeline] withEnv
00:02:09.695 [Pipeline] {
00:02:09.709 [Pipeline] sh
00:02:10.029 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant '#!/bin/bash
00:02:10.029 source /etc/os-release
00:02:10.029 [[ -e /image.version ]] && img=$(< /image.version)
00:02:10.029 # Minimal, systemd-like check.
00:02:10.029 if [[ -e /.dockerenv ]]; then
00:02:10.029 # Clear garbage from the node'\''s name:
00:02:10.029 # agt-er_autotest_547-896 -> autotest_547-896
00:02:10.029 # $HOSTNAME is the actual container id
00:02:10.029 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:02:10.029 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:02:10.029 # We can assume this is a mount from a host where container is running,
00:02:10.029 # so fetch its hostname to easily identify the target swarm worker.
00:02:10.029 container="$(< /etc/hostname) ($agent)"
00:02:10.029 else
00:02:10.029 # Fallback
00:02:10.029 container=$agent
00:02:10.029 fi
00:02:10.029 fi
00:02:10.029 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:02:10.029 '
00:02:10.043 [Pipeline] }
00:02:10.059 [Pipeline] // withEnv
00:02:10.067 [Pipeline] setCustomBuildProperty
00:02:10.083 [Pipeline] stage
00:02:10.086 [Pipeline] { (Tests)
00:02:10.104 [Pipeline] sh
00:02:10.388 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:02:10.663 [Pipeline] sh
00:02:10.962 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:02:11.238 [Pipeline] timeout
00:02:11.238 Timeout set to expire in 50 min
00:02:11.240 [Pipeline] {
00:02:11.254 [Pipeline] sh
00:02:11.537 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'git -C spdk_repo/spdk reset --hard'
00:02:12.107 HEAD is now at 5a8c76d99 lib/nvmf: Add spdk_nvmf_send_discovery_log_notice API
00:02:12.119 [Pipeline] sh
00:02:12.404 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'sudo chown vagrant:vagrant spdk_repo'
00:02:12.677 [Pipeline] sh
00:02:12.958 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:02:12.976 [Pipeline] sh
00:02:13.268 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo'
00:02:13.268 ++ readlink -f spdk_repo
00:02:13.268 + DIR_ROOT=/home/vagrant/spdk_repo
00:02:13.268 + [[ -n /home/vagrant/spdk_repo ]]
00:02:13.268 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:02:13.268 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:02:13.268 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:02:13.268 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:02:13.268 + [[ -d /home/vagrant/spdk_repo/output ]]
00:02:13.268 + [[ nvme-vg-autotest == pkgdep-* ]]
00:02:13.268 + cd /home/vagrant/spdk_repo
00:02:13.268 + source /etc/os-release
00:02:13.268 ++ NAME='Fedora Linux'
00:02:13.268 ++ VERSION='39 (Cloud Edition)'
00:02:13.268 ++ ID=fedora
00:02:13.268 ++ VERSION_ID=39
00:02:13.268 ++ VERSION_CODENAME=
00:02:13.268 ++ PLATFORM_ID=platform:f39
00:02:13.268 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:02:13.268 ++ ANSI_COLOR='0;38;2;60;110;180'
00:02:13.268 ++ LOGO=fedora-logo-icon
00:02:13.268 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:02:13.268 ++ HOME_URL=https://fedoraproject.org/
00:02:13.268 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:02:13.268 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:02:13.268 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:02:13.268 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:02:13.268 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:02:13.268 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:02:13.268 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:02:13.268 ++ SUPPORT_END=2024-11-12
00:02:13.268 ++ VARIANT='Cloud Edition'
00:02:13.268 ++ VARIANT_ID=cloud
00:02:13.268 + uname -a
00:02:13.268 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:02:13.268 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:02:13.854 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:02:14.115 Hugepages
00:02:14.115 node hugesize free / total
00:02:14.115 node0 1048576kB 0 / 0
00:02:14.115 node0 2048kB 0 / 0
00:02:14.115
00:02:14.115 Type BDF Vendor Device NUMA Driver Device Block devices
00:02:14.115 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:02:14.115 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:02:14.115 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1
00:02:14.115 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3
00:02:14.115 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1
00:02:14.115 + rm -f /tmp/spdk-ld-path
00:02:14.115 + source autorun-spdk.conf
00:02:14.115 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:14.115 ++ SPDK_TEST_NVME=1
00:02:14.115 ++ SPDK_TEST_FTL=1
00:02:14.115 ++ SPDK_TEST_ISAL=1
00:02:14.115 ++ SPDK_RUN_ASAN=1
00:02:14.115 ++ SPDK_RUN_UBSAN=1
00:02:14.115 ++ SPDK_TEST_XNVME=1
00:02:14.115 ++ SPDK_TEST_NVME_FDP=1
00:02:14.115 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:14.115 ++ RUN_NIGHTLY=1
00:02:14.115 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:02:14.115 + [[ -n '' ]]
00:02:14.115 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:02:14.115 + for M in /var/spdk/build-*-manifest.txt
00:02:14.115 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:02:14.115 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/
00:02:14.115 + for M in /var/spdk/build-*-manifest.txt
00:02:14.115 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:02:14.115 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:02:14.115 + for M in /var/spdk/build-*-manifest.txt
00:02:14.115 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:02:14.115 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
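
A small point about the manifest loop just above: with Bash's default globbing, an unmatched /var/spdk/build-*-manifest.txt stays in $M as the literal pattern, so the [[ -f ]] test doubles as the "did the glob match" check before cp runs. As a sketch:

for M in /var/spdk/build-*-manifest.txt; do
    # If nothing matched, $M is the literal pattern and -f fails quietly.
    [[ -f $M ]] && cp "$M" /home/vagrant/spdk_repo/output/
done
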
00:02:14.115 ++ uname
00:02:14.115 + [[ Linux == \L\i\n\u\x ]]
00:02:14.115 + sudo dmesg -T
00:02:14.391 + sudo dmesg --clear
00:02:14.391 + dmesg_pid=5043
00:02:14.391 + [[ Fedora Linux == FreeBSD ]]
00:02:14.391 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:14.391 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:14.391 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:02:14.391 + [[ -x /usr/src/fio-static/fio ]]
00:02:14.391 + sudo dmesg -Tw
00:02:14.391 + export FIO_BIN=/usr/src/fio-static/fio
00:02:14.391 + FIO_BIN=/usr/src/fio-static/fio
00:02:14.391 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:02:14.391 + [[ ! -v VFIO_QEMU_BIN ]]
00:02:14.391 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:02:14.391 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:14.391 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:14.391 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:02:14.391 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:14.391 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:14.391 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:02:14.391 Test configuration:
00:02:14.391 SPDK_RUN_FUNCTIONAL_TEST=1
00:02:14.391 SPDK_TEST_NVME=1
00:02:14.391 SPDK_TEST_FTL=1
00:02:14.391 SPDK_TEST_ISAL=1
00:02:14.391 SPDK_RUN_ASAN=1
00:02:14.391 SPDK_RUN_UBSAN=1
00:02:14.391 SPDK_TEST_XNVME=1
00:02:14.391 SPDK_TEST_NVME_FDP=1
00:02:14.391 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:14.392 RUN_NIGHTLY=1
00:02:14.392 13:36:28 -- common/autotest_common.sh@1690 -- $ [[ n == y ]]
00:02:14.392 13:36:28 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:02:14.392 13:36:28 -- scripts/common.sh@15 -- $ shopt -s extglob
00:02:14.392 13:36:28 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:02:14.392 13:36:28 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:02:14.392 13:36:28 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:02:14.392 13:36:28 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:14.392 13:36:28 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:14.392 13:36:28 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:14.392 13:36:28 -- paths/export.sh@5 -- $ export PATH
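
Each level of paths/export.sh above prepends its toolchain directories unconditionally, so the PATH echoed on the next trace line carries duplicate /opt/go and /opt/golangci entries. That is harmless for lookup (the first hit wins), but a prepend-if-absent helper, which is not what export.sh does, just an alternative sketch, keeps PATH bounded:

prepend_path() {
    # Add $1 to the front of PATH only if it is not already a component.
    case ":$PATH:" in
        *":$1:"*) ;;              # already present, do nothing
        *) PATH="$1:$PATH" ;;
    esac
}
prepend_path /opt/go/1.21.1/bin
prepend_path /opt/golangci/1.54.2/bin
export PATH
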
00:02:14.392 13:36:28 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:14.392 13:36:28 -- common/autobuild_common.sh@485 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:02:14.392 13:36:28 -- common/autobuild_common.sh@486 -- $ date +%s
00:02:14.392 13:36:28 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1728999388.XXXXXX
00:02:14.392 13:36:28 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1728999388.cZa12n
00:02:14.392 13:36:28 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]]
00:02:14.392 13:36:28 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']'
00:02:14.392 13:36:28 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
00:02:14.392 13:36:28 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:02:14.392 13:36:28 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:02:14.392 13:36:28 -- common/autobuild_common.sh@502 -- $ get_config_params
00:02:14.392 13:36:28 -- common/autotest_common.sh@407 -- $ xtrace_disable
00:02:14.392 13:36:28 -- common/autotest_common.sh@10 -- $ set +x
00:02:14.393 13:36:28 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme'
00:02:14.393 13:36:28 -- common/autobuild_common.sh@504 -- $ start_monitor_resources
00:02:14.393 13:36:28 -- pm/common@17 -- $ local monitor
00:02:14.393 13:36:28 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:14.393 13:36:28 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:14.393 13:36:28 -- pm/common@25 -- $ sleep 1
00:02:14.393 13:36:28 -- pm/common@21 -- $ date +%s
00:02:14.393 13:36:28 -- pm/common@21 -- $ date +%s
00:02:14.393 13:36:28 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1728999388
00:02:14.393 13:36:28 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1728999388
00:02:14.393 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1728999388_collect-cpu-load.pm.log
00:02:14.393 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1728999388_collect-vmstat.pm.log
00:02:15.338 13:36:29 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT
00:02:15.338 13:36:29 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:02:15.338 13:36:29 -- spdk/autobuild.sh@12 -- $ umask 022
00:02:15.338 13:36:29 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:02:15.338 13:36:29 -- spdk/autobuild.sh@16 -- $ date -u
00:02:15.338 Tue Oct 15 01:36:29 PM UTC 2024
00:02:15.339 13:36:29 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:02:15.339 v25.01-pre-70-g5a8c76d99
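
Two idioms from the autobuild prologue above are worth isolating: the per-run scratch directory from mktemp -dt, and resource monitors started in the background with teardown bound to an EXIT trap, so they are stopped even when the build fails. A hedged sketch of that shape; the collect-* invocations mirror the trace, but the function bodies here are illustrative, not the pm/common code:

out=/home/vagrant/spdk_repo/spdk/../output
SPDK_WORKSPACE=$(mktemp -dt "spdk_$(date +%s).XXXXXX")

monitor_pids=()
start_monitor_resources() {
    # Launch each collector in the background and remember its PID.
    scripts/perf/pm/collect-cpu-load -d "$out/power" -l -p "monitor.autobuild.sh.$(date +%s)" &
    monitor_pids+=("$!")
    scripts/perf/pm/collect-vmstat -d "$out/power" -l -p "monitor.autobuild.sh.$(date +%s)" &
    monitor_pids+=("$!")
}
stop_monitor_resources() {
    kill "${monitor_pids[@]}" 2>/dev/null
    rm -rf "$SPDK_WORKSPACE"
}
trap stop_monitor_resources EXIT   # fires on success, failure, or interrupt
start_monitor_resources
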
00:02:15.339 13:36:29 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:02:15.339 13:36:29 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:02:15.339 13:36:29 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']'
00:02:15.339 13:36:29 -- common/autotest_common.sh@1107 -- $ xtrace_disable
00:02:15.339 13:36:29 -- common/autotest_common.sh@10 -- $ set +x
00:02:15.339 ************************************
00:02:15.339 START TEST asan
00:02:15.339 ************************************
00:02:15.339 using asan
00:02:15.339 ************************************
00:02:15.339 END TEST asan
00:02:15.339 ************************************
00:02:15.339 13:36:29 asan -- common/autotest_common.sh@1125 -- $ echo 'using asan'
00:02:15.339
00:02:15.339 real 0m0.000s
00:02:15.339 user 0m0.000s
00:02:15.339 sys 0m0.000s
00:02:15.339 13:36:29 asan -- common/autotest_common.sh@1126 -- $ xtrace_disable
00:02:15.339 13:36:29 asan -- common/autotest_common.sh@10 -- $ set +x
00:02:15.601 13:36:29 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:02:15.601 13:36:29 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:02:15.601 13:36:29 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']'
00:02:15.601 13:36:29 -- common/autotest_common.sh@1107 -- $ xtrace_disable
00:02:15.601 13:36:29 -- common/autotest_common.sh@10 -- $ set +x
00:02:15.601 ************************************
00:02:15.601 START TEST ubsan
00:02:15.601 ************************************
00:02:15.601 using ubsan
00:02:15.601 13:36:29 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan'
00:02:15.601
00:02:15.601 real 0m0.000s
00:02:15.601 user 0m0.000s
00:02:15.601 sys 0m0.000s
00:02:15.601 13:36:29 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable
00:02:15.601 ************************************
00:02:15.601 END TEST ubsan
00:02:15.601 ************************************
00:02:15.601 13:36:29 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:02:15.601 13:36:29 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:02:15.601 13:36:29 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:02:15.601 13:36:29 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:02:15.601 13:36:29 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:02:15.601 13:36:29 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:02:15.601 13:36:29 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:02:15.601 13:36:29 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:02:15.601 13:36:29 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:02:15.601 13:36:29 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme --with-shared
00:02:15.601 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:02:15.601 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:02:16.174 Using 'verbs' RDMA provider
00:02:29.349 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
00:02:39.356 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
00:02:39.356 Creating mk/config.mk...done.
00:02:39.356 Creating mk/cc.flags.mk...done.
00:02:39.356 Type 'make' to build.
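
The banner blocks above come from a run_test <name> <command...> helper: it prints START/END markers around the command and times it, which is why even a trivial echo 'using asan' reports real/user/sys. A rough reconstruction of the observable behaviour; the real helper in autotest_common.sh also manages xtrace and exit codes:

run_test() {
    local name=$1; shift
    echo '************************************'
    echo "START TEST $name"
    echo '************************************'
    time "$@"                      # bash keyword: times the whole command
    echo '************************************'
    echo "END TEST $name"
    echo '************************************'
}
run_test asan echo 'using asan'
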
00:02:39.356 13:36:52 -- spdk/autobuild.sh@70 -- $ run_test make make -j10
00:02:39.356 13:36:52 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']'
00:02:39.356 13:36:52 -- common/autotest_common.sh@1107 -- $ xtrace_disable
00:02:39.356 13:36:52 -- common/autotest_common.sh@10 -- $ set +x
00:02:39.356 ************************************
00:02:39.356 START TEST make
00:02:39.356 ************************************
00:02:39.356 13:36:52 make -- common/autotest_common.sh@1125 -- $ make -j10
00:02:39.356 (cd /home/vagrant/spdk_repo/spdk/xnvme && \
00:02:39.356 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \
00:02:39.356 meson setup builddir \
00:02:39.356 -Dwith-libaio=enabled \
00:02:39.356 -Dwith-liburing=enabled \
00:02:39.356 -Dwith-libvfn=disabled \
00:02:39.356 -Dwith-spdk=disabled \
00:02:39.356 -Dexamples=false \
00:02:39.356 -Dtests=false \
00:02:39.356 -Dtools=false && \
00:02:39.356 meson compile -C builddir && \
00:02:39.356 cd -)
00:02:39.356 make[1]: Nothing to be done for 'all'.
00:02:41.271 The Meson build system
00:02:41.271 Version: 1.5.0
00:02:41.271 Source dir: /home/vagrant/spdk_repo/spdk/xnvme
00:02:41.271 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir
00:02:41.271 Build type: native build
00:02:41.271 Project name: xnvme
00:02:41.271 Project version: 0.7.5
00:02:41.271 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:02:41.271 C linker for the host machine: cc ld.bfd 2.40-14
00:02:41.271 Host machine cpu family: x86_64
00:02:41.271 Host machine cpu: x86_64
00:02:41.271 Message: host_machine.system: linux
00:02:41.271 Compiler for C supports arguments -Wno-missing-braces: YES
00:02:41.271 Compiler for C supports arguments -Wno-cast-function-type: YES
00:02:41.271 Compiler for C supports arguments -Wno-strict-aliasing: YES
00:02:41.271 Run-time dependency threads found: YES
00:02:41.271 Has header "setupapi.h" : NO
00:02:41.271 Has header "linux/blkzoned.h" : YES
00:02:41.271 Has header "linux/blkzoned.h" : YES (cached)
00:02:41.271 Has header "libaio.h" : YES
00:02:41.271 Library aio found: YES
00:02:41.271 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:02:41.271 Run-time dependency liburing found: YES 2.2
00:02:41.271 Dependency libvfn skipped: feature with-libvfn disabled
00:02:41.271 Found CMake: /usr/bin/cmake (3.27.7)
00:02:41.271 Run-time dependency libisal found: NO (tried pkgconfig and cmake)
00:02:41.271 Subproject spdk : skipped: feature with-spdk disabled
00:02:41.271 Run-time dependency appleframeworks found: NO (tried framework)
00:02:41.271 Run-time dependency appleframeworks found: NO (tried framework)
00:02:41.271 Library rt found: YES
00:02:41.271 Checking for function "clock_gettime" with dependency -lrt: YES
00:02:41.271 Configuring xnvme_config.h using configuration
00:02:41.271 Configuring xnvme.spec using configuration
00:02:41.271 Run-time dependency bash-completion found: YES 2.11
00:02:41.271 Message: Bash-completions: /usr/share/bash-completion/completions
00:02:41.271 Program cp found: YES (/usr/bin/cp)
00:02:41.271 Build targets in project: 3
00:02:41.271
00:02:41.271 xnvme 0.7.5
00:02:41.271
00:02:41.271 Subprojects
00:02:41.271 spdk : NO Feature 'with-spdk' disabled
00:02:41.271
00:02:41.271 User defined options
00:02:41.271 examples : false
00:02:41.271 tests : false
00:02:41.271 tools : false
00:02:41.271 with-libaio : enabled
00:02:41.271 with-liburing: enabled
00:02:41.271 with-libvfn : disabled
00:02:41.271 with-spdk : disabled
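
The xnvme step above is the standard out-of-tree Meson flow: configure once into builddir, passing each feature as a -D option, then build with meson compile -C builddir; the summary block that follows simply echoes back the toggles. Condensed from the subshell in the trace:

cd /home/vagrant/spdk_repo/spdk/xnvme
export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig
meson setup builddir \
    -Dwith-libaio=enabled -Dwith-liburing=enabled \
    -Dwith-libvfn=disabled -Dwith-spdk=disabled \
    -Dexamples=false -Dtests=false -Dtools=false
meson compile -C builddir   # drives the [1/76]..[76/76] ninja run below
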
00:02:41.271
00:02:41.272 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:02:41.842 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir'
00:02:41.842 [1/76] Generating toolbox/xnvme-driver-script with a custom command
00:02:41.842 [2/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd.c.o
00:02:41.842 [3/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_nil.c.o
00:02:41.842 [4/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_mem_posix.c.o
00:02:41.842 [5/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_async.c.o
00:02:41.842 [6/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_dev.c.o
00:02:41.842 [7/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_nvme.c.o
00:02:41.842 [8/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_admin_shim.c.o
00:02:41.842 [9/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_emu.c.o
00:02:41.842 [10/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_adm.c.o
00:02:41.842 [11/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_sync_psync.c.o
00:02:41.842 [12/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_posix.c.o
00:02:41.842 [13/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux.c.o
00:02:41.842 [14/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos.c.o
00:02:42.103 [15/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_thrpool.c.o
00:02:42.103 [16/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_admin.c.o
00:02:42.103 [17/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_libaio.c.o
00:02:42.103 [18/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_dev.c.o
00:02:42.103 [19/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_hugepage.c.o
00:02:42.103 [20/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be.c.o
00:02:42.103 [21/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_sync.c.o
00:02:42.103 [22/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk.c.o
00:02:42.103 [23/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_dev.c.o
00:02:42.103 [24/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_nvme.c.o
00:02:42.103 [25/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_nosys.c.o
00:02:42.103 [26/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_admin.c.o
00:02:42.103 [27/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_ucmd.c.o
00:02:42.103 [28/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_liburing.c.o
00:02:42.103 [29/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk.c.o
00:02:42.103 [30/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_admin.c.o
00:02:42.103 [31/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_sync.c.o
00:02:42.103 [32/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_dev.c.o
00:02:42.103 [33/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_async.c.o
00:02:42.103 [34/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_dev.c.o
00:02:42.103 [35/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_mem.c.o
00:02:42.103 [36/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio.c.o
00:02:42.103 [37/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_block.c.o
00:02:42.103 [38/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_admin.c.o
00:02:42.103 [39/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_sync.c.o
00:02:42.103 [40/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_sync.c.o
00:02:42.103 [41/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_async.c.o
00:02:42.103 [42/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp.c.o
00:02:42.103 [43/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_mem.c.o
00:02:42.103 [44/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_dev.c.o
00:02:42.103 [45/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows.c.o
00:02:42.103 [46/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp_th.c.o
00:02:42.103 [47/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_ioring.c.o
00:02:42.103 [48/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_block.c.o
00:02:42.103 [49/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_dev.c.o
00:02:42.103 [50/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_nvme.c.o
00:02:42.103 [51/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_fs.c.o
00:02:42.103 [52/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_mem.c.o
00:02:42.363 [53/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf_entries.c.o
00:02:42.363 [54/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_file.c.o
00:02:42.363 [55/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ident.c.o
00:02:42.363 [56/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf.c.o
00:02:42.363 [57/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_geo.c.o
00:02:42.363 [58/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_lba.c.o
00:02:42.363 [59/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_req.c.o
00:02:42.363 [60/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cmd.c.o
00:02:42.363 [61/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_nvm.c.o
00:02:42.363 [62/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_queue.c.o
00:02:42.363 [63/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_buf.c.o
00:02:42.363 [64/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_opts.c.o
00:02:42.363 [65/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_kvs.c.o
00:02:42.363 [66/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_topology.c.o
00:02:42.363 [67/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ver.c.o
00:02:42.363 [68/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec_pp.c.o
00:02:42.363 [69/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_dev.c.o
00:02:42.363 [70/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_crc.c.o
00:02:42.363 [71/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_znd.c.o
00:02:42.624 [72/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_pi.c.o
00:02:42.624 [73/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cli.c.o
00:02:42.886 [74/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec.c.o
00:02:42.886 [75/76] Linking static target lib/libxnvme.a
00:02:42.886 [76/76] Linking target lib/libxnvme.so.0.7.5
00:02:42.886 INFO: autodetecting backend as ninja
00:02:42.886 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir
00:02:49.473 /home/vagrant/spdk_repo/spdk/xnvmebuild
00:02:49.473 The Meson build system
00:02:49.473 Version: 1.5.0
00:02:49.473 Source dir: /home/vagrant/spdk_repo/spdk/dpdk
00:02:49.473 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp
00:02:49.473 Build type: native build
00:02:49.473 Program cat found: YES (/usr/bin/cat)
00:02:49.473 Project name: DPDK
00:02:49.473 Project version: 24.03.0
00:02:49.473 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:02:49.473 C linker for the host machine: cc ld.bfd 2.40-14
00:02:49.473 Host machine cpu family: x86_64
00:02:49.473 Host machine cpu: x86_64
00:02:49.473 Message: ## Building in Developer Mode ##
00:02:49.473 Program pkg-config found: YES (/usr/bin/pkg-config)
00:02:49.473 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh)
00:02:49.473 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:02:49.473 Program python3 found: YES (/usr/bin/python3)
00:02:49.473 Program cat found: YES (/usr/bin/cat)
00:02:49.473 Compiler for C supports arguments -march=native: YES
00:02:49.473 Checking for size of "void *" : 8
00:02:49.473 Checking for size of "void *" : 8 (cached)
00:02:49.473 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:02:49.473 Library m found: YES
00:02:49.473 Library numa found: YES
00:02:49.473 Has header "numaif.h" : YES
00:02:49.473 Library fdt found: NO
00:02:49.473 Library execinfo found: NO
00:02:49.473 Has header "execinfo.h" : YES
00:02:49.473 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:02:49.473 Run-time dependency libarchive found: NO (tried pkgconfig)
00:02:49.473 Run-time dependency libbsd found: NO (tried pkgconfig)
00:02:49.473 Run-time dependency jansson found: NO (tried pkgconfig)
00:02:49.473 Run-time dependency openssl found: YES 3.1.1
00:02:49.473 Run-time dependency libpcap found: YES 1.10.4
00:02:49.473 Has header "pcap.h" with dependency libpcap: YES
00:02:49.473 Compiler for C supports arguments -Wcast-qual: YES
00:02:49.473 Compiler for C supports arguments -Wdeprecated: YES
00:02:49.473 Compiler for C supports arguments -Wformat: YES
00:02:49.473 Compiler for C supports arguments -Wformat-nonliteral: NO
00:02:49.473 Compiler for C supports arguments -Wformat-security: NO
00:02:49.473 Compiler for C supports arguments -Wmissing-declarations: YES
00:02:49.473 Compiler for C supports arguments -Wmissing-prototypes: YES
00:02:49.473 Compiler for C supports arguments -Wnested-externs: YES
00:02:49.473 Compiler for C supports arguments -Wold-style-definition: YES
00:02:49.473 Compiler for C supports arguments -Wpointer-arith: YES
00:02:49.473 Compiler for C supports arguments -Wsign-compare: YES
00:02:49.473 Compiler for C supports arguments -Wstrict-prototypes: YES
00:02:49.473 Compiler for C supports arguments -Wundef: YES
00:02:49.473 Compiler for C supports arguments -Wwrite-strings: YES
00:02:49.473 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:02:49.473 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:02:49.473 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:02:49.473 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:02:49.473 Program objdump found: YES (/usr/bin/objdump)
00:02:49.473 Compiler for C supports arguments -mavx512f: YES
00:02:49.473 Checking if "AVX512 checking" compiles: YES
00:02:49.473 Fetching value of define "__SSE4_2__" : 1
00:02:49.473 Fetching value of define "__AES__" : 1
00:02:49.473 Fetching value of define "__AVX__" : 1
00:02:49.473 Fetching value of define "__AVX2__" : 1
00:02:49.473 Fetching value of define "__AVX512BW__" : 1
00:02:49.473 Fetching value of define "__AVX512CD__" : 1
00:02:49.473 Fetching value of define "__AVX512DQ__" : 1
00:02:49.473 Fetching value of define "__AVX512F__" : 1
00:02:49.473 Fetching value of define "__AVX512VL__" : 1
00:02:49.473 Fetching value of define "__PCLMUL__" : 1
00:02:49.473 Fetching value of define "__RDRND__" : 1
00:02:49.473 Fetching value of define "__RDSEED__" : 1
00:02:49.473 Fetching value of define "__VPCLMULQDQ__" : 1
00:02:49.473 Fetching value of define "__znver1__" : (undefined)
00:02:49.473 Fetching value of define "__znver2__" : (undefined)
00:02:49.473 Fetching value of define "__znver3__" : (undefined)
00:02:49.473 Fetching value of define "__znver4__" : (undefined)
00:02:49.473 Library asan found: YES
00:02:49.473 Compiler for C supports arguments -Wno-format-truncation: YES
00:02:49.473 Message: lib/log: Defining dependency "log"
00:02:49.473 Message: lib/kvargs: Defining dependency "kvargs"
00:02:49.473 Message: lib/telemetry: Defining dependency "telemetry"
00:02:49.473 Library rt found: YES
00:02:49.473 Checking for function "getentropy" : NO
00:02:49.473 Message: lib/eal: Defining dependency "eal"
00:02:49.473 Message: lib/ring: Defining dependency "ring"
00:02:49.473 Message: lib/rcu: Defining dependency "rcu"
00:02:49.473 Message: lib/mempool: Defining dependency "mempool"
00:02:49.473 Message: lib/mbuf: Defining dependency "mbuf"
00:02:49.473 Fetching value of define "__PCLMUL__" : 1 (cached)
00:02:49.473 Fetching value of define "__AVX512F__" : 1 (cached)
00:02:49.473 Fetching value of define "__AVX512BW__" : 1 (cached)
00:02:49.473 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:02:49.473 Fetching value of define "__AVX512VL__" : 1 (cached)
00:02:49.473 Fetching value of define "__VPCLMULQDQ__" : 1 (cached)
00:02:49.473 Compiler for C supports arguments -mpclmul: YES
00:02:49.473 Compiler for C supports arguments -maes: YES
00:02:49.473 Compiler for C supports arguments -mavx512f: YES (cached)
00:02:49.473 Compiler for C supports arguments -mavx512bw: YES
00:02:49.473 Compiler for C supports arguments -mavx512dq: YES
00:02:49.473 Compiler for C supports arguments -mavx512vl: YES
00:02:49.473 Compiler for C supports arguments -mvpclmulqdq: YES
00:02:49.473 Compiler for C supports arguments -mavx2: YES
00:02:49.473 Compiler for C supports arguments -mavx: YES
00:02:49.473 Message: lib/net: Defining dependency "net"
00:02:49.473 Message: lib/meter: Defining dependency "meter"
00:02:49.473 Message: lib/ethdev: Defining dependency "ethdev"
00:02:49.473 Message: lib/pci: Defining dependency "pci"
00:02:49.473 Message: lib/cmdline: Defining dependency "cmdline"
00:02:49.473 Message: lib/hash: Defining dependency "hash"
00:02:49.473 Message: lib/timer: Defining dependency "timer"
00:02:49.473 Message: lib/compressdev: Defining dependency "compressdev"
00:02:49.473 Message: lib/cryptodev: Defining dependency "cryptodev"
00:02:49.473 Message: lib/dmadev: Defining dependency "dmadev"
00:02:49.473 Compiler for C supports arguments -Wno-cast-qual: YES
00:02:49.473 Message: lib/power: Defining dependency "power"
00:02:49.473 Message: lib/reorder: Defining dependency "reorder"
00:02:49.473 Message: lib/security: Defining dependency "security"
00:02:49.473 Has header "linux/userfaultfd.h" : YES
00:02:49.473 Has header "linux/vduse.h" : YES
00:02:49.473 Message: lib/vhost: Defining dependency "vhost"
00:02:49.473 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:02:49.473 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:02:49.473 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
"bus_vdev" 00:02:49.473 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:49.473 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:49.473 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:49.473 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:49.473 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:49.473 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:49.473 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:49.473 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:49.473 Configuring doxy-api-html.conf using configuration 00:02:49.473 Configuring doxy-api-man.conf using configuration 00:02:49.473 Program mandb found: YES (/usr/bin/mandb) 00:02:49.473 Program sphinx-build found: NO 00:02:49.473 Configuring rte_build_config.h using configuration 00:02:49.473 Message: 00:02:49.473 ================= 00:02:49.473 Applications Enabled 00:02:49.473 ================= 00:02:49.473 00:02:49.473 apps: 00:02:49.473 00:02:49.473 00:02:49.473 Message: 00:02:49.473 ================= 00:02:49.473 Libraries Enabled 00:02:49.473 ================= 00:02:49.473 00:02:49.473 libs: 00:02:49.473 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:49.473 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:49.473 cryptodev, dmadev, power, reorder, security, vhost, 00:02:49.473 00:02:49.473 Message: 00:02:49.473 =============== 00:02:49.473 Drivers Enabled 00:02:49.473 =============== 00:02:49.473 00:02:49.473 common: 00:02:49.473 00:02:49.473 bus: 00:02:49.473 pci, vdev, 00:02:49.473 mempool: 00:02:49.473 ring, 00:02:49.473 dma: 00:02:49.473 00:02:49.473 net: 00:02:49.473 00:02:49.473 crypto: 00:02:49.473 00:02:49.473 compress: 00:02:49.473 00:02:49.473 vdpa: 00:02:49.473 00:02:49.473 00:02:49.473 Message: 00:02:49.473 ================= 00:02:49.473 Content Skipped 00:02:49.473 ================= 00:02:49.473 00:02:49.473 apps: 00:02:49.473 dumpcap: explicitly disabled via build config 00:02:49.473 graph: explicitly disabled via build config 00:02:49.473 pdump: explicitly disabled via build config 00:02:49.473 proc-info: explicitly disabled via build config 00:02:49.473 test-acl: explicitly disabled via build config 00:02:49.473 test-bbdev: explicitly disabled via build config 00:02:49.473 test-cmdline: explicitly disabled via build config 00:02:49.473 test-compress-perf: explicitly disabled via build config 00:02:49.473 test-crypto-perf: explicitly disabled via build config 00:02:49.473 test-dma-perf: explicitly disabled via build config 00:02:49.473 test-eventdev: explicitly disabled via build config 00:02:49.474 test-fib: explicitly disabled via build config 00:02:49.474 test-flow-perf: explicitly disabled via build config 00:02:49.474 test-gpudev: explicitly disabled via build config 00:02:49.474 test-mldev: explicitly disabled via build config 00:02:49.474 test-pipeline: explicitly disabled via build config 00:02:49.474 test-pmd: explicitly disabled via build config 00:02:49.474 test-regex: explicitly disabled via build config 00:02:49.474 test-sad: explicitly disabled via build config 00:02:49.474 test-security-perf: explicitly disabled via build config 00:02:49.474 00:02:49.474 libs: 00:02:49.474 argparse: explicitly disabled via build config 00:02:49.474 metrics: explicitly disabled via build config 00:02:49.474 acl: explicitly disabled via build config 00:02:49.474 bbdev: explicitly 
00:02:49.474 bbdev: explicitly disabled via build config
00:02:49.474 bitratestats: explicitly disabled via build config
00:02:49.474 bpf: explicitly disabled via build config
00:02:49.474 cfgfile: explicitly disabled via build config
00:02:49.474 distributor: explicitly disabled via build config
00:02:49.474 efd: explicitly disabled via build config
00:02:49.474 eventdev: explicitly disabled via build config
00:02:49.474 dispatcher: explicitly disabled via build config
00:02:49.474 gpudev: explicitly disabled via build config
00:02:49.474 gro: explicitly disabled via build config
00:02:49.474 gso: explicitly disabled via build config
00:02:49.474 ip_frag: explicitly disabled via build config
00:02:49.474 jobstats: explicitly disabled via build config
00:02:49.474 latencystats: explicitly disabled via build config
00:02:49.474 lpm: explicitly disabled via build config
00:02:49.474 member: explicitly disabled via build config
00:02:49.474 pcapng: explicitly disabled via build config
00:02:49.474 rawdev: explicitly disabled via build config
00:02:49.474 regexdev: explicitly disabled via build config
00:02:49.474 mldev: explicitly disabled via build config
00:02:49.474 rib: explicitly disabled via build config
00:02:49.474 sched: explicitly disabled via build config
00:02:49.474 stack: explicitly disabled via build config
00:02:49.474 ipsec: explicitly disabled via build config
00:02:49.474 pdcp: explicitly disabled via build config
00:02:49.474 fib: explicitly disabled via build config
00:02:49.474 port: explicitly disabled via build config
00:02:49.474 pdump: explicitly disabled via build config
00:02:49.474 table: explicitly disabled via build config
00:02:49.474 pipeline: explicitly disabled via build config
00:02:49.474 graph: explicitly disabled via build config
00:02:49.474 node: explicitly disabled via build config
00:02:49.474
00:02:49.474 drivers:
00:02:49.474 common/cpt: not in enabled drivers build config
00:02:49.474 common/dpaax: not in enabled drivers build config
00:02:49.474 common/iavf: not in enabled drivers build config
00:02:49.474 common/idpf: not in enabled drivers build config
00:02:49.474 common/ionic: not in enabled drivers build config
00:02:49.474 common/mvep: not in enabled drivers build config
00:02:49.474 common/octeontx: not in enabled drivers build config
00:02:49.474 bus/auxiliary: not in enabled drivers build config
00:02:49.474 bus/cdx: not in enabled drivers build config
00:02:49.474 bus/dpaa: not in enabled drivers build config
00:02:49.474 bus/fslmc: not in enabled drivers build config
00:02:49.474 bus/ifpga: not in enabled drivers build config
00:02:49.474 bus/platform: not in enabled drivers build config
00:02:49.474 bus/uacce: not in enabled drivers build config
00:02:49.474 bus/vmbus: not in enabled drivers build config
00:02:49.474 common/cnxk: not in enabled drivers build config
00:02:49.474 common/mlx5: not in enabled drivers build config
00:02:49.474 common/nfp: not in enabled drivers build config
00:02:49.474 common/nitrox: not in enabled drivers build config
00:02:49.474 common/qat: not in enabled drivers build config
00:02:49.474 common/sfc_efx: not in enabled drivers build config
00:02:49.474 mempool/bucket: not in enabled drivers build config
00:02:49.474 mempool/cnxk: not in enabled drivers build config
00:02:49.474 mempool/dpaa: not in enabled drivers build config
00:02:49.474 mempool/dpaa2: not in enabled drivers build config
00:02:49.474 mempool/octeontx: not in enabled drivers build config
00:02:49.474 mempool/stack: not in enabled drivers build config
dma/cnxk: not in enabled drivers build config 00:02:49.474 dma/dpaa: not in enabled drivers build config 00:02:49.474 dma/dpaa2: not in enabled drivers build config 00:02:49.474 dma/hisilicon: not in enabled drivers build config 00:02:49.474 dma/idxd: not in enabled drivers build config 00:02:49.474 dma/ioat: not in enabled drivers build config 00:02:49.474 dma/skeleton: not in enabled drivers build config 00:02:49.474 net/af_packet: not in enabled drivers build config 00:02:49.474 net/af_xdp: not in enabled drivers build config 00:02:49.474 net/ark: not in enabled drivers build config 00:02:49.474 net/atlantic: not in enabled drivers build config 00:02:49.474 net/avp: not in enabled drivers build config 00:02:49.474 net/axgbe: not in enabled drivers build config 00:02:49.474 net/bnx2x: not in enabled drivers build config 00:02:49.474 net/bnxt: not in enabled drivers build config 00:02:49.474 net/bonding: not in enabled drivers build config 00:02:49.474 net/cnxk: not in enabled drivers build config 00:02:49.474 net/cpfl: not in enabled drivers build config 00:02:49.474 net/cxgbe: not in enabled drivers build config 00:02:49.474 net/dpaa: not in enabled drivers build config 00:02:49.474 net/dpaa2: not in enabled drivers build config 00:02:49.474 net/e1000: not in enabled drivers build config 00:02:49.474 net/ena: not in enabled drivers build config 00:02:49.474 net/enetc: not in enabled drivers build config 00:02:49.474 net/enetfec: not in enabled drivers build config 00:02:49.474 net/enic: not in enabled drivers build config 00:02:49.474 net/failsafe: not in enabled drivers build config 00:02:49.474 net/fm10k: not in enabled drivers build config 00:02:49.474 net/gve: not in enabled drivers build config 00:02:49.474 net/hinic: not in enabled drivers build config 00:02:49.474 net/hns3: not in enabled drivers build config 00:02:49.474 net/i40e: not in enabled drivers build config 00:02:49.474 net/iavf: not in enabled drivers build config 00:02:49.474 net/ice: not in enabled drivers build config 00:02:49.474 net/idpf: not in enabled drivers build config 00:02:49.474 net/igc: not in enabled drivers build config 00:02:49.474 net/ionic: not in enabled drivers build config 00:02:49.474 net/ipn3ke: not in enabled drivers build config 00:02:49.474 net/ixgbe: not in enabled drivers build config 00:02:49.474 net/mana: not in enabled drivers build config 00:02:49.474 net/memif: not in enabled drivers build config 00:02:49.474 net/mlx4: not in enabled drivers build config 00:02:49.474 net/mlx5: not in enabled drivers build config 00:02:49.474 net/mvneta: not in enabled drivers build config 00:02:49.474 net/mvpp2: not in enabled drivers build config 00:02:49.474 net/netvsc: not in enabled drivers build config 00:02:49.474 net/nfb: not in enabled drivers build config 00:02:49.474 net/nfp: not in enabled drivers build config 00:02:49.474 net/ngbe: not in enabled drivers build config 00:02:49.474 net/null: not in enabled drivers build config 00:02:49.474 net/octeontx: not in enabled drivers build config 00:02:49.474 net/octeon_ep: not in enabled drivers build config 00:02:49.474 net/pcap: not in enabled drivers build config 00:02:49.474 net/pfe: not in enabled drivers build config 00:02:49.474 net/qede: not in enabled drivers build config 00:02:49.474 net/ring: not in enabled drivers build config 00:02:49.474 net/sfc: not in enabled drivers build config 00:02:49.474 net/softnic: not in enabled drivers build config 00:02:49.474 net/tap: not in enabled drivers build config 00:02:49.474 net/thunderx: not in 
enabled drivers build config 00:02:49.474 net/txgbe: not in enabled drivers build config 00:02:49.474 net/vdev_netvsc: not in enabled drivers build config 00:02:49.474 net/vhost: not in enabled drivers build config 00:02:49.474 net/virtio: not in enabled drivers build config 00:02:49.474 net/vmxnet3: not in enabled drivers build config 00:02:49.474 raw/*: missing internal dependency, "rawdev" 00:02:49.474 crypto/armv8: not in enabled drivers build config 00:02:49.474 crypto/bcmfs: not in enabled drivers build config 00:02:49.474 crypto/caam_jr: not in enabled drivers build config 00:02:49.474 crypto/ccp: not in enabled drivers build config 00:02:49.474 crypto/cnxk: not in enabled drivers build config 00:02:49.474 crypto/dpaa_sec: not in enabled drivers build config 00:02:49.474 crypto/dpaa2_sec: not in enabled drivers build config 00:02:49.474 crypto/ipsec_mb: not in enabled drivers build config 00:02:49.474 crypto/mlx5: not in enabled drivers build config 00:02:49.474 crypto/mvsam: not in enabled drivers build config 00:02:49.474 crypto/nitrox: not in enabled drivers build config 00:02:49.474 crypto/null: not in enabled drivers build config 00:02:49.474 crypto/octeontx: not in enabled drivers build config 00:02:49.474 crypto/openssl: not in enabled drivers build config 00:02:49.474 crypto/scheduler: not in enabled drivers build config 00:02:49.474 crypto/uadk: not in enabled drivers build config 00:02:49.474 crypto/virtio: not in enabled drivers build config 00:02:49.474 compress/isal: not in enabled drivers build config 00:02:49.474 compress/mlx5: not in enabled drivers build config 00:02:49.474 compress/nitrox: not in enabled drivers build config 00:02:49.474 compress/octeontx: not in enabled drivers build config 00:02:49.474 compress/zlib: not in enabled drivers build config 00:02:49.474 regex/*: missing internal dependency, "regexdev" 00:02:49.474 ml/*: missing internal dependency, "mldev" 00:02:49.474 vdpa/ifc: not in enabled drivers build config 00:02:49.474 vdpa/mlx5: not in enabled drivers build config 00:02:49.474 vdpa/nfp: not in enabled drivers build config 00:02:49.474 vdpa/sfc: not in enabled drivers build config 00:02:49.474 event/*: missing internal dependency, "eventdev" 00:02:49.474 baseband/*: missing internal dependency, "bbdev" 00:02:49.474 gpu/*: missing internal dependency, "gpudev" 00:02:49.474 00:02:49.474 00:02:49.474 Build targets in project: 84 00:02:49.474 00:02:49.474 DPDK 24.03.0 00:02:49.474 00:02:49.474 User defined options 00:02:49.474 buildtype : debug 00:02:49.474 default_library : shared 00:02:49.474 libdir : lib 00:02:49.474 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:49.474 b_sanitize : address 00:02:49.474 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:49.474 c_link_args : 00:02:49.474 cpu_instruction_set: native 00:02:49.474 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:49.474 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:49.474 enable_docs : false 00:02:49.474 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:49.474 enable_kmods : false 00:02:49.474 
max_lcores : 128 00:02:49.474 tests : false 00:02:49.474 00:02:49.474 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:49.474 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:49.474 [1/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:49.474 [2/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:49.474 [3/267] Linking static target lib/librte_kvargs.a 00:02:49.474 [4/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:49.474 [5/267] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:49.474 [6/267] Linking static target lib/librte_log.a 00:02:49.735 [7/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:49.735 [8/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:49.735 [9/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:49.735 [10/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:49.735 [11/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:49.735 [12/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:49.735 [13/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:49.735 [14/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.996 [15/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:49.996 [16/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:49.996 [17/267] Linking static target lib/librte_telemetry.a 00:02:49.996 [18/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:50.256 [19/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:50.256 [20/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:50.256 [21/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:50.256 [22/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:50.256 [23/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:50.256 [24/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.256 [25/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:50.256 [26/267] Linking target lib/librte_log.so.24.1 00:02:50.256 [27/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:50.518 [28/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:50.518 [29/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:50.518 [30/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:50.518 [31/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:50.518 [32/267] Linking target lib/librte_kvargs.so.24.1 00:02:50.518 [33/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.779 [34/267] Linking target lib/librte_telemetry.so.24.1 00:02:50.779 [35/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:50.779 [36/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:50.779 [37/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:50.779 [38/267] Compiling C object 
lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:50.779 [39/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:50.779 [40/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:50.779 [41/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:50.779 [42/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:50.779 [43/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:50.779 [44/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:51.040 [45/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:51.040 [46/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:51.040 [47/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:51.040 [48/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:51.040 [49/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:51.300 [50/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:51.300 [51/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:51.300 [52/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:51.300 [53/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:51.300 [54/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:51.300 [55/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:51.300 [56/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:51.300 [57/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:51.300 [58/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:51.562 [59/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:51.562 [60/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:51.562 [61/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:51.562 [62/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:51.562 [63/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:51.823 [64/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:51.823 [65/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:51.823 [66/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:51.823 [67/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:51.823 [68/267] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:52.085 [69/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:52.085 [70/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:52.085 [71/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:52.085 [72/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:52.085 [73/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:52.085 [74/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:52.085 [75/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:52.085 [76/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:52.085 [77/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:52.344 [78/267] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:52.344 [79/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:52.344 [80/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:52.344 [81/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:52.344 [82/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:52.605 [83/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:52.605 [84/267] Linking static target lib/librte_ring.a 00:02:52.605 [85/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:52.605 [86/267] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:52.605 [87/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:52.605 [88/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:52.605 [89/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:52.865 [90/267] Linking static target lib/librte_eal.a 00:02:52.865 [91/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:52.865 [92/267] Linking static target lib/librte_rcu.a 00:02:52.865 [93/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:52.865 [94/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:52.865 [95/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.151 [96/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:53.151 [97/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:53.151 [98/267] Linking static target lib/librte_mempool.a 00:02:53.151 [99/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:53.151 [100/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:53.151 [101/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:53.151 [102/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.151 [103/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:53.151 [104/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:53.412 [105/267] Linking static target lib/librte_mbuf.a 00:02:53.412 [106/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:53.412 [107/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:53.412 [108/267] Linking static target lib/librte_meter.a 00:02:53.412 [109/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:02:53.412 [110/267] Linking static target lib/librte_net.a 00:02:53.412 [111/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:53.671 [112/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:53.671 [113/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:53.671 [114/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:53.671 [115/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.671 [116/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.930 [117/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:53.930 [118/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:53.930 [119/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.930 
[120/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:54.189 [121/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:54.189 [122/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.189 [123/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:54.447 [124/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:54.447 [125/267] Linking static target lib/librte_pci.a 00:02:54.447 [126/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:54.447 [127/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:54.447 [128/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:54.447 [129/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:54.447 [130/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:54.447 [131/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:54.704 [132/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:54.704 [133/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:54.704 [134/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.704 [135/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:54.704 [136/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:54.704 [137/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:54.704 [138/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:54.704 [139/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:54.704 [140/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:54.704 [141/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:54.704 [142/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:54.704 [143/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:54.704 [144/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:54.963 [145/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:54.963 [146/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:54.963 [147/267] Linking static target lib/librte_cmdline.a 00:02:54.963 [148/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:54.963 [149/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:55.221 [150/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:55.221 [151/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:55.221 [152/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:55.221 [153/267] Linking static target lib/librte_ethdev.a 00:02:55.221 [154/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:55.221 [155/267] Linking static target lib/librte_timer.a 00:02:55.221 [156/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:55.221 [157/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:55.221 [158/267] Linking static target lib/librte_compressdev.a 00:02:55.480 [159/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:55.480 [160/267] Compiling C object 
lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:55.480 [161/267] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:55.480 [162/267] Linking static target lib/librte_hash.a 00:02:55.480 [163/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:55.739 [164/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:55.739 [165/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:55.739 [166/267] Linking static target lib/librte_dmadev.a 00:02:55.739 [167/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.739 [168/267] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:55.739 [169/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:55.997 [170/267] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:55.997 [171/267] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:55.997 [172/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.256 [173/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:56.256 [174/267] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:56.256 [175/267] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.256 [176/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:56.256 [177/267] Linking static target lib/librte_cryptodev.a 00:02:56.256 [178/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:56.256 [179/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.256 [180/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:56.514 [181/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:56.514 [182/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.514 [183/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:56.514 [184/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:56.514 [185/267] Linking static target lib/librte_power.a 00:02:56.773 [186/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:56.773 [187/267] Linking static target lib/librte_reorder.a 00:02:56.773 [188/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:56.773 [189/267] Linking static target lib/librte_security.a 00:02:56.773 [190/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:56.773 [191/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:56.773 [192/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:57.032 [193/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:57.032 [194/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.290 [195/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.290 [196/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:57.548 [197/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.548 [198/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:57.548 [199/267] Compiling C object 
drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:57.548 [200/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:57.806 [201/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:57.806 [202/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:57.806 [203/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:57.806 [204/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:57.806 [205/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:57.806 [206/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:57.806 [207/267] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:58.064 [208/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:58.064 [209/267] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:58.064 [210/267] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:58.064 [211/267] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.064 [212/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:58.064 [213/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:58.064 [214/267] Linking static target drivers/librte_bus_pci.a 00:02:58.064 [215/267] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:58.323 [216/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:58.323 [217/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:58.323 [218/267] Linking static target drivers/librte_bus_vdev.a 00:02:58.323 [219/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:58.323 [220/267] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:58.323 [221/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.323 [222/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:58.581 [223/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:58.581 [224/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:58.581 [225/267] Linking static target drivers/librte_mempool_ring.a 00:02:58.581 [226/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.519 [227/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:00.091 [228/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.091 [229/267] Linking target lib/librte_eal.so.24.1 00:03:00.091 [230/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:03:00.091 [231/267] Linking target lib/librte_dmadev.so.24.1 00:03:00.091 [232/267] Linking target lib/librte_pci.so.24.1 00:03:00.091 [233/267] Linking target lib/librte_timer.so.24.1 00:03:00.091 [234/267] Linking target lib/librte_meter.so.24.1 00:03:00.091 [235/267] Linking target lib/librte_ring.so.24.1 00:03:00.091 [236/267] Linking target drivers/librte_bus_vdev.so.24.1 00:03:00.351 [237/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:03:00.351 [238/267] Generating symbol file 
lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:03:00.351 [239/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:03:00.351 [240/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:03:00.352 [241/267] Linking target lib/librte_rcu.so.24.1 00:03:00.352 [242/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:03:00.352 [243/267] Linking target lib/librte_mempool.so.24.1 00:03:00.352 [244/267] Linking target drivers/librte_bus_pci.so.24.1 00:03:00.352 [245/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:03:00.352 [246/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:03:00.612 [247/267] Linking target drivers/librte_mempool_ring.so.24.1 00:03:00.612 [248/267] Linking target lib/librte_mbuf.so.24.1 00:03:00.612 [249/267] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:03:00.612 [250/267] Linking target lib/librte_compressdev.so.24.1 00:03:00.612 [251/267] Linking target lib/librte_reorder.so.24.1 00:03:00.612 [252/267] Linking target lib/librte_net.so.24.1 00:03:00.612 [253/267] Linking target lib/librte_cryptodev.so.24.1 00:03:00.612 [254/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:03:00.612 [255/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:03:00.612 [256/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.874 [257/267] Linking target lib/librte_hash.so.24.1 00:03:00.874 [258/267] Linking target lib/librte_cmdline.so.24.1 00:03:00.874 [259/267] Linking target lib/librte_security.so.24.1 00:03:00.874 [260/267] Linking target lib/librte_ethdev.so.24.1 00:03:00.874 [261/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:03:00.874 [262/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:03:00.874 [263/267] Linking target lib/librte_power.so.24.1 00:03:01.898 [264/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:01.898 [265/267] Linking static target lib/librte_vhost.a 00:03:02.839 [266/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.839 [267/267] Linking target lib/librte_vhost.so.24.1 00:03:02.839 INFO: autodetecting backend as ninja 00:03:02.839 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:03:17.736 CC lib/ut/ut.o 00:03:17.736 CC lib/log/log_flags.o 00:03:17.736 CC lib/log/log.o 00:03:17.736 CC lib/ut_mock/mock.o 00:03:17.736 CC lib/log/log_deprecated.o 00:03:17.736 LIB libspdk_ut.a 00:03:17.736 LIB libspdk_log.a 00:03:17.736 LIB libspdk_ut_mock.a 00:03:17.736 SO libspdk_ut.so.2.0 00:03:17.736 SO libspdk_ut_mock.so.6.0 00:03:17.736 SO libspdk_log.so.7.1 00:03:17.736 SYMLINK libspdk_ut.so 00:03:17.736 SYMLINK libspdk_ut_mock.so 00:03:17.736 SYMLINK libspdk_log.so 00:03:17.736 CC lib/ioat/ioat.o 00:03:17.736 CXX lib/trace_parser/trace.o 00:03:17.736 CC lib/util/bit_array.o 00:03:17.736 CC lib/util/base64.o 00:03:17.736 CC lib/util/crc16.o 00:03:17.736 CC lib/util/cpuset.o 00:03:17.736 CC lib/util/crc32.o 00:03:17.736 CC lib/util/crc32c.o 00:03:17.736 CC lib/dma/dma.o 00:03:17.736 CC lib/vfio_user/host/vfio_user_pci.o 00:03:17.736 CC lib/util/crc32_ieee.o 00:03:17.736 CC lib/util/crc64.o 00:03:17.736 CC lib/util/dif.o 
00:03:17.736 CC lib/vfio_user/host/vfio_user.o 00:03:17.736 LIB libspdk_dma.a 00:03:17.736 SO libspdk_dma.so.5.0 00:03:17.736 CC lib/util/fd.o 00:03:17.736 CC lib/util/fd_group.o 00:03:17.736 CC lib/util/file.o 00:03:17.736 LIB libspdk_ioat.a 00:03:17.736 CC lib/util/hexlify.o 00:03:17.736 SO libspdk_ioat.so.7.0 00:03:17.736 SYMLINK libspdk_dma.so 00:03:17.736 CC lib/util/iov.o 00:03:17.736 CC lib/util/math.o 00:03:17.736 SYMLINK libspdk_ioat.so 00:03:17.736 CC lib/util/net.o 00:03:17.736 CC lib/util/pipe.o 00:03:17.736 LIB libspdk_vfio_user.a 00:03:17.736 SO libspdk_vfio_user.so.5.0 00:03:17.736 CC lib/util/strerror_tls.o 00:03:17.736 CC lib/util/string.o 00:03:17.736 CC lib/util/uuid.o 00:03:17.736 CC lib/util/xor.o 00:03:17.736 CC lib/util/zipf.o 00:03:17.736 SYMLINK libspdk_vfio_user.so 00:03:17.736 CC lib/util/md5.o 00:03:17.994 LIB libspdk_util.a 00:03:17.994 SO libspdk_util.so.10.0 00:03:18.253 SYMLINK libspdk_util.so 00:03:18.253 LIB libspdk_trace_parser.a 00:03:18.253 SO libspdk_trace_parser.so.6.0 00:03:18.253 CC lib/vmd/vmd.o 00:03:18.253 CC lib/rdma_provider/common.o 00:03:18.253 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:18.253 CC lib/vmd/led.o 00:03:18.253 CC lib/rdma_utils/rdma_utils.o 00:03:18.253 CC lib/json/json_parse.o 00:03:18.253 CC lib/conf/conf.o 00:03:18.253 CC lib/idxd/idxd.o 00:03:18.253 CC lib/env_dpdk/env.o 00:03:18.253 SYMLINK libspdk_trace_parser.so 00:03:18.253 CC lib/idxd/idxd_user.o 00:03:18.511 CC lib/json/json_util.o 00:03:18.511 CC lib/json/json_write.o 00:03:18.511 LIB libspdk_rdma_provider.a 00:03:18.511 SO libspdk_rdma_provider.so.6.0 00:03:18.511 LIB libspdk_conf.a 00:03:18.511 CC lib/idxd/idxd_kernel.o 00:03:18.511 SYMLINK libspdk_rdma_provider.so 00:03:18.511 CC lib/env_dpdk/memory.o 00:03:18.511 SO libspdk_conf.so.6.0 00:03:18.511 LIB libspdk_rdma_utils.a 00:03:18.511 SO libspdk_rdma_utils.so.1.0 00:03:18.511 SYMLINK libspdk_conf.so 00:03:18.511 CC lib/env_dpdk/pci.o 00:03:18.511 CC lib/env_dpdk/init.o 00:03:18.511 SYMLINK libspdk_rdma_utils.so 00:03:18.511 CC lib/env_dpdk/threads.o 00:03:18.511 CC lib/env_dpdk/pci_ioat.o 00:03:18.769 CC lib/env_dpdk/pci_virtio.o 00:03:18.769 LIB libspdk_json.a 00:03:18.769 SO libspdk_json.so.6.0 00:03:18.769 CC lib/env_dpdk/pci_vmd.o 00:03:18.769 CC lib/env_dpdk/pci_idxd.o 00:03:18.769 LIB libspdk_idxd.a 00:03:18.769 CC lib/env_dpdk/pci_event.o 00:03:18.769 SYMLINK libspdk_json.so 00:03:18.769 SO libspdk_idxd.so.12.1 00:03:18.769 LIB libspdk_vmd.a 00:03:18.769 CC lib/env_dpdk/sigbus_handler.o 00:03:18.769 CC lib/env_dpdk/pci_dpdk.o 00:03:18.769 SYMLINK libspdk_idxd.so 00:03:18.769 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:18.769 SO libspdk_vmd.so.6.0 00:03:19.027 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:19.027 CC lib/jsonrpc/jsonrpc_server.o 00:03:19.027 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:19.027 SYMLINK libspdk_vmd.so 00:03:19.027 CC lib/jsonrpc/jsonrpc_client.o 00:03:19.027 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:19.285 LIB libspdk_jsonrpc.a 00:03:19.285 SO libspdk_jsonrpc.so.6.0 00:03:19.285 SYMLINK libspdk_jsonrpc.so 00:03:19.544 CC lib/rpc/rpc.o 00:03:19.802 LIB libspdk_env_dpdk.a 00:03:19.802 LIB libspdk_rpc.a 00:03:19.802 SO libspdk_rpc.so.6.0 00:03:19.802 SO libspdk_env_dpdk.so.15.0 00:03:19.802 SYMLINK libspdk_rpc.so 00:03:19.802 SYMLINK libspdk_env_dpdk.so 00:03:20.061 CC lib/trace/trace_flags.o 00:03:20.061 CC lib/trace/trace_rpc.o 00:03:20.061 CC lib/trace/trace.o 00:03:20.061 CC lib/keyring/keyring.o 00:03:20.061 CC lib/keyring/keyring_rpc.o 00:03:20.061 CC lib/notify/notify_rpc.o 00:03:20.061 
CC lib/notify/notify.o 00:03:20.061 LIB libspdk_notify.a 00:03:20.061 SO libspdk_notify.so.6.0 00:03:20.061 SYMLINK libspdk_notify.so 00:03:20.061 LIB libspdk_keyring.a 00:03:20.318 LIB libspdk_trace.a 00:03:20.318 SO libspdk_keyring.so.2.0 00:03:20.318 SO libspdk_trace.so.11.0 00:03:20.318 SYMLINK libspdk_keyring.so 00:03:20.318 SYMLINK libspdk_trace.so 00:03:20.576 CC lib/thread/thread.o 00:03:20.576 CC lib/thread/iobuf.o 00:03:20.576 CC lib/sock/sock.o 00:03:20.576 CC lib/sock/sock_rpc.o 00:03:20.834 LIB libspdk_sock.a 00:03:20.834 SO libspdk_sock.so.10.0 00:03:21.091 SYMLINK libspdk_sock.so 00:03:21.348 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:21.348 CC lib/nvme/nvme_ns_cmd.o 00:03:21.348 CC lib/nvme/nvme_ctrlr.o 00:03:21.348 CC lib/nvme/nvme_fabric.o 00:03:21.348 CC lib/nvme/nvme_pcie.o 00:03:21.348 CC lib/nvme/nvme_ns.o 00:03:21.348 CC lib/nvme/nvme_pcie_common.o 00:03:21.348 CC lib/nvme/nvme_qpair.o 00:03:21.348 CC lib/nvme/nvme.o 00:03:21.605 CC lib/nvme/nvme_quirks.o 00:03:21.863 CC lib/nvme/nvme_transport.o 00:03:21.863 CC lib/nvme/nvme_discovery.o 00:03:21.863 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:21.863 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:22.138 LIB libspdk_thread.a 00:03:22.138 CC lib/nvme/nvme_tcp.o 00:03:22.138 SO libspdk_thread.so.10.2 00:03:22.138 CC lib/nvme/nvme_opal.o 00:03:22.138 CC lib/nvme/nvme_io_msg.o 00:03:22.138 SYMLINK libspdk_thread.so 00:03:22.138 CC lib/nvme/nvme_poll_group.o 00:03:22.138 CC lib/nvme/nvme_zns.o 00:03:22.406 CC lib/nvme/nvme_stubs.o 00:03:22.406 CC lib/nvme/nvme_auth.o 00:03:22.406 CC lib/nvme/nvme_cuse.o 00:03:22.406 CC lib/nvme/nvme_rdma.o 00:03:22.664 CC lib/accel/accel.o 00:03:22.664 CC lib/blob/blobstore.o 00:03:22.664 CC lib/init/json_config.o 00:03:22.664 CC lib/init/subsystem.o 00:03:22.664 CC lib/virtio/virtio.o 00:03:22.921 CC lib/virtio/virtio_vhost_user.o 00:03:22.921 CC lib/init/subsystem_rpc.o 00:03:23.179 CC lib/blob/request.o 00:03:23.179 CC lib/init/rpc.o 00:03:23.179 CC lib/virtio/virtio_vfio_user.o 00:03:23.179 CC lib/virtio/virtio_pci.o 00:03:23.436 LIB libspdk_init.a 00:03:23.436 SO libspdk_init.so.6.0 00:03:23.436 CC lib/blob/zeroes.o 00:03:23.436 CC lib/blob/blob_bs_dev.o 00:03:23.436 SYMLINK libspdk_init.so 00:03:23.436 CC lib/accel/accel_rpc.o 00:03:23.436 CC lib/fsdev/fsdev.o 00:03:23.436 CC lib/accel/accel_sw.o 00:03:23.436 CC lib/event/app.o 00:03:23.436 CC lib/fsdev/fsdev_io.o 00:03:23.436 LIB libspdk_virtio.a 00:03:23.436 CC lib/fsdev/fsdev_rpc.o 00:03:23.693 LIB libspdk_nvme.a 00:03:23.693 SO libspdk_virtio.so.7.0 00:03:23.693 CC lib/event/reactor.o 00:03:23.694 CC lib/event/log_rpc.o 00:03:23.694 CC lib/event/app_rpc.o 00:03:23.694 SYMLINK libspdk_virtio.so 00:03:23.694 CC lib/event/scheduler_static.o 00:03:23.694 LIB libspdk_accel.a 00:03:23.694 SO libspdk_nvme.so.14.0 00:03:23.694 SO libspdk_accel.so.16.0 00:03:23.952 SYMLINK libspdk_accel.so 00:03:23.952 LIB libspdk_fsdev.a 00:03:23.952 SYMLINK libspdk_nvme.so 00:03:23.952 SO libspdk_fsdev.so.1.0 00:03:23.952 LIB libspdk_event.a 00:03:23.952 CC lib/bdev/bdev.o 00:03:23.952 CC lib/bdev/bdev_zone.o 00:03:23.952 CC lib/bdev/bdev_rpc.o 00:03:23.952 CC lib/bdev/scsi_nvme.o 00:03:23.952 CC lib/bdev/part.o 00:03:24.209 SYMLINK libspdk_fsdev.so 00:03:24.209 SO libspdk_event.so.14.0 00:03:24.209 SYMLINK libspdk_event.so 00:03:24.209 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:03:25.143 LIB libspdk_fuse_dispatcher.a 00:03:25.143 SO libspdk_fuse_dispatcher.so.1.0 00:03:25.143 SYMLINK libspdk_fuse_dispatcher.so 00:03:25.711 LIB libspdk_blob.a 00:03:25.711 SO 
libspdk_blob.so.11.0 00:03:25.711 SYMLINK libspdk_blob.so 00:03:25.970 CC lib/lvol/lvol.o 00:03:25.970 CC lib/blobfs/tree.o 00:03:25.970 CC lib/blobfs/blobfs.o 00:03:26.906 LIB libspdk_lvol.a 00:03:26.906 SO libspdk_lvol.so.10.0 00:03:26.906 LIB libspdk_bdev.a 00:03:26.906 SYMLINK libspdk_lvol.so 00:03:26.906 SO libspdk_bdev.so.17.0 00:03:26.906 LIB libspdk_blobfs.a 00:03:26.906 SO libspdk_blobfs.so.10.0 00:03:26.906 SYMLINK libspdk_bdev.so 00:03:26.906 SYMLINK libspdk_blobfs.so 00:03:27.164 CC lib/nbd/nbd_rpc.o 00:03:27.164 CC lib/nbd/nbd.o 00:03:27.164 CC lib/nvmf/ctrlr.o 00:03:27.164 CC lib/nvmf/ctrlr_discovery.o 00:03:27.164 CC lib/nvmf/ctrlr_bdev.o 00:03:27.164 CC lib/nvmf/nvmf.o 00:03:27.164 CC lib/nvmf/subsystem.o 00:03:27.164 CC lib/ublk/ublk.o 00:03:27.164 CC lib/ftl/ftl_core.o 00:03:27.164 CC lib/scsi/dev.o 00:03:27.164 CC lib/ftl/ftl_init.o 00:03:27.164 CC lib/scsi/lun.o 00:03:27.422 CC lib/scsi/port.o 00:03:27.422 CC lib/nvmf/nvmf_rpc.o 00:03:27.422 LIB libspdk_nbd.a 00:03:27.422 CC lib/ftl/ftl_layout.o 00:03:27.422 SO libspdk_nbd.so.7.0 00:03:27.422 SYMLINK libspdk_nbd.so 00:03:27.422 CC lib/ftl/ftl_debug.o 00:03:27.422 CC lib/scsi/scsi.o 00:03:27.422 CC lib/scsi/scsi_bdev.o 00:03:27.681 CC lib/nvmf/transport.o 00:03:27.681 CC lib/nvmf/tcp.o 00:03:27.681 CC lib/nvmf/stubs.o 00:03:27.681 CC lib/ftl/ftl_io.o 00:03:27.681 CC lib/ublk/ublk_rpc.o 00:03:27.939 LIB libspdk_ublk.a 00:03:27.939 CC lib/ftl/ftl_sb.o 00:03:27.939 CC lib/ftl/ftl_l2p.o 00:03:27.939 SO libspdk_ublk.so.3.0 00:03:27.939 SYMLINK libspdk_ublk.so 00:03:27.939 CC lib/ftl/ftl_l2p_flat.o 00:03:27.939 CC lib/scsi/scsi_pr.o 00:03:27.939 CC lib/nvmf/mdns_server.o 00:03:27.939 CC lib/ftl/ftl_nv_cache.o 00:03:28.198 CC lib/ftl/ftl_band.o 00:03:28.198 CC lib/ftl/ftl_band_ops.o 00:03:28.198 CC lib/nvmf/rdma.o 00:03:28.198 CC lib/nvmf/auth.o 00:03:28.198 CC lib/ftl/ftl_writer.o 00:03:28.198 CC lib/scsi/scsi_rpc.o 00:03:28.456 CC lib/ftl/ftl_rq.o 00:03:28.456 CC lib/ftl/ftl_reloc.o 00:03:28.456 CC lib/scsi/task.o 00:03:28.456 CC lib/ftl/ftl_l2p_cache.o 00:03:28.456 CC lib/ftl/ftl_p2l.o 00:03:28.456 CC lib/ftl/ftl_p2l_log.o 00:03:28.456 CC lib/ftl/mngt/ftl_mngt.o 00:03:28.456 LIB libspdk_scsi.a 00:03:28.714 SO libspdk_scsi.so.9.0 00:03:28.714 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:28.714 SYMLINK libspdk_scsi.so 00:03:28.714 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:28.714 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:28.714 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:28.714 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:28.972 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:28.972 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:28.972 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:28.972 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:28.972 CC lib/iscsi/conn.o 00:03:28.972 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:28.972 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:28.972 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:29.230 CC lib/ftl/utils/ftl_conf.o 00:03:29.230 CC lib/ftl/utils/ftl_md.o 00:03:29.230 CC lib/ftl/utils/ftl_mempool.o 00:03:29.230 CC lib/vhost/vhost.o 00:03:29.230 CC lib/ftl/utils/ftl_bitmap.o 00:03:29.230 CC lib/vhost/vhost_rpc.o 00:03:29.230 CC lib/vhost/vhost_scsi.o 00:03:29.230 CC lib/ftl/utils/ftl_property.o 00:03:29.230 CC lib/vhost/vhost_blk.o 00:03:29.230 CC lib/vhost/rte_vhost_user.o 00:03:29.230 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:29.488 CC lib/iscsi/init_grp.o 00:03:29.488 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:29.488 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:29.488 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:29.488 CC lib/iscsi/iscsi.o 00:03:29.746 
CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:29.746 CC lib/iscsi/param.o 00:03:29.746 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:29.746 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:29.746 CC lib/iscsi/portal_grp.o 00:03:29.746 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:29.746 CC lib/iscsi/tgt_node.o 00:03:29.746 LIB libspdk_nvmf.a 00:03:30.004 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:30.004 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:30.004 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:30.004 SO libspdk_nvmf.so.19.1 00:03:30.004 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:03:30.004 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:03:30.004 SYMLINK libspdk_nvmf.so 00:03:30.004 CC lib/ftl/base/ftl_base_dev.o 00:03:30.004 CC lib/ftl/base/ftl_base_bdev.o 00:03:30.004 CC lib/iscsi/iscsi_subsystem.o 00:03:30.261 CC lib/ftl/ftl_trace.o 00:03:30.261 CC lib/iscsi/iscsi_rpc.o 00:03:30.261 CC lib/iscsi/task.o 00:03:30.261 LIB libspdk_vhost.a 00:03:30.261 LIB libspdk_ftl.a 00:03:30.261 SO libspdk_vhost.so.8.0 00:03:30.519 SYMLINK libspdk_vhost.so 00:03:30.519 SO libspdk_ftl.so.9.0 00:03:30.777 SYMLINK libspdk_ftl.so 00:03:30.777 LIB libspdk_iscsi.a 00:03:30.777 SO libspdk_iscsi.so.8.0 00:03:31.034 SYMLINK libspdk_iscsi.so 00:03:31.292 CC module/env_dpdk/env_dpdk_rpc.o 00:03:31.292 CC module/sock/posix/posix.o 00:03:31.292 CC module/keyring/linux/keyring.o 00:03:31.292 CC module/keyring/file/keyring.o 00:03:31.292 CC module/accel/error/accel_error.o 00:03:31.292 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:31.292 CC module/scheduler/gscheduler/gscheduler.o 00:03:31.292 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:31.292 CC module/fsdev/aio/fsdev_aio.o 00:03:31.292 CC module/blob/bdev/blob_bdev.o 00:03:31.292 LIB libspdk_env_dpdk_rpc.a 00:03:31.292 SO libspdk_env_dpdk_rpc.so.6.0 00:03:31.550 SYMLINK libspdk_env_dpdk_rpc.so 00:03:31.550 CC module/accel/error/accel_error_rpc.o 00:03:31.550 LIB libspdk_scheduler_gscheduler.a 00:03:31.550 CC module/keyring/linux/keyring_rpc.o 00:03:31.550 SO libspdk_scheduler_gscheduler.so.4.0 00:03:31.550 CC module/keyring/file/keyring_rpc.o 00:03:31.550 LIB libspdk_scheduler_dpdk_governor.a 00:03:31.550 LIB libspdk_scheduler_dynamic.a 00:03:31.550 CC module/fsdev/aio/fsdev_aio_rpc.o 00:03:31.550 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:31.550 SO libspdk_scheduler_dynamic.so.4.0 00:03:31.550 SYMLINK libspdk_scheduler_gscheduler.so 00:03:31.550 CC module/fsdev/aio/linux_aio_mgr.o 00:03:31.550 LIB libspdk_accel_error.a 00:03:31.550 SYMLINK libspdk_scheduler_dynamic.so 00:03:31.550 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:31.550 SO libspdk_accel_error.so.2.0 00:03:31.550 LIB libspdk_keyring_linux.a 00:03:31.550 LIB libspdk_keyring_file.a 00:03:31.550 SO libspdk_keyring_linux.so.1.0 00:03:31.550 SO libspdk_keyring_file.so.2.0 00:03:31.550 SYMLINK libspdk_accel_error.so 00:03:31.550 LIB libspdk_blob_bdev.a 00:03:31.550 SO libspdk_blob_bdev.so.11.0 00:03:31.550 SYMLINK libspdk_keyring_linux.so 00:03:31.550 SYMLINK libspdk_keyring_file.so 00:03:31.550 CC module/accel/dsa/accel_dsa.o 00:03:31.550 CC module/accel/dsa/accel_dsa_rpc.o 00:03:31.550 CC module/accel/ioat/accel_ioat.o 00:03:31.550 CC module/accel/ioat/accel_ioat_rpc.o 00:03:31.550 SYMLINK libspdk_blob_bdev.so 00:03:31.807 CC module/accel/iaa/accel_iaa.o 00:03:31.807 CC module/accel/iaa/accel_iaa_rpc.o 00:03:31.807 LIB libspdk_accel_ioat.a 00:03:31.807 SO libspdk_accel_ioat.so.6.0 00:03:31.807 CC module/bdev/delay/vbdev_delay.o 00:03:31.807 CC module/blobfs/bdev/blobfs_bdev.o 00:03:31.807 CC module/bdev/gpt/gpt.o 
00:03:31.807 CC module/bdev/error/vbdev_error.o 00:03:31.807 SYMLINK libspdk_accel_ioat.so 00:03:31.807 CC module/bdev/gpt/vbdev_gpt.o 00:03:32.065 LIB libspdk_sock_posix.a 00:03:32.065 LIB libspdk_accel_dsa.a 00:03:32.065 LIB libspdk_accel_iaa.a 00:03:32.065 SO libspdk_accel_dsa.so.5.0 00:03:32.065 SO libspdk_sock_posix.so.6.0 00:03:32.065 SO libspdk_accel_iaa.so.3.0 00:03:32.065 LIB libspdk_fsdev_aio.a 00:03:32.065 CC module/bdev/lvol/vbdev_lvol.o 00:03:32.065 SYMLINK libspdk_accel_dsa.so 00:03:32.065 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:32.065 SYMLINK libspdk_accel_iaa.so 00:03:32.065 SO libspdk_fsdev_aio.so.1.0 00:03:32.065 SYMLINK libspdk_sock_posix.so 00:03:32.065 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:32.065 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:32.065 CC module/bdev/error/vbdev_error_rpc.o 00:03:32.065 SYMLINK libspdk_fsdev_aio.so 00:03:32.065 LIB libspdk_bdev_gpt.a 00:03:32.065 CC module/bdev/malloc/bdev_malloc.o 00:03:32.324 LIB libspdk_blobfs_bdev.a 00:03:32.324 SO libspdk_bdev_gpt.so.6.0 00:03:32.324 LIB libspdk_bdev_error.a 00:03:32.324 LIB libspdk_bdev_delay.a 00:03:32.324 SO libspdk_blobfs_bdev.so.6.0 00:03:32.324 CC module/bdev/null/bdev_null.o 00:03:32.324 CC module/bdev/nvme/bdev_nvme.o 00:03:32.324 SO libspdk_bdev_error.so.6.0 00:03:32.324 SO libspdk_bdev_delay.so.6.0 00:03:32.324 SYMLINK libspdk_bdev_gpt.so 00:03:32.324 SYMLINK libspdk_blobfs_bdev.so 00:03:32.324 SYMLINK libspdk_bdev_error.so 00:03:32.324 SYMLINK libspdk_bdev_delay.so 00:03:32.324 CC module/bdev/passthru/vbdev_passthru.o 00:03:32.324 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:32.324 CC module/bdev/raid/bdev_raid.o 00:03:32.325 CC module/bdev/split/vbdev_split.o 00:03:32.325 CC module/bdev/xnvme/bdev_xnvme.o 00:03:32.325 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:32.582 CC module/bdev/null/bdev_null_rpc.o 00:03:32.582 LIB libspdk_bdev_lvol.a 00:03:32.582 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:32.582 SO libspdk_bdev_lvol.so.6.0 00:03:32.582 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:32.582 CC module/bdev/split/vbdev_split_rpc.o 00:03:32.582 SYMLINK libspdk_bdev_lvol.so 00:03:32.582 LIB libspdk_bdev_null.a 00:03:32.582 SO libspdk_bdev_null.so.6.0 00:03:32.582 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:03:32.582 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:32.582 LIB libspdk_bdev_malloc.a 00:03:32.840 SO libspdk_bdev_malloc.so.6.0 00:03:32.840 CC module/bdev/aio/bdev_aio.o 00:03:32.840 LIB libspdk_bdev_passthru.a 00:03:32.840 SYMLINK libspdk_bdev_null.so 00:03:32.840 LIB libspdk_bdev_split.a 00:03:32.840 SYMLINK libspdk_bdev_malloc.so 00:03:32.840 SO libspdk_bdev_passthru.so.6.0 00:03:32.840 SO libspdk_bdev_split.so.6.0 00:03:32.840 CC module/bdev/nvme/nvme_rpc.o 00:03:32.840 LIB libspdk_bdev_zone_block.a 00:03:32.840 LIB libspdk_bdev_xnvme.a 00:03:32.840 SO libspdk_bdev_zone_block.so.6.0 00:03:32.840 SO libspdk_bdev_xnvme.so.3.0 00:03:32.840 SYMLINK libspdk_bdev_passthru.so 00:03:32.840 SYMLINK libspdk_bdev_split.so 00:03:32.840 CC module/bdev/nvme/bdev_mdns_client.o 00:03:32.840 CC module/bdev/ftl/bdev_ftl.o 00:03:32.840 SYMLINK libspdk_bdev_xnvme.so 00:03:32.840 CC module/bdev/nvme/vbdev_opal.o 00:03:32.840 SYMLINK libspdk_bdev_zone_block.so 00:03:32.840 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:33.097 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:33.097 CC module/bdev/iscsi/bdev_iscsi.o 00:03:33.097 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:33.097 CC module/bdev/raid/bdev_raid_rpc.o 00:03:33.097 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:33.097 CC 
module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:33.097 CC module/bdev/aio/bdev_aio_rpc.o 00:03:33.355 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:33.355 LIB libspdk_bdev_ftl.a 00:03:33.355 CC module/bdev/raid/bdev_raid_sb.o 00:03:33.355 CC module/bdev/raid/raid0.o 00:03:33.355 LIB libspdk_bdev_aio.a 00:03:33.355 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:33.355 SO libspdk_bdev_ftl.so.6.0 00:03:33.355 SO libspdk_bdev_aio.so.6.0 00:03:33.355 SYMLINK libspdk_bdev_ftl.so 00:03:33.355 CC module/bdev/raid/raid1.o 00:03:33.355 SYMLINK libspdk_bdev_aio.so 00:03:33.355 CC module/bdev/raid/concat.o 00:03:33.355 LIB libspdk_bdev_iscsi.a 00:03:33.355 SO libspdk_bdev_iscsi.so.6.0 00:03:33.612 SYMLINK libspdk_bdev_iscsi.so 00:03:33.612 LIB libspdk_bdev_virtio.a 00:03:33.612 LIB libspdk_bdev_raid.a 00:03:33.612 SO libspdk_bdev_virtio.so.6.0 00:03:33.612 SO libspdk_bdev_raid.so.6.0 00:03:33.612 SYMLINK libspdk_bdev_virtio.so 00:03:33.612 SYMLINK libspdk_bdev_raid.so 00:03:34.544 LIB libspdk_bdev_nvme.a 00:03:34.544 SO libspdk_bdev_nvme.so.7.0 00:03:34.544 SYMLINK libspdk_bdev_nvme.so 00:03:34.802 CC module/event/subsystems/fsdev/fsdev.o 00:03:34.802 CC module/event/subsystems/sock/sock.o 00:03:34.802 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:34.802 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:34.802 CC module/event/subsystems/iobuf/iobuf.o 00:03:34.802 CC module/event/subsystems/vmd/vmd.o 00:03:34.802 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:34.802 CC module/event/subsystems/scheduler/scheduler.o 00:03:34.802 CC module/event/subsystems/keyring/keyring.o 00:03:35.061 LIB libspdk_event_scheduler.a 00:03:35.061 LIB libspdk_event_fsdev.a 00:03:35.061 LIB libspdk_event_vhost_blk.a 00:03:35.061 LIB libspdk_event_vmd.a 00:03:35.061 LIB libspdk_event_keyring.a 00:03:35.061 SO libspdk_event_scheduler.so.4.0 00:03:35.061 LIB libspdk_event_sock.a 00:03:35.061 SO libspdk_event_fsdev.so.1.0 00:03:35.061 SO libspdk_event_vhost_blk.so.3.0 00:03:35.061 SO libspdk_event_vmd.so.6.0 00:03:35.061 LIB libspdk_event_iobuf.a 00:03:35.061 SO libspdk_event_keyring.so.1.0 00:03:35.061 SO libspdk_event_sock.so.5.0 00:03:35.061 SO libspdk_event_iobuf.so.3.0 00:03:35.061 SYMLINK libspdk_event_vhost_blk.so 00:03:35.061 SYMLINK libspdk_event_scheduler.so 00:03:35.061 SYMLINK libspdk_event_keyring.so 00:03:35.061 SYMLINK libspdk_event_fsdev.so 00:03:35.061 SYMLINK libspdk_event_sock.so 00:03:35.061 SYMLINK libspdk_event_vmd.so 00:03:35.061 SYMLINK libspdk_event_iobuf.so 00:03:35.318 CC module/event/subsystems/accel/accel.o 00:03:35.576 LIB libspdk_event_accel.a 00:03:35.576 SO libspdk_event_accel.so.6.0 00:03:35.576 SYMLINK libspdk_event_accel.so 00:03:35.834 CC module/event/subsystems/bdev/bdev.o 00:03:35.834 LIB libspdk_event_bdev.a 00:03:35.834 SO libspdk_event_bdev.so.6.0 00:03:36.092 SYMLINK libspdk_event_bdev.so 00:03:36.092 CC module/event/subsystems/ublk/ublk.o 00:03:36.092 CC module/event/subsystems/scsi/scsi.o 00:03:36.092 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:36.092 CC module/event/subsystems/nbd/nbd.o 00:03:36.092 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:36.350 LIB libspdk_event_ublk.a 00:03:36.350 LIB libspdk_event_nbd.a 00:03:36.350 SO libspdk_event_ublk.so.3.0 00:03:36.350 LIB libspdk_event_scsi.a 00:03:36.350 SO libspdk_event_nbd.so.6.0 00:03:36.350 SO libspdk_event_scsi.so.6.0 00:03:36.350 SYMLINK libspdk_event_ublk.so 00:03:36.350 SYMLINK libspdk_event_nbd.so 00:03:36.350 SYMLINK libspdk_event_scsi.so 00:03:36.350 LIB libspdk_event_nvmf.a 00:03:36.350 SO 
libspdk_event_nvmf.so.6.0 00:03:36.350 SYMLINK libspdk_event_nvmf.so 00:03:36.607 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:36.607 CC module/event/subsystems/iscsi/iscsi.o 00:03:36.607 LIB libspdk_event_vhost_scsi.a 00:03:36.607 LIB libspdk_event_iscsi.a 00:03:36.607 SO libspdk_event_vhost_scsi.so.3.0 00:03:36.607 SO libspdk_event_iscsi.so.6.0 00:03:36.865 SYMLINK libspdk_event_vhost_scsi.so 00:03:36.865 SYMLINK libspdk_event_iscsi.so 00:03:36.865 SO libspdk.so.6.0 00:03:36.865 SYMLINK libspdk.so 00:03:37.122 CXX app/trace/trace.o 00:03:37.122 CC app/trace_record/trace_record.o 00:03:37.122 CC app/iscsi_tgt/iscsi_tgt.o 00:03:37.122 CC examples/ioat/perf/perf.o 00:03:37.122 CC app/nvmf_tgt/nvmf_main.o 00:03:37.122 CC test/thread/poller_perf/poller_perf.o 00:03:37.122 CC examples/util/zipf/zipf.o 00:03:37.122 CC app/spdk_tgt/spdk_tgt.o 00:03:37.122 CC test/dma/test_dma/test_dma.o 00:03:37.122 CC test/app/bdev_svc/bdev_svc.o 00:03:37.380 LINK poller_perf 00:03:37.380 LINK nvmf_tgt 00:03:37.380 LINK zipf 00:03:37.380 LINK iscsi_tgt 00:03:37.380 LINK spdk_trace_record 00:03:37.380 LINK ioat_perf 00:03:37.380 LINK spdk_tgt 00:03:37.380 LINK bdev_svc 00:03:37.380 LINK spdk_trace 00:03:37.380 TEST_HEADER include/spdk/accel.h 00:03:37.380 TEST_HEADER include/spdk/accel_module.h 00:03:37.380 TEST_HEADER include/spdk/assert.h 00:03:37.380 TEST_HEADER include/spdk/barrier.h 00:03:37.380 TEST_HEADER include/spdk/base64.h 00:03:37.380 TEST_HEADER include/spdk/bdev.h 00:03:37.380 TEST_HEADER include/spdk/bdev_module.h 00:03:37.380 TEST_HEADER include/spdk/bdev_zone.h 00:03:37.380 TEST_HEADER include/spdk/bit_array.h 00:03:37.380 TEST_HEADER include/spdk/bit_pool.h 00:03:37.380 TEST_HEADER include/spdk/blob_bdev.h 00:03:37.380 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:37.380 TEST_HEADER include/spdk/blobfs.h 00:03:37.380 CC app/spdk_lspci/spdk_lspci.o 00:03:37.380 TEST_HEADER include/spdk/blob.h 00:03:37.380 TEST_HEADER include/spdk/conf.h 00:03:37.380 TEST_HEADER include/spdk/config.h 00:03:37.380 TEST_HEADER include/spdk/cpuset.h 00:03:37.380 CC examples/ioat/verify/verify.o 00:03:37.380 CC test/app/histogram_perf/histogram_perf.o 00:03:37.638 TEST_HEADER include/spdk/crc16.h 00:03:37.638 TEST_HEADER include/spdk/crc32.h 00:03:37.638 TEST_HEADER include/spdk/crc64.h 00:03:37.638 TEST_HEADER include/spdk/dif.h 00:03:37.638 TEST_HEADER include/spdk/dma.h 00:03:37.638 TEST_HEADER include/spdk/endian.h 00:03:37.638 TEST_HEADER include/spdk/env_dpdk.h 00:03:37.638 TEST_HEADER include/spdk/env.h 00:03:37.638 TEST_HEADER include/spdk/event.h 00:03:37.638 TEST_HEADER include/spdk/fd_group.h 00:03:37.638 TEST_HEADER include/spdk/fd.h 00:03:37.638 TEST_HEADER include/spdk/file.h 00:03:37.638 TEST_HEADER include/spdk/fsdev.h 00:03:37.638 TEST_HEADER include/spdk/fsdev_module.h 00:03:37.638 TEST_HEADER include/spdk/ftl.h 00:03:37.638 TEST_HEADER include/spdk/fuse_dispatcher.h 00:03:37.638 TEST_HEADER include/spdk/gpt_spec.h 00:03:37.638 TEST_HEADER include/spdk/hexlify.h 00:03:37.638 TEST_HEADER include/spdk/histogram_data.h 00:03:37.638 CC test/app/jsoncat/jsoncat.o 00:03:37.638 TEST_HEADER include/spdk/idxd.h 00:03:37.638 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:37.638 TEST_HEADER include/spdk/idxd_spec.h 00:03:37.638 TEST_HEADER include/spdk/init.h 00:03:37.638 TEST_HEADER include/spdk/ioat.h 00:03:37.638 TEST_HEADER include/spdk/ioat_spec.h 00:03:37.638 TEST_HEADER include/spdk/iscsi_spec.h 00:03:37.638 TEST_HEADER include/spdk/json.h 00:03:37.638 TEST_HEADER include/spdk/jsonrpc.h 
00:03:37.638 TEST_HEADER include/spdk/keyring.h 00:03:37.638 TEST_HEADER include/spdk/keyring_module.h 00:03:37.638 TEST_HEADER include/spdk/likely.h 00:03:37.638 TEST_HEADER include/spdk/log.h 00:03:37.638 TEST_HEADER include/spdk/lvol.h 00:03:37.638 TEST_HEADER include/spdk/md5.h 00:03:37.638 TEST_HEADER include/spdk/memory.h 00:03:37.638 TEST_HEADER include/spdk/mmio.h 00:03:37.638 TEST_HEADER include/spdk/nbd.h 00:03:37.638 TEST_HEADER include/spdk/net.h 00:03:37.638 TEST_HEADER include/spdk/notify.h 00:03:37.638 TEST_HEADER include/spdk/nvme.h 00:03:37.638 TEST_HEADER include/spdk/nvme_intel.h 00:03:37.638 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:37.638 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:37.638 TEST_HEADER include/spdk/nvme_spec.h 00:03:37.638 TEST_HEADER include/spdk/nvme_zns.h 00:03:37.638 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:37.638 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:37.638 TEST_HEADER include/spdk/nvmf.h 00:03:37.638 TEST_HEADER include/spdk/nvmf_spec.h 00:03:37.638 CC test/app/stub/stub.o 00:03:37.638 TEST_HEADER include/spdk/nvmf_transport.h 00:03:37.638 TEST_HEADER include/spdk/opal.h 00:03:37.638 TEST_HEADER include/spdk/opal_spec.h 00:03:37.638 TEST_HEADER include/spdk/pci_ids.h 00:03:37.638 TEST_HEADER include/spdk/pipe.h 00:03:37.638 TEST_HEADER include/spdk/queue.h 00:03:37.638 TEST_HEADER include/spdk/reduce.h 00:03:37.638 TEST_HEADER include/spdk/rpc.h 00:03:37.638 TEST_HEADER include/spdk/scheduler.h 00:03:37.638 TEST_HEADER include/spdk/scsi.h 00:03:37.638 TEST_HEADER include/spdk/scsi_spec.h 00:03:37.638 TEST_HEADER include/spdk/sock.h 00:03:37.638 TEST_HEADER include/spdk/stdinc.h 00:03:37.638 TEST_HEADER include/spdk/string.h 00:03:37.638 LINK test_dma 00:03:37.638 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:37.638 TEST_HEADER include/spdk/thread.h 00:03:37.638 TEST_HEADER include/spdk/trace.h 00:03:37.638 TEST_HEADER include/spdk/trace_parser.h 00:03:37.638 TEST_HEADER include/spdk/tree.h 00:03:37.638 LINK spdk_lspci 00:03:37.638 TEST_HEADER include/spdk/ublk.h 00:03:37.638 TEST_HEADER include/spdk/util.h 00:03:37.638 TEST_HEADER include/spdk/uuid.h 00:03:37.638 TEST_HEADER include/spdk/version.h 00:03:37.638 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:37.638 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:37.638 TEST_HEADER include/spdk/vhost.h 00:03:37.638 TEST_HEADER include/spdk/vmd.h 00:03:37.638 TEST_HEADER include/spdk/xor.h 00:03:37.638 TEST_HEADER include/spdk/zipf.h 00:03:37.638 CXX test/cpp_headers/accel.o 00:03:37.638 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:37.638 LINK histogram_perf 00:03:37.638 LINK jsoncat 00:03:37.638 LINK verify 00:03:37.638 LINK stub 00:03:37.896 CXX test/cpp_headers/accel_module.o 00:03:37.896 LINK interrupt_tgt 00:03:37.896 CC app/spdk_nvme_perf/perf.o 00:03:37.896 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:37.896 CXX test/cpp_headers/assert.o 00:03:37.896 CC examples/thread/thread/thread_ex.o 00:03:37.896 CC test/event/reactor/reactor.o 00:03:37.896 CC test/event/event_perf/event_perf.o 00:03:37.896 LINK nvme_fuzz 00:03:37.896 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:37.896 CC test/env/mem_callbacks/mem_callbacks.o 00:03:37.896 CC test/event/reactor_perf/reactor_perf.o 00:03:38.153 CXX test/cpp_headers/barrier.o 00:03:38.153 LINK reactor 00:03:38.153 LINK event_perf 00:03:38.153 CXX test/cpp_headers/base64.o 00:03:38.153 LINK reactor_perf 00:03:38.153 LINK thread 00:03:38.153 CXX test/cpp_headers/bdev.o 00:03:38.153 CC test/env/vtophys/vtophys.o 00:03:38.411 CC 
examples/sock/hello_world/hello_sock.o 00:03:38.411 CC test/event/app_repeat/app_repeat.o 00:03:38.411 CC examples/vmd/lsvmd/lsvmd.o 00:03:38.411 LINK vhost_fuzz 00:03:38.411 LINK vtophys 00:03:38.411 CXX test/cpp_headers/bdev_module.o 00:03:38.411 CC test/event/scheduler/scheduler.o 00:03:38.411 LINK lsvmd 00:03:38.411 LINK mem_callbacks 00:03:38.411 LINK app_repeat 00:03:38.668 LINK hello_sock 00:03:38.668 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:38.668 CXX test/cpp_headers/bdev_zone.o 00:03:38.668 CXX test/cpp_headers/bit_array.o 00:03:38.668 CC test/nvme/aer/aer.o 00:03:38.668 LINK spdk_nvme_perf 00:03:38.668 CC examples/vmd/led/led.o 00:03:38.668 LINK scheduler 00:03:38.668 CC test/nvme/reset/reset.o 00:03:38.668 CXX test/cpp_headers/bit_pool.o 00:03:38.668 LINK env_dpdk_post_init 00:03:38.926 CC test/nvme/sgl/sgl.o 00:03:38.926 LINK led 00:03:38.926 CC test/nvme/e2edp/nvme_dp.o 00:03:38.926 CXX test/cpp_headers/blob_bdev.o 00:03:38.926 CC app/spdk_nvme_identify/identify.o 00:03:38.926 LINK aer 00:03:38.926 CC app/spdk_nvme_discover/discovery_aer.o 00:03:38.926 LINK reset 00:03:38.926 CC test/env/memory/memory_ut.o 00:03:39.184 CXX test/cpp_headers/blobfs_bdev.o 00:03:39.184 LINK sgl 00:03:39.184 LINK nvme_dp 00:03:39.184 CC examples/idxd/perf/perf.o 00:03:39.184 CXX test/cpp_headers/blobfs.o 00:03:39.184 CC test/nvme/overhead/overhead.o 00:03:39.184 LINK spdk_nvme_discover 00:03:39.184 CXX test/cpp_headers/blob.o 00:03:39.441 LINK iscsi_fuzz 00:03:39.441 CXX test/cpp_headers/conf.o 00:03:39.441 CC examples/fsdev/hello_world/hello_fsdev.o 00:03:39.441 CC examples/accel/perf/accel_perf.o 00:03:39.441 LINK overhead 00:03:39.441 CC examples/nvme/hello_world/hello_world.o 00:03:39.441 CC examples/blob/hello_world/hello_blob.o 00:03:39.441 LINK idxd_perf 00:03:39.441 CXX test/cpp_headers/config.o 00:03:39.441 CXX test/cpp_headers/cpuset.o 00:03:39.699 CC test/rpc_client/rpc_client_test.o 00:03:39.699 CC test/nvme/err_injection/err_injection.o 00:03:39.699 LINK hello_world 00:03:39.699 LINK hello_fsdev 00:03:39.699 CXX test/cpp_headers/crc16.o 00:03:39.699 CC test/nvme/startup/startup.o 00:03:39.699 LINK hello_blob 00:03:39.699 LINK spdk_nvme_identify 00:03:39.699 LINK rpc_client_test 00:03:39.699 LINK err_injection 00:03:39.699 CXX test/cpp_headers/crc32.o 00:03:39.957 LINK startup 00:03:39.957 CC examples/nvme/reconnect/reconnect.o 00:03:39.957 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:39.957 CC app/spdk_top/spdk_top.o 00:03:39.957 LINK accel_perf 00:03:39.957 CXX test/cpp_headers/crc64.o 00:03:39.957 CC examples/blob/cli/blobcli.o 00:03:39.957 CC test/nvme/reserve/reserve.o 00:03:39.957 CXX test/cpp_headers/dif.o 00:03:39.957 CC test/accel/dif/dif.o 00:03:40.215 LINK memory_ut 00:03:40.215 CC test/blobfs/mkfs/mkfs.o 00:03:40.215 LINK reconnect 00:03:40.215 CXX test/cpp_headers/dma.o 00:03:40.215 LINK reserve 00:03:40.215 LINK mkfs 00:03:40.215 CXX test/cpp_headers/endian.o 00:03:40.215 CC test/env/pci/pci_ut.o 00:03:40.215 CC test/lvol/esnap/esnap.o 00:03:40.474 LINK nvme_manage 00:03:40.474 CXX test/cpp_headers/env_dpdk.o 00:03:40.474 LINK blobcli 00:03:40.474 CC examples/nvme/arbitration/arbitration.o 00:03:40.474 CC test/nvme/simple_copy/simple_copy.o 00:03:40.474 CC examples/nvme/hotplug/hotplug.o 00:03:40.474 CXX test/cpp_headers/env.o 00:03:40.732 CC test/nvme/connect_stress/connect_stress.o 00:03:40.732 CXX test/cpp_headers/event.o 00:03:40.732 LINK simple_copy 00:03:40.732 CC examples/bdev/hello_world/hello_bdev.o 00:03:40.732 LINK pci_ut 00:03:40.732 LINK 
hotplug 00:03:40.732 LINK arbitration 00:03:40.732 LINK dif 00:03:40.732 CXX test/cpp_headers/fd_group.o 00:03:40.732 LINK connect_stress 00:03:40.732 LINK spdk_top 00:03:40.990 CC test/nvme/boot_partition/boot_partition.o 00:03:40.990 LINK hello_bdev 00:03:40.990 CC test/nvme/compliance/nvme_compliance.o 00:03:40.990 CXX test/cpp_headers/fd.o 00:03:40.990 CXX test/cpp_headers/file.o 00:03:40.990 CC examples/nvme/abort/abort.o 00:03:40.990 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:40.990 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:40.990 LINK boot_partition 00:03:40.990 CC app/vhost/vhost.o 00:03:41.248 CXX test/cpp_headers/fsdev.o 00:03:41.248 LINK pmr_persistence 00:03:41.248 LINK cmb_copy 00:03:41.248 CC examples/bdev/bdevperf/bdevperf.o 00:03:41.248 CXX test/cpp_headers/fsdev_module.o 00:03:41.248 LINK vhost 00:03:41.248 LINK nvme_compliance 00:03:41.248 CXX test/cpp_headers/ftl.o 00:03:41.248 CC test/bdev/bdevio/bdevio.o 00:03:41.248 LINK abort 00:03:41.248 CC test/nvme/fused_ordering/fused_ordering.o 00:03:41.506 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:41.506 CC app/spdk_dd/spdk_dd.o 00:03:41.506 CXX test/cpp_headers/fuse_dispatcher.o 00:03:41.506 CXX test/cpp_headers/gpt_spec.o 00:03:41.506 CC test/nvme/fdp/fdp.o 00:03:41.506 CC test/nvme/cuse/cuse.o 00:03:41.506 LINK fused_ordering 00:03:41.506 LINK doorbell_aers 00:03:41.506 CXX test/cpp_headers/hexlify.o 00:03:41.506 CXX test/cpp_headers/histogram_data.o 00:03:41.764 LINK bdevio 00:03:41.764 CXX test/cpp_headers/idxd.o 00:03:41.764 LINK spdk_dd 00:03:41.764 CXX test/cpp_headers/idxd_spec.o 00:03:41.764 CXX test/cpp_headers/init.o 00:03:41.764 LINK fdp 00:03:41.764 CXX test/cpp_headers/ioat.o 00:03:41.764 CC app/fio/nvme/fio_plugin.o 00:03:41.764 CXX test/cpp_headers/ioat_spec.o 00:03:41.764 CXX test/cpp_headers/iscsi_spec.o 00:03:42.022 CXX test/cpp_headers/json.o 00:03:42.022 CXX test/cpp_headers/jsonrpc.o 00:03:42.022 CXX test/cpp_headers/keyring.o 00:03:42.022 CXX test/cpp_headers/keyring_module.o 00:03:42.022 CXX test/cpp_headers/likely.o 00:03:42.022 LINK bdevperf 00:03:42.022 CC app/fio/bdev/fio_plugin.o 00:03:42.022 CXX test/cpp_headers/log.o 00:03:42.022 CXX test/cpp_headers/lvol.o 00:03:42.022 CXX test/cpp_headers/md5.o 00:03:42.022 CXX test/cpp_headers/memory.o 00:03:42.022 CXX test/cpp_headers/mmio.o 00:03:42.280 CXX test/cpp_headers/nbd.o 00:03:42.280 CXX test/cpp_headers/net.o 00:03:42.280 CXX test/cpp_headers/notify.o 00:03:42.280 CXX test/cpp_headers/nvme.o 00:03:42.280 CXX test/cpp_headers/nvme_intel.o 00:03:42.280 CXX test/cpp_headers/nvme_ocssd.o 00:03:42.280 CC examples/nvmf/nvmf/nvmf.o 00:03:42.280 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:42.280 CXX test/cpp_headers/nvme_spec.o 00:03:42.280 CXX test/cpp_headers/nvme_zns.o 00:03:42.280 LINK spdk_nvme 00:03:42.280 CXX test/cpp_headers/nvmf_cmd.o 00:03:42.538 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:42.538 CXX test/cpp_headers/nvmf.o 00:03:42.538 LINK spdk_bdev 00:03:42.538 CXX test/cpp_headers/nvmf_spec.o 00:03:42.538 CXX test/cpp_headers/nvmf_transport.o 00:03:42.538 CXX test/cpp_headers/opal.o 00:03:42.538 CXX test/cpp_headers/opal_spec.o 00:03:42.538 CXX test/cpp_headers/pci_ids.o 00:03:42.538 LINK nvmf 00:03:42.538 CXX test/cpp_headers/pipe.o 00:03:42.538 CXX test/cpp_headers/queue.o 00:03:42.538 CXX test/cpp_headers/reduce.o 00:03:42.538 CXX test/cpp_headers/rpc.o 00:03:42.538 CXX test/cpp_headers/scheduler.o 00:03:42.796 CXX test/cpp_headers/scsi.o 00:03:42.796 CXX test/cpp_headers/scsi_spec.o 00:03:42.796 CXX 
test/cpp_headers/sock.o 00:03:42.796 LINK cuse 00:03:42.796 CXX test/cpp_headers/stdinc.o 00:03:42.796 CXX test/cpp_headers/string.o 00:03:42.796 CXX test/cpp_headers/thread.o 00:03:42.796 CXX test/cpp_headers/trace.o 00:03:42.796 CXX test/cpp_headers/trace_parser.o 00:03:42.796 CXX test/cpp_headers/tree.o 00:03:42.796 CXX test/cpp_headers/ublk.o 00:03:42.796 CXX test/cpp_headers/util.o 00:03:42.796 CXX test/cpp_headers/uuid.o 00:03:42.796 CXX test/cpp_headers/version.o 00:03:42.796 CXX test/cpp_headers/vfio_user_pci.o 00:03:42.796 CXX test/cpp_headers/vfio_user_spec.o 00:03:42.796 CXX test/cpp_headers/vhost.o 00:03:42.797 CXX test/cpp_headers/vmd.o 00:03:42.797 CXX test/cpp_headers/xor.o 00:03:42.797 CXX test/cpp_headers/zipf.o 00:03:45.359 LINK esnap 00:03:45.359 00:03:45.359 real 1m6.591s 00:03:45.359 user 6m9.024s 00:03:45.359 sys 1m6.645s 00:03:45.359 13:37:59 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:03:45.359 ************************************ 00:03:45.359 END TEST make 00:03:45.359 13:37:59 make -- common/autotest_common.sh@10 -- $ set +x 00:03:45.359 ************************************ 00:03:45.359 13:37:59 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:45.359 13:37:59 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:45.359 13:37:59 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:45.359 13:37:59 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:45.359 13:37:59 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:03:45.359 13:37:59 -- pm/common@44 -- $ pid=5074 00:03:45.359 13:37:59 -- pm/common@50 -- $ kill -TERM 5074 00:03:45.359 13:37:59 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:45.359 13:37:59 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:03:45.359 13:37:59 -- pm/common@44 -- $ pid=5075 00:03:45.359 13:37:59 -- pm/common@50 -- $ kill -TERM 5075 00:03:45.359 13:37:59 -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:03:45.359 13:37:59 -- common/autotest_common.sh@1691 -- # lcov --version 00:03:45.359 13:37:59 -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:03:45.618 13:37:59 -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:03:45.618 13:37:59 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:45.618 13:37:59 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:45.618 13:37:59 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:45.618 13:37:59 -- scripts/common.sh@336 -- # IFS=.-: 00:03:45.618 13:37:59 -- scripts/common.sh@336 -- # read -ra ver1 00:03:45.618 13:37:59 -- scripts/common.sh@337 -- # IFS=.-: 00:03:45.618 13:37:59 -- scripts/common.sh@337 -- # read -ra ver2 00:03:45.618 13:37:59 -- scripts/common.sh@338 -- # local 'op=<' 00:03:45.618 13:37:59 -- scripts/common.sh@340 -- # ver1_l=2 00:03:45.618 13:37:59 -- scripts/common.sh@341 -- # ver2_l=1 00:03:45.618 13:37:59 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:45.618 13:37:59 -- scripts/common.sh@344 -- # case "$op" in 00:03:45.618 13:37:59 -- scripts/common.sh@345 -- # : 1 00:03:45.618 13:37:59 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:45.618 13:37:59 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:45.618 13:37:59 -- scripts/common.sh@365 -- # decimal 1 00:03:45.618 13:37:59 -- scripts/common.sh@353 -- # local d=1 00:03:45.618 13:37:59 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:45.618 13:37:59 -- scripts/common.sh@355 -- # echo 1 00:03:45.618 13:37:59 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:45.618 13:37:59 -- scripts/common.sh@366 -- # decimal 2 00:03:45.618 13:37:59 -- scripts/common.sh@353 -- # local d=2 00:03:45.618 13:37:59 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:45.618 13:37:59 -- scripts/common.sh@355 -- # echo 2 00:03:45.618 13:37:59 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:45.618 13:37:59 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:45.618 13:37:59 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:45.618 13:37:59 -- scripts/common.sh@368 -- # return 0 00:03:45.618 13:37:59 -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:45.618 13:37:59 -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:03:45.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:45.618 --rc genhtml_branch_coverage=1 00:03:45.618 --rc genhtml_function_coverage=1 00:03:45.618 --rc genhtml_legend=1 00:03:45.618 --rc geninfo_all_blocks=1 00:03:45.618 --rc geninfo_unexecuted_blocks=1 00:03:45.618 00:03:45.618 ' 00:03:45.618 13:37:59 -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:03:45.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:45.618 --rc genhtml_branch_coverage=1 00:03:45.618 --rc genhtml_function_coverage=1 00:03:45.618 --rc genhtml_legend=1 00:03:45.618 --rc geninfo_all_blocks=1 00:03:45.618 --rc geninfo_unexecuted_blocks=1 00:03:45.618 00:03:45.618 ' 00:03:45.618 13:37:59 -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:03:45.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:45.618 --rc genhtml_branch_coverage=1 00:03:45.618 --rc genhtml_function_coverage=1 00:03:45.618 --rc genhtml_legend=1 00:03:45.618 --rc geninfo_all_blocks=1 00:03:45.618 --rc geninfo_unexecuted_blocks=1 00:03:45.618 00:03:45.618 ' 00:03:45.618 13:37:59 -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:03:45.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:45.618 --rc genhtml_branch_coverage=1 00:03:45.618 --rc genhtml_function_coverage=1 00:03:45.618 --rc genhtml_legend=1 00:03:45.618 --rc geninfo_all_blocks=1 00:03:45.618 --rc geninfo_unexecuted_blocks=1 00:03:45.618 00:03:45.618 ' 00:03:45.618 13:37:59 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:45.618 13:37:59 -- nvmf/common.sh@7 -- # uname -s 00:03:45.618 13:37:59 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:45.618 13:37:59 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:45.618 13:37:59 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:45.618 13:37:59 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:45.618 13:37:59 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:45.618 13:37:59 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:45.618 13:37:59 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:45.618 13:37:59 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:45.618 13:37:59 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:45.618 13:37:59 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:45.618 13:37:59 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5cc7ab2e-d7fb-4e4a-87be-c6e45d97844f 00:03:45.618 
13:37:59 -- nvmf/common.sh@18 -- # NVME_HOSTID=5cc7ab2e-d7fb-4e4a-87be-c6e45d97844f 00:03:45.618 13:37:59 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:45.618 13:37:59 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:45.618 13:37:59 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:45.618 13:37:59 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:45.618 13:37:59 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:45.618 13:37:59 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:45.618 13:37:59 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:45.618 13:37:59 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:45.618 13:37:59 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:45.618 13:37:59 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:45.618 13:37:59 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:45.618 13:37:59 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:45.618 13:37:59 -- paths/export.sh@5 -- # export PATH 00:03:45.618 13:37:59 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:45.618 13:37:59 -- nvmf/common.sh@51 -- # : 0 00:03:45.618 13:37:59 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:45.618 13:37:59 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:45.618 13:37:59 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:45.618 13:37:59 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:45.618 13:37:59 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:45.618 13:37:59 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:45.618 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:45.618 13:37:59 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:45.618 13:37:59 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:45.618 13:37:59 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:45.618 13:37:59 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:45.618 13:37:59 -- spdk/autotest.sh@32 -- # uname -s 00:03:45.618 13:37:59 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:45.618 13:37:59 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:45.618 13:37:59 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:45.618 13:37:59 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:45.618 13:37:59 -- 
spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:45.618 13:37:59 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:45.618 13:37:59 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:45.618 13:37:59 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:45.618 13:37:59 -- spdk/autotest.sh@48 -- # udevadm_pid=54240 00:03:45.618 13:37:59 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:45.618 13:37:59 -- pm/common@17 -- # local monitor 00:03:45.618 13:37:59 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:45.618 13:37:59 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:45.618 13:37:59 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:45.618 13:37:59 -- pm/common@25 -- # sleep 1 00:03:45.618 13:37:59 -- pm/common@21 -- # date +%s 00:03:45.618 13:37:59 -- pm/common@21 -- # date +%s 00:03:45.618 13:37:59 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1728999479 00:03:45.618 13:37:59 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1728999479 00:03:45.618 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1728999479_collect-vmstat.pm.log 00:03:45.618 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1728999479_collect-cpu-load.pm.log 00:03:46.553 13:38:00 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:46.553 13:38:00 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:46.553 13:38:00 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:46.553 13:38:00 -- common/autotest_common.sh@10 -- # set +x 00:03:46.553 13:38:00 -- spdk/autotest.sh@59 -- # create_test_list 00:03:46.553 13:38:00 -- common/autotest_common.sh@748 -- # xtrace_disable 00:03:46.553 13:38:00 -- common/autotest_common.sh@10 -- # set +x 00:03:46.553 13:38:00 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:46.553 13:38:00 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:46.553 13:38:00 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:03:46.553 13:38:00 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:46.553 13:38:00 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:03:46.553 13:38:00 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:46.553 13:38:00 -- common/autotest_common.sh@1455 -- # uname 00:03:46.553 13:38:00 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:03:46.553 13:38:00 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:46.553 13:38:00 -- common/autotest_common.sh@1475 -- # uname 00:03:46.553 13:38:00 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:03:46.553 13:38:00 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:46.553 13:38:00 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:46.811 lcov: LCOV version 1.15 00:03:46.811 13:38:00 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc 
geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:04:01.688 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:01.688 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:04:13.883 13:38:27 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:04:13.883 13:38:27 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:13.883 13:38:27 -- common/autotest_common.sh@10 -- # set +x 00:04:13.883 13:38:27 -- spdk/autotest.sh@78 -- # rm -f 00:04:13.883 13:38:27 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:14.143 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:14.713 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:04:14.713 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:04:14.713 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:04:14.713 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:04:14.713 13:38:28 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:04:14.713 13:38:28 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:04:14.713 13:38:28 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:04:14.713 13:38:28 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:04:14.713 13:38:28 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:14.713 13:38:28 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:04:14.713 13:38:28 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:04:14.713 13:38:28 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:14.713 13:38:28 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:14.713 13:38:28 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:14.713 13:38:28 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:04:14.713 13:38:28 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:04:14.713 13:38:28 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:14.713 13:38:28 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:14.713 13:38:28 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:14.713 13:38:28 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n1 00:04:14.713 13:38:28 -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:04:14.713 13:38:28 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:04:14.713 13:38:28 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:14.713 13:38:28 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:14.713 13:38:28 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n2 00:04:14.713 13:38:28 -- common/autotest_common.sh@1648 -- # local device=nvme2n2 00:04:14.713 13:38:28 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:04:14.713 13:38:28 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:14.713 13:38:28 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:14.713 13:38:28 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n3 00:04:14.713 13:38:28 -- common/autotest_common.sh@1648 -- # local device=nvme2n3 00:04:14.713 13:38:28 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:04:14.713 13:38:28 
-- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:14.713 13:38:28 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:14.713 13:38:28 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3c3n1 00:04:14.713 13:38:28 -- common/autotest_common.sh@1648 -- # local device=nvme3c3n1 00:04:14.713 13:38:28 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:04:14.713 13:38:28 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:14.713 13:38:28 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:14.713 13:38:28 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n1 00:04:14.713 13:38:28 -- common/autotest_common.sh@1648 -- # local device=nvme3n1 00:04:14.713 13:38:28 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:04:14.713 13:38:28 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:14.713 13:38:28 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:04:14.713 13:38:28 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:14.713 13:38:28 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:14.713 13:38:28 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:04:14.713 13:38:28 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:04:14.713 13:38:28 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:14.713 No valid GPT data, bailing 00:04:14.713 13:38:28 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:14.713 13:38:28 -- scripts/common.sh@394 -- # pt= 00:04:14.713 13:38:28 -- scripts/common.sh@395 -- # return 1 00:04:14.713 13:38:28 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:14.713 1+0 records in 00:04:14.713 1+0 records out 00:04:14.713 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0117685 s, 89.1 MB/s 00:04:14.713 13:38:28 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:14.713 13:38:28 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:14.713 13:38:28 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:04:14.713 13:38:28 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:04:14.713 13:38:28 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:04:14.713 No valid GPT data, bailing 00:04:14.713 13:38:28 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:14.713 13:38:28 -- scripts/common.sh@394 -- # pt= 00:04:14.713 13:38:28 -- scripts/common.sh@395 -- # return 1 00:04:14.713 13:38:28 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:04:14.713 1+0 records in 00:04:14.713 1+0 records out 00:04:14.713 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00476694 s, 220 MB/s 00:04:14.713 13:38:28 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:14.713 13:38:28 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:14.713 13:38:28 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n1 00:04:14.713 13:38:28 -- scripts/common.sh@381 -- # local block=/dev/nvme2n1 pt 00:04:14.713 13:38:28 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:04:14.974 No valid GPT data, bailing 00:04:14.975 13:38:28 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:04:14.975 13:38:28 -- scripts/common.sh@394 -- # pt= 00:04:14.975 13:38:28 -- scripts/common.sh@395 -- # return 1 00:04:14.975 13:38:28 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:04:14.975 1+0 
records in 00:04:14.975 1+0 records out 00:04:14.975 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00484151 s, 217 MB/s 00:04:14.975 13:38:28 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:14.975 13:38:28 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:14.975 13:38:28 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n2 00:04:14.975 13:38:28 -- scripts/common.sh@381 -- # local block=/dev/nvme2n2 pt 00:04:14.975 13:38:28 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n2 00:04:14.975 No valid GPT data, bailing 00:04:14.975 13:38:28 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:04:14.975 13:38:28 -- scripts/common.sh@394 -- # pt= 00:04:14.975 13:38:28 -- scripts/common.sh@395 -- # return 1 00:04:14.975 13:38:28 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n2 bs=1M count=1 00:04:14.975 1+0 records in 00:04:14.975 1+0 records out 00:04:14.975 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00398684 s, 263 MB/s 00:04:14.975 13:38:28 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:14.975 13:38:28 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:14.975 13:38:28 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n3 00:04:14.975 13:38:28 -- scripts/common.sh@381 -- # local block=/dev/nvme2n3 pt 00:04:14.975 13:38:28 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n3 00:04:14.975 No valid GPT data, bailing 00:04:14.975 13:38:28 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:04:14.975 13:38:28 -- scripts/common.sh@394 -- # pt= 00:04:14.975 13:38:28 -- scripts/common.sh@395 -- # return 1 00:04:14.975 13:38:28 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n3 bs=1M count=1 00:04:14.975 1+0 records in 00:04:14.975 1+0 records out 00:04:14.975 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00377701 s, 278 MB/s 00:04:14.975 13:38:28 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:14.975 13:38:28 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:14.975 13:38:28 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n1 00:04:14.975 13:38:28 -- scripts/common.sh@381 -- # local block=/dev/nvme3n1 pt 00:04:14.975 13:38:28 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:04:14.975 No valid GPT data, bailing 00:04:14.975 13:38:28 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:04:14.975 13:38:28 -- scripts/common.sh@394 -- # pt= 00:04:14.975 13:38:28 -- scripts/common.sh@395 -- # return 1 00:04:14.975 13:38:28 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:04:14.975 1+0 records in 00:04:14.975 1+0 records out 00:04:14.975 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0041488 s, 253 MB/s 00:04:14.975 13:38:28 -- spdk/autotest.sh@105 -- # sync 00:04:14.975 13:38:28 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:14.975 13:38:28 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:14.975 13:38:28 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:16.886 13:38:30 -- spdk/autotest.sh@111 -- # uname -s 00:04:16.886 13:38:30 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:04:16.886 13:38:30 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:04:16.886 13:38:30 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:17.144 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:17.404 
Hugepages 00:04:17.404 node hugesize free / total 00:04:17.404 node0 1048576kB 0 / 0 00:04:17.404 node0 2048kB 0 / 0 00:04:17.404 00:04:17.404 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:17.404 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:17.666 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:04:17.666 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:04:17.666 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:04:17.666 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:04:17.666 13:38:31 -- spdk/autotest.sh@117 -- # uname -s 00:04:17.666 13:38:31 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:04:17.666 13:38:31 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:04:17.666 13:38:31 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:18.237 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:18.810 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:18.810 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:04:18.810 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:18.810 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:04:18.810 13:38:32 -- common/autotest_common.sh@1515 -- # sleep 1 00:04:19.753 13:38:33 -- common/autotest_common.sh@1516 -- # bdfs=() 00:04:19.753 13:38:33 -- common/autotest_common.sh@1516 -- # local bdfs 00:04:19.753 13:38:33 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:04:19.753 13:38:33 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:04:19.753 13:38:33 -- common/autotest_common.sh@1496 -- # bdfs=() 00:04:19.753 13:38:33 -- common/autotest_common.sh@1496 -- # local bdfs 00:04:19.753 13:38:33 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:19.753 13:38:33 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:19.753 13:38:33 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:04:20.013 13:38:33 -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:04:20.013 13:38:33 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:04:20.013 13:38:33 -- common/autotest_common.sh@1520 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:20.273 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:20.273 Waiting for block devices as requested 00:04:20.533 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:04:20.533 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:04:20.533 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:04:20.533 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:04:25.825 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:04:25.825 13:38:39 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:04:25.825 13:38:39 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:04:25.825 13:38:39 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:25.825 13:38:39 -- common/autotest_common.sh@1485 -- # grep 0000:00:10.0/nvme/nvme 00:04:25.825 13:38:39 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:25.825 13:38:39 -- common/autotest_common.sh@1486 -- # 
[[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:04:25.825 13:38:39 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:25.825 13:38:39 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme1 00:04:25.825 13:38:39 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme1 00:04:25.825 13:38:39 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme1 ]] 00:04:25.825 13:38:39 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme1 00:04:25.825 13:38:39 -- common/autotest_common.sh@1529 -- # grep oacs 00:04:25.825 13:38:39 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:04:25.825 13:38:39 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:04:25.825 13:38:39 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:04:25.825 13:38:39 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:04:25.825 13:38:39 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme1 00:04:25.825 13:38:39 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:04:25.825 13:38:39 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:04:25.825 13:38:39 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:04:25.825 13:38:39 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:04:25.825 13:38:39 -- common/autotest_common.sh@1541 -- # continue 00:04:25.825 13:38:39 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:04:25.825 13:38:39 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:04:25.825 13:38:39 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:25.825 13:38:39 -- common/autotest_common.sh@1485 -- # grep 0000:00:11.0/nvme/nvme 00:04:25.825 13:38:39 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:25.825 13:38:39 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:04:25.825 13:38:39 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:25.825 13:38:39 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:04:25.825 13:38:39 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:04:25.825 13:38:39 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:04:25.825 13:38:39 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:04:25.825 13:38:39 -- common/autotest_common.sh@1529 -- # grep oacs 00:04:25.825 13:38:39 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:04:25.825 13:38:39 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:04:25.825 13:38:39 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:04:25.825 13:38:39 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:04:25.825 13:38:39 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:04:25.825 13:38:39 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:04:25.825 13:38:39 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:04:25.825 13:38:39 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:04:25.825 13:38:39 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:04:25.825 13:38:39 -- common/autotest_common.sh@1541 -- # continue 00:04:25.825 13:38:39 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:04:25.825 13:38:39 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:04:25.825 13:38:39 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 
/sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:25.825 13:38:39 -- common/autotest_common.sh@1485 -- # grep 0000:00:12.0/nvme/nvme 00:04:25.825 13:38:39 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:04:25.825 13:38:39 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:04:25.825 13:38:39 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:04:25.825 13:38:39 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme2 00:04:25.825 13:38:39 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme2 00:04:25.825 13:38:39 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme2 ]] 00:04:25.825 13:38:39 -- common/autotest_common.sh@1529 -- # grep oacs 00:04:25.825 13:38:39 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme2 00:04:25.825 13:38:39 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:04:25.825 13:38:39 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:04:25.825 13:38:39 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:04:25.825 13:38:39 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:04:25.825 13:38:39 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme2 00:04:25.825 13:38:39 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:04:25.825 13:38:39 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:04:25.825 13:38:39 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:04:25.825 13:38:39 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:04:25.825 13:38:39 -- common/autotest_common.sh@1541 -- # continue 00:04:25.825 13:38:39 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:04:25.825 13:38:39 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:04:25.825 13:38:39 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:25.825 13:38:39 -- common/autotest_common.sh@1485 -- # grep 0000:00:13.0/nvme/nvme 00:04:25.825 13:38:39 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:04:25.825 13:38:39 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:04:25.825 13:38:39 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:04:25.825 13:38:39 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme3 00:04:25.825 13:38:39 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme3 00:04:25.825 13:38:39 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme3 ]] 00:04:25.825 13:38:39 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme3 00:04:25.825 13:38:39 -- common/autotest_common.sh@1529 -- # grep oacs 00:04:25.825 13:38:39 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:04:25.825 13:38:39 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:04:25.825 13:38:39 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:04:25.825 13:38:39 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:04:25.825 13:38:39 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme3 00:04:25.825 13:38:39 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:04:25.825 13:38:39 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:04:25.825 13:38:39 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:04:25.825 13:38:39 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 
00:04:25.825 13:38:39 -- common/autotest_common.sh@1541 -- # continue 00:04:25.825 13:38:39 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:25.825 13:38:39 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:25.825 13:38:39 -- common/autotest_common.sh@10 -- # set +x 00:04:25.825 13:38:39 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:25.825 13:38:39 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:25.825 13:38:39 -- common/autotest_common.sh@10 -- # set +x 00:04:25.825 13:38:39 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:26.399 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:26.972 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:26.972 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:26.972 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:04:26.972 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:04:26.972 13:38:40 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:04:26.972 13:38:40 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:26.972 13:38:40 -- common/autotest_common.sh@10 -- # set +x 00:04:26.972 13:38:40 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:26.972 13:38:40 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:04:26.972 13:38:40 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:04:26.972 13:38:40 -- common/autotest_common.sh@1561 -- # bdfs=() 00:04:26.972 13:38:40 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:04:26.972 13:38:40 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:04:26.972 13:38:40 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:04:26.972 13:38:40 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:04:26.972 13:38:40 -- common/autotest_common.sh@1496 -- # bdfs=() 00:04:26.972 13:38:40 -- common/autotest_common.sh@1496 -- # local bdfs 00:04:26.972 13:38:40 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:26.972 13:38:40 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:26.972 13:38:40 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:04:27.233 13:38:40 -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:04:27.233 13:38:40 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:04:27.233 13:38:40 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:04:27.233 13:38:40 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:04:27.233 13:38:40 -- common/autotest_common.sh@1564 -- # device=0x0010 00:04:27.233 13:38:40 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:27.233 13:38:40 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:04:27.233 13:38:40 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:04:27.233 13:38:40 -- common/autotest_common.sh@1564 -- # device=0x0010 00:04:27.233 13:38:40 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:27.233 13:38:40 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:04:27.233 13:38:40 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:04:27.233 13:38:40 -- common/autotest_common.sh@1564 -- # device=0x0010 00:04:27.233 13:38:40 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 
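[editor's note] For readers following the xtrace above: the pre-cleanup pass walks every NVMe controller, reads the OACS (Optional Admin Command Support) field to see whether namespace management (bit 3, value 0x8) is available, and skips any controller whose unallocated capacity (unvmcap) is already zero. The following is a condensed bash sketch of that per-controller check, paraphrased from the trace — the surrounding loop and the sysfs lookup are simplified for illustration, and this is not the script's literal code:

    for bdf in "${bdfs[@]}"; do
        # Map the PCI address (e.g. 0000:00:10.0) to its /dev/nvmeX node via sysfs
        ctrlr=$(basename "$(readlink -f /sys/class/nvme/nvme* | grep "$bdf/nvme/nvme")")
        # OACS bit 3 (0x8) advertises namespace-management support
        oacs=$(nvme id-ctrl "/dev/$ctrlr" | grep oacs | cut -d: -f2)   # e.g. ' 0x12a'
        (( oacs & 0x8 )) || continue
        # unvmcap == 0 means no unallocated capacity, so nothing to revert
        unvmcap=$(nvme id-ctrl "/dev/$ctrlr" | grep unvmcap | cut -d: -f2)
        (( unvmcap == 0 )) && continue
        # ...otherwise the controller's namespaces would be reverted here
    done

In the trace above all four controllers report oacs=0x12a (so 0x12a & 0x8 = 8, namespace management supported) and unvmcap=0, which is why each iteration ends in 'continue'.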
00:04:27.233 13:38:40 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:04:27.233 13:38:40 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:04:27.233 13:38:40 -- common/autotest_common.sh@1564 -- # device=0x0010 00:04:27.233 13:38:40 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:27.233 13:38:40 -- common/autotest_common.sh@1570 -- # (( 0 > 0 )) 00:04:27.233 13:38:40 -- common/autotest_common.sh@1570 -- # return 0 00:04:27.233 13:38:40 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:04:27.233 13:38:40 -- common/autotest_common.sh@1578 -- # return 0 00:04:27.233 13:38:40 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:27.233 13:38:40 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:27.233 13:38:40 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:27.233 13:38:40 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:27.233 13:38:40 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:27.233 13:38:40 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:27.233 13:38:40 -- common/autotest_common.sh@10 -- # set +x 00:04:27.233 13:38:40 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:04:27.233 13:38:40 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:27.233 13:38:40 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:27.233 13:38:40 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:27.233 13:38:40 -- common/autotest_common.sh@10 -- # set +x 00:04:27.233 ************************************ 00:04:27.233 START TEST env 00:04:27.233 ************************************ 00:04:27.233 13:38:40 env -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:27.233 * Looking for test storage... 00:04:27.233 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:27.233 13:38:40 env -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:27.233 13:38:40 env -- common/autotest_common.sh@1691 -- # lcov --version 00:04:27.233 13:38:40 env -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:27.233 13:38:41 env -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:27.233 13:38:41 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:27.233 13:38:41 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:27.233 13:38:41 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:27.233 13:38:41 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:27.233 13:38:41 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:27.233 13:38:41 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:27.233 13:38:41 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:27.233 13:38:41 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:27.233 13:38:41 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:27.233 13:38:41 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:27.233 13:38:41 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:27.233 13:38:41 env -- scripts/common.sh@344 -- # case "$op" in 00:04:27.233 13:38:41 env -- scripts/common.sh@345 -- # : 1 00:04:27.233 13:38:41 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:27.233 13:38:41 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:27.233 13:38:41 env -- scripts/common.sh@365 -- # decimal 1 00:04:27.233 13:38:41 env -- scripts/common.sh@353 -- # local d=1 00:04:27.233 13:38:41 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:27.233 13:38:41 env -- scripts/common.sh@355 -- # echo 1 00:04:27.233 13:38:41 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:27.233 13:38:41 env -- scripts/common.sh@366 -- # decimal 2 00:04:27.233 13:38:41 env -- scripts/common.sh@353 -- # local d=2 00:04:27.233 13:38:41 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:27.233 13:38:41 env -- scripts/common.sh@355 -- # echo 2 00:04:27.495 13:38:41 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:27.495 13:38:41 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:27.495 13:38:41 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:27.495 13:38:41 env -- scripts/common.sh@368 -- # return 0 00:04:27.495 13:38:41 env -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:27.495 13:38:41 env -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:27.495 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:27.495 --rc genhtml_branch_coverage=1 00:04:27.495 --rc genhtml_function_coverage=1 00:04:27.495 --rc genhtml_legend=1 00:04:27.495 --rc geninfo_all_blocks=1 00:04:27.495 --rc geninfo_unexecuted_blocks=1 00:04:27.495 00:04:27.495 ' 00:04:27.495 13:38:41 env -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:27.495 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:27.495 --rc genhtml_branch_coverage=1 00:04:27.495 --rc genhtml_function_coverage=1 00:04:27.495 --rc genhtml_legend=1 00:04:27.495 --rc geninfo_all_blocks=1 00:04:27.495 --rc geninfo_unexecuted_blocks=1 00:04:27.495 00:04:27.495 ' 00:04:27.495 13:38:41 env -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:27.495 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:27.495 --rc genhtml_branch_coverage=1 00:04:27.495 --rc genhtml_function_coverage=1 00:04:27.495 --rc genhtml_legend=1 00:04:27.495 --rc geninfo_all_blocks=1 00:04:27.495 --rc geninfo_unexecuted_blocks=1 00:04:27.495 00:04:27.495 ' 00:04:27.495 13:38:41 env -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:27.495 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:27.495 --rc genhtml_branch_coverage=1 00:04:27.495 --rc genhtml_function_coverage=1 00:04:27.495 --rc genhtml_legend=1 00:04:27.495 --rc geninfo_all_blocks=1 00:04:27.495 --rc geninfo_unexecuted_blocks=1 00:04:27.495 00:04:27.495 ' 00:04:27.495 13:38:41 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:27.495 13:38:41 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:27.495 13:38:41 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:27.495 13:38:41 env -- common/autotest_common.sh@10 -- # set +x 00:04:27.495 ************************************ 00:04:27.495 START TEST env_memory 00:04:27.495 ************************************ 00:04:27.495 13:38:41 env.env_memory -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:27.495 00:04:27.495 00:04:27.495 CUnit - A unit testing framework for C - Version 2.1-3 00:04:27.495 http://cunit.sourceforge.net/ 00:04:27.495 00:04:27.495 00:04:27.495 Suite: memory 00:04:27.495 Test: alloc and free memory map ...[2024-10-15 13:38:41.093560] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:27.495 passed 00:04:27.495 Test: mem map translation ...[2024-10-15 13:38:41.133295] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:27.495 [2024-10-15 13:38:41.133441] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:27.495 [2024-10-15 13:38:41.133558] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:27.495 [2024-10-15 13:38:41.133598] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:27.495 passed 00:04:27.495 Test: mem map registration ...[2024-10-15 13:38:41.202314] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:27.495 [2024-10-15 13:38:41.202453] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:27.495 passed 00:04:27.757 Test: mem map adjacent registrations ...passed 00:04:27.757 00:04:27.757 Run Summary: Type Total Ran Passed Failed Inactive 00:04:27.757 suites 1 1 n/a 0 0 00:04:27.757 tests 4 4 4 0 0 00:04:27.757 asserts 152 152 152 0 n/a 00:04:27.757 00:04:27.757 Elapsed time = 0.237 seconds 00:04:27.757 00:04:27.757 real 0m0.276s 00:04:27.757 user 0m0.243s 00:04:27.757 sys 0m0.023s 00:04:27.757 13:38:41 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:27.757 13:38:41 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:27.757 ************************************ 00:04:27.757 END TEST env_memory 00:04:27.757 ************************************ 00:04:27.757 13:38:41 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:27.757 13:38:41 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:27.757 13:38:41 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:27.757 13:38:41 env -- common/autotest_common.sh@10 -- # set +x 00:04:27.757 ************************************ 00:04:27.757 START TEST env_vtophys 00:04:27.757 ************************************ 00:04:27.757 13:38:41 env.env_vtophys -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:27.757 EAL: lib.eal log level changed from notice to debug 00:04:27.757 EAL: Detected lcore 0 as core 0 on socket 0 00:04:27.757 EAL: Detected lcore 1 as core 0 on socket 0 00:04:27.757 EAL: Detected lcore 2 as core 0 on socket 0 00:04:27.757 EAL: Detected lcore 3 as core 0 on socket 0 00:04:27.757 EAL: Detected lcore 4 as core 0 on socket 0 00:04:27.757 EAL: Detected lcore 5 as core 0 on socket 0 00:04:27.757 EAL: Detected lcore 6 as core 0 on socket 0 00:04:27.757 EAL: Detected lcore 7 as core 0 on socket 0 00:04:27.757 EAL: Detected lcore 8 as core 0 on socket 0 00:04:27.757 EAL: Detected lcore 9 as core 0 on socket 0 00:04:27.757 EAL: Maximum logical cores by configuration: 128 00:04:27.757 EAL: Detected CPU lcores: 10 00:04:27.757 EAL: Detected NUMA nodes: 1 00:04:27.757 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:27.757 EAL: Detected shared linkage of DPDK 00:04:27.757 EAL: No 
shared files mode enabled, IPC will be disabled 00:04:27.758 EAL: Selected IOVA mode 'PA' 00:04:27.758 EAL: Probing VFIO support... 00:04:27.758 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:27.758 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:27.758 EAL: Ask a virtual area of 0x2e000 bytes 00:04:27.758 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:27.758 EAL: Setting up physically contiguous memory... 00:04:27.758 EAL: Setting maximum number of open files to 524288 00:04:27.758 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:27.758 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:27.758 EAL: Ask a virtual area of 0x61000 bytes 00:04:27.758 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:27.758 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:27.758 EAL: Ask a virtual area of 0x400000000 bytes 00:04:27.758 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:27.758 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:27.758 EAL: Ask a virtual area of 0x61000 bytes 00:04:27.758 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:27.758 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:27.758 EAL: Ask a virtual area of 0x400000000 bytes 00:04:27.758 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:27.758 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:27.758 EAL: Ask a virtual area of 0x61000 bytes 00:04:27.758 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:27.758 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:27.758 EAL: Ask a virtual area of 0x400000000 bytes 00:04:27.758 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:27.758 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:27.758 EAL: Ask a virtual area of 0x61000 bytes 00:04:27.758 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:27.758 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:27.758 EAL: Ask a virtual area of 0x400000000 bytes 00:04:27.758 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:27.758 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:27.758 EAL: Hugepages will be freed exactly as allocated. 00:04:27.758 EAL: No shared files mode enabled, IPC is disabled 00:04:27.758 EAL: No shared files mode enabled, IPC is disabled 00:04:28.020 EAL: TSC frequency is ~2600000 KHz 00:04:28.020 EAL: Main lcore 0 is ready (tid=7fc6fbcd3a40;cpuset=[0]) 00:04:28.020 EAL: Trying to obtain current memory policy. 00:04:28.020 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:28.020 EAL: Restoring previous memory policy: 0 00:04:28.020 EAL: request: mp_malloc_sync 00:04:28.020 EAL: No shared files mode enabled, IPC is disabled 00:04:28.020 EAL: Heap on socket 0 was expanded by 2MB 00:04:28.020 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:28.020 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:28.020 EAL: Mem event callback 'spdk:(nil)' registered 00:04:28.020 EAL: Module /sys/module/vfio_pci not found! 
error 2 (No such file or directory) 00:04:28.020 00:04:28.020 00:04:28.020 CUnit - A unit testing framework for C - Version 2.1-3 00:04:28.020 http://cunit.sourceforge.net/ 00:04:28.020 00:04:28.020 00:04:28.020 Suite: components_suite 00:04:28.280 Test: vtophys_malloc_test ...passed 00:04:28.281 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:28.281 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:28.281 EAL: Restoring previous memory policy: 4 00:04:28.281 EAL: Calling mem event callback 'spdk:(nil)' 00:04:28.281 EAL: request: mp_malloc_sync 00:04:28.281 EAL: No shared files mode enabled, IPC is disabled 00:04:28.281 EAL: Heap on socket 0 was expanded by 4MB 00:04:28.281 EAL: Calling mem event callback 'spdk:(nil)' 00:04:28.281 EAL: request: mp_malloc_sync 00:04:28.281 EAL: No shared files mode enabled, IPC is disabled 00:04:28.281 EAL: Heap on socket 0 was shrunk by 4MB 00:04:28.281 EAL: Trying to obtain current memory policy. 00:04:28.281 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:28.281 EAL: Restoring previous memory policy: 4 00:04:28.281 EAL: Calling mem event callback 'spdk:(nil)' 00:04:28.281 EAL: request: mp_malloc_sync 00:04:28.281 EAL: No shared files mode enabled, IPC is disabled 00:04:28.281 EAL: Heap on socket 0 was expanded by 6MB 00:04:28.281 EAL: Calling mem event callback 'spdk:(nil)' 00:04:28.281 EAL: request: mp_malloc_sync 00:04:28.281 EAL: No shared files mode enabled, IPC is disabled 00:04:28.281 EAL: Heap on socket 0 was shrunk by 6MB 00:04:28.281 EAL: Trying to obtain current memory policy. 00:04:28.281 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:28.281 EAL: Restoring previous memory policy: 4 00:04:28.281 EAL: Calling mem event callback 'spdk:(nil)' 00:04:28.281 EAL: request: mp_malloc_sync 00:04:28.281 EAL: No shared files mode enabled, IPC is disabled 00:04:28.281 EAL: Heap on socket 0 was expanded by 10MB 00:04:28.281 EAL: Calling mem event callback 'spdk:(nil)' 00:04:28.281 EAL: request: mp_malloc_sync 00:04:28.281 EAL: No shared files mode enabled, IPC is disabled 00:04:28.281 EAL: Heap on socket 0 was shrunk by 10MB 00:04:28.281 EAL: Trying to obtain current memory policy. 00:04:28.281 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:28.281 EAL: Restoring previous memory policy: 4 00:04:28.281 EAL: Calling mem event callback 'spdk:(nil)' 00:04:28.281 EAL: request: mp_malloc_sync 00:04:28.281 EAL: No shared files mode enabled, IPC is disabled 00:04:28.281 EAL: Heap on socket 0 was expanded by 18MB 00:04:28.281 EAL: Calling mem event callback 'spdk:(nil)' 00:04:28.281 EAL: request: mp_malloc_sync 00:04:28.281 EAL: No shared files mode enabled, IPC is disabled 00:04:28.281 EAL: Heap on socket 0 was shrunk by 18MB 00:04:28.281 EAL: Trying to obtain current memory policy. 00:04:28.281 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:28.281 EAL: Restoring previous memory policy: 4 00:04:28.281 EAL: Calling mem event callback 'spdk:(nil)' 00:04:28.281 EAL: request: mp_malloc_sync 00:04:28.281 EAL: No shared files mode enabled, IPC is disabled 00:04:28.281 EAL: Heap on socket 0 was expanded by 34MB 00:04:28.542 EAL: Calling mem event callback 'spdk:(nil)' 00:04:28.542 EAL: request: mp_malloc_sync 00:04:28.542 EAL: No shared files mode enabled, IPC is disabled 00:04:28.542 EAL: Heap on socket 0 was shrunk by 34MB 00:04:28.542 EAL: Trying to obtain current memory policy. 
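Side note on the invalid-parameter errors at the top of this run (vaddr=200000 len=1234, vaddr=4d2 len=2097152): those are the alignment checks in lib/env_dpdk/memory.c firing. With 2 MB hugepages, spdk_mem_register() requires both the virtual address and the length to be 2 MB multiples. A minimal sketch of a registration that passes those checks, assuming the spdk/env.h API and an already-initialized SPDK environment (function name, buffer, and sizes are illustrative, not taken from this run):

    #include <stdio.h>
    #include <stdlib.h>
    #include "spdk/env.h"

    #define ALIGN_2MB (2UL * 1024 * 1024)

    static void register_region(void)
    {
        void *buf = NULL;

        /* Assumes spdk_env_init() has already run. Both vaddr and len
         * must be 2 MB-aligned, or spdk_mem_register() rejects them
         * exactly like the *ERROR* lines logged above. */
        if (posix_memalign(&buf, ALIGN_2MB, ALIGN_2MB) != 0) {
            return;
        }
        if (spdk_mem_register(buf, ALIGN_2MB) == 0) {
            spdk_mem_unregister(buf, ALIGN_2MB);
        } else {
            fprintf(stderr, "mem_register rejected the region\n");
        }
        free(buf);
    }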
00:04:28.542 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:28.542 EAL: Restoring previous memory policy: 4 00:04:28.542 EAL: Calling mem event callback 'spdk:(nil)' 00:04:28.542 EAL: request: mp_malloc_sync 00:04:28.542 EAL: No shared files mode enabled, IPC is disabled 00:04:28.542 EAL: Heap on socket 0 was expanded by 66MB 00:04:28.542 EAL: Calling mem event callback 'spdk:(nil)' 00:04:28.542 EAL: request: mp_malloc_sync 00:04:28.542 EAL: No shared files mode enabled, IPC is disabled 00:04:28.542 EAL: Heap on socket 0 was shrunk by 66MB 00:04:28.542 EAL: Trying to obtain current memory policy. 00:04:28.542 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:28.542 EAL: Restoring previous memory policy: 4 00:04:28.542 EAL: Calling mem event callback 'spdk:(nil)' 00:04:28.542 EAL: request: mp_malloc_sync 00:04:28.542 EAL: No shared files mode enabled, IPC is disabled 00:04:28.542 EAL: Heap on socket 0 was expanded by 130MB 00:04:28.804 EAL: Calling mem event callback 'spdk:(nil)' 00:04:28.804 EAL: request: mp_malloc_sync 00:04:28.804 EAL: No shared files mode enabled, IPC is disabled 00:04:28.804 EAL: Heap on socket 0 was shrunk by 130MB 00:04:29.064 EAL: Trying to obtain current memory policy. 00:04:29.064 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:29.064 EAL: Restoring previous memory policy: 4 00:04:29.064 EAL: Calling mem event callback 'spdk:(nil)' 00:04:29.064 EAL: request: mp_malloc_sync 00:04:29.064 EAL: No shared files mode enabled, IPC is disabled 00:04:29.064 EAL: Heap on socket 0 was expanded by 258MB 00:04:29.325 EAL: Calling mem event callback 'spdk:(nil)' 00:04:29.325 EAL: request: mp_malloc_sync 00:04:29.325 EAL: No shared files mode enabled, IPC is disabled 00:04:29.325 EAL: Heap on socket 0 was shrunk by 258MB 00:04:29.587 EAL: Trying to obtain current memory policy. 00:04:29.587 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:29.587 EAL: Restoring previous memory policy: 4 00:04:29.587 EAL: Calling mem event callback 'spdk:(nil)' 00:04:29.587 EAL: request: mp_malloc_sync 00:04:29.587 EAL: No shared files mode enabled, IPC is disabled 00:04:29.587 EAL: Heap on socket 0 was expanded by 514MB 00:04:30.531 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.531 EAL: request: mp_malloc_sync 00:04:30.531 EAL: No shared files mode enabled, IPC is disabled 00:04:30.531 EAL: Heap on socket 0 was shrunk by 514MB 00:04:30.792 EAL: Trying to obtain current memory policy. 
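Each expand/shrink pair in this suite is the same cycle at a larger size: vtophys_malloc_test allocates a DMA buffer, the allocation grows the socket-0 heap through the registered 'spdk:(nil)' mem event callback, and the free returns the hugepages ('Hugepages will be freed exactly as allocated' above). A minimal sketch of one cycle, assuming spdk/env.h and an initialized environment; the size parameter stands in for the 4 MB through 1026 MB steps logged here:

    #include <stdio.h>
    #include "spdk/env.h"

    static void alloc_cycle(size_t size)
    {
        /* May trigger 'Heap on socket 0 was expanded by ...' */
        void *buf = spdk_dma_malloc(size, 0x200000 /* 2 MB align */, NULL);
        if (buf == NULL) {
            return;
        }
        /* The test's core check: the buffer must have a valid
         * virtual-to-physical translation. */
        if (spdk_vtophys(buf, NULL) == SPDK_VTOPHYS_ERROR) {
            fprintf(stderr, "no translation for %p\n", buf);
        }
        /* Triggers the matching 'Heap on socket 0 was shrunk by ...' */
        spdk_dma_free(buf);
    }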
00:04:30.792 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:31.054 EAL: Restoring previous memory policy: 4 00:04:31.054 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.054 EAL: request: mp_malloc_sync 00:04:31.054 EAL: No shared files mode enabled, IPC is disabled 00:04:31.054 EAL: Heap on socket 0 was expanded by 1026MB 00:04:32.442 EAL: Calling mem event callback 'spdk:(nil)' 00:04:32.442 EAL: request: mp_malloc_sync 00:04:32.442 EAL: No shared files mode enabled, IPC is disabled 00:04:32.442 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:33.387 passed 00:04:33.387 00:04:33.387 Run Summary: Type Total Ran Passed Failed Inactive 00:04:33.387 suites 1 1 n/a 0 0 00:04:33.387 tests 2 2 2 0 0 00:04:33.387 asserts 5810 5810 5810 0 n/a 00:04:33.387 00:04:33.387 Elapsed time = 5.189 seconds 00:04:33.387 EAL: Calling mem event callback 'spdk:(nil)' 00:04:33.387 EAL: request: mp_malloc_sync 00:04:33.387 EAL: No shared files mode enabled, IPC is disabled 00:04:33.387 EAL: Heap on socket 0 was shrunk by 2MB 00:04:33.387 EAL: No shared files mode enabled, IPC is disabled 00:04:33.387 EAL: No shared files mode enabled, IPC is disabled 00:04:33.387 EAL: No shared files mode enabled, IPC is disabled 00:04:33.387 00:04:33.387 real 0m5.469s 00:04:33.387 user 0m4.468s 00:04:33.387 sys 0m0.835s 00:04:33.387 13:38:46 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:33.387 13:38:46 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:33.387 ************************************ 00:04:33.387 END TEST env_vtophys 00:04:33.387 ************************************ 00:04:33.387 13:38:46 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:33.387 13:38:46 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:33.387 13:38:46 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:33.387 13:38:46 env -- common/autotest_common.sh@10 -- # set +x 00:04:33.387 ************************************ 00:04:33.387 START TEST env_pci 00:04:33.387 ************************************ 00:04:33.387 13:38:46 env.env_pci -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:33.387 00:04:33.387 00:04:33.387 CUnit - A unit testing framework for C - Version 2.1-3 00:04:33.387 http://cunit.sourceforge.net/ 00:04:33.387 00:04:33.387 00:04:33.387 Suite: pci 00:04:33.387 Test: pci_hook ...[2024-10-15 13:38:46.940188] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1049:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 57001 has claimed it 00:04:33.387 EAL: Cannot find device (10000:00:01.0) 00:04:33.387 passed 00:04:33.387 00:04:33.387 Run Summary: Type Total Ran Passed Failed Inactive 00:04:33.387 suites 1 1 n/a 0 0 00:04:33.387 tests 1 1 1 0 0 00:04:33.387 asserts 25 25 25 0 n/a 00:04:33.387 00:04:33.387 Elapsed time = 0.004 seconds 00:04:33.387 EAL: Failed to attach device on primary process 00:04:33.387 00:04:33.387 real 0m0.058s 00:04:33.387 user 0m0.022s 00:04:33.387 sys 0m0.034s 00:04:33.387 13:38:46 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:33.387 13:38:46 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:33.387 ************************************ 00:04:33.387 END TEST env_pci 00:04:33.387 ************************************ 00:04:33.387 13:38:47 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:33.387 13:38:47 env -- env/env.sh@15 -- # uname 00:04:33.387 13:38:47 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:33.387 13:38:47 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:33.387 13:38:47 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:33.387 13:38:47 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:04:33.387 13:38:47 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:33.387 13:38:47 env -- common/autotest_common.sh@10 -- # set +x 00:04:33.387 ************************************ 00:04:33.387 START TEST env_dpdk_post_init 00:04:33.387 ************************************ 00:04:33.387 13:38:47 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:33.387 EAL: Detected CPU lcores: 10 00:04:33.387 EAL: Detected NUMA nodes: 1 00:04:33.387 EAL: Detected shared linkage of DPDK 00:04:33.387 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:33.387 EAL: Selected IOVA mode 'PA' 00:04:33.648 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:33.648 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:04:33.648 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:04:33.648 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:04:33.648 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:04:33.648 Starting DPDK initialization... 00:04:33.648 Starting SPDK post initialization... 00:04:33.648 SPDK NVMe probe 00:04:33.648 Attaching to 0000:00:10.0 00:04:33.648 Attaching to 0000:00:11.0 00:04:33.648 Attaching to 0000:00:12.0 00:04:33.648 Attaching to 0000:00:13.0 00:04:33.648 Attached to 0000:00:13.0 00:04:33.648 Attached to 0000:00:10.0 00:04:33.648 Attached to 0000:00:11.0 00:04:33.648 Attached to 0000:00:12.0 00:04:33.648 Cleaning up... 
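The 'Attaching to'/'Attached to' lines are NVMe controller enumeration; the attach order (13.0 before 10.0 here) need not match the probe order, since each controller's attach callback fires when its own initialization completes. A minimal sketch of that probe/attach flow, assuming the standard spdk/nvme.h API (error handling trimmed):

    #include <stdbool.h>
    #include <stdio.h>
    #include "spdk/nvme.h"

    static bool
    probe_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
             struct spdk_nvme_ctrlr_opts *opts)
    {
        printf("Attaching to %s\n", trid->traddr);
        return true; /* claim every controller the scan reports */
    }

    static void
    attach_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
              struct spdk_nvme_ctrlr *ctrlr,
              const struct spdk_nvme_ctrlr_opts *opts)
    {
        /* Runs per controller as its init completes; ordering is not
         * guaranteed, which is why 13.0 can report before 10.0 above. */
        printf("Attached to %s\n", trid->traddr);
    }

    int main(void)
    {
        struct spdk_env_opts opts;

        spdk_env_opts_init(&opts);
        if (spdk_env_init(&opts) != 0) {
            return 1;
        }
        /* NULL trid: scan the local PCIe bus for NVMe controllers. */
        return spdk_nvme_probe(NULL, NULL, probe_cb, attach_cb, NULL);
    }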
00:04:33.648 00:04:33.648 real 0m0.252s 00:04:33.648 user 0m0.076s 00:04:33.648 sys 0m0.075s 00:04:33.648 13:38:47 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:33.648 ************************************ 00:04:33.648 END TEST env_dpdk_post_init 00:04:33.648 13:38:47 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:33.648 ************************************ 00:04:33.648 13:38:47 env -- env/env.sh@26 -- # uname 00:04:33.648 13:38:47 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:33.648 13:38:47 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:33.648 13:38:47 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:33.648 13:38:47 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:33.648 13:38:47 env -- common/autotest_common.sh@10 -- # set +x 00:04:33.648 ************************************ 00:04:33.648 START TEST env_mem_callbacks 00:04:33.648 ************************************ 00:04:33.648 13:38:47 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:33.648 EAL: Detected CPU lcores: 10 00:04:33.648 EAL: Detected NUMA nodes: 1 00:04:33.648 EAL: Detected shared linkage of DPDK 00:04:33.648 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:33.648 EAL: Selected IOVA mode 'PA' 00:04:33.910 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:33.910 00:04:33.910 00:04:33.910 CUnit - A unit testing framework for C - Version 2.1-3 00:04:33.910 http://cunit.sourceforge.net/ 00:04:33.910 00:04:33.910 00:04:33.910 Suite: memory 00:04:33.910 Test: test ... 00:04:33.910 register 0x200000200000 2097152 00:04:33.910 malloc 3145728 00:04:33.910 register 0x200000400000 4194304 00:04:33.910 buf 0x2000004fffc0 len 3145728 PASSED 00:04:33.910 malloc 64 00:04:33.910 buf 0x2000004ffec0 len 64 PASSED 00:04:33.910 malloc 4194304 00:04:33.910 register 0x200000800000 6291456 00:04:33.910 buf 0x2000009fffc0 len 4194304 PASSED 00:04:33.910 free 0x2000004fffc0 3145728 00:04:33.910 free 0x2000004ffec0 64 00:04:33.910 unregister 0x200000400000 4194304 PASSED 00:04:33.910 free 0x2000009fffc0 4194304 00:04:33.910 unregister 0x200000800000 6291456 PASSED 00:04:33.910 malloc 8388608 00:04:33.910 register 0x200000400000 10485760 00:04:33.910 buf 0x2000005fffc0 len 8388608 PASSED 00:04:33.910 free 0x2000005fffc0 8388608 00:04:33.910 unregister 0x200000400000 10485760 PASSED 00:04:33.910 passed 00:04:33.910 00:04:33.910 Run Summary: Type Total Ran Passed Failed Inactive 00:04:33.910 suites 1 1 n/a 0 0 00:04:33.910 tests 1 1 1 0 0 00:04:33.910 asserts 15 15 15 0 n/a 00:04:33.910 00:04:33.910 Elapsed time = 0.051 seconds 00:04:33.910 00:04:33.910 real 0m0.229s 00:04:33.910 user 0m0.070s 00:04:33.910 sys 0m0.054s 00:04:33.910 13:38:47 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:33.910 13:38:47 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:33.910 ************************************ 00:04:33.910 END TEST env_mem_callbacks 00:04:33.910 ************************************ 00:04:33.910 00:04:33.910 real 0m6.767s 00:04:33.910 user 0m5.058s 00:04:33.910 sys 0m1.219s 00:04:33.910 13:38:47 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:33.910 ************************************ 00:04:33.910 END TEST env 00:04:33.910 ************************************ 00:04:33.910 13:38:47 env -- 
common/autotest_common.sh@10 -- # set +x 00:04:33.910 13:38:47 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:33.910 13:38:47 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:33.910 13:38:47 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:33.910 13:38:47 -- common/autotest_common.sh@10 -- # set +x 00:04:33.910 ************************************ 00:04:33.910 START TEST rpc 00:04:33.910 ************************************ 00:04:33.910 13:38:47 rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:34.172 * Looking for test storage... 00:04:34.172 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:34.172 13:38:47 rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:34.172 13:38:47 rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:04:34.172 13:38:47 rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:34.172 13:38:47 rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:34.172 13:38:47 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:34.172 13:38:47 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:34.172 13:38:47 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:34.172 13:38:47 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:34.172 13:38:47 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:34.172 13:38:47 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:34.172 13:38:47 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:34.172 13:38:47 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:34.172 13:38:47 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:34.172 13:38:47 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:34.172 13:38:47 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:34.172 13:38:47 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:34.172 13:38:47 rpc -- scripts/common.sh@345 -- # : 1 00:04:34.172 13:38:47 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:34.172 13:38:47 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:34.172 13:38:47 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:34.172 13:38:47 rpc -- scripts/common.sh@353 -- # local d=1 00:04:34.172 13:38:47 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:34.172 13:38:47 rpc -- scripts/common.sh@355 -- # echo 1 00:04:34.172 13:38:47 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:34.172 13:38:47 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:34.172 13:38:47 rpc -- scripts/common.sh@353 -- # local d=2 00:04:34.172 13:38:47 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:34.172 13:38:47 rpc -- scripts/common.sh@355 -- # echo 2 00:04:34.172 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
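The 'Waiting for process...' line is waitforlisten polling until the freshly launched spdk_tgt answers on /var/tmp/spdk.sock; every rpc_cmd in the tests that follow (bdev_malloc_create, bdev_passthru_create, bdev_get_bdevs, trace_get_info, ...) is a JSON-RPC request over that UNIX socket. A minimal C sketch of one such request against the default socket path; this is plain JSON-RPC 2.0, so any client would work, and the single read() here is a simplification of real response framing:

    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/un.h>
    #include <unistd.h>

    int main(void)
    {
        struct sockaddr_un addr = { .sun_family = AF_UNIX };
        int fd = socket(AF_UNIX, SOCK_STREAM, 0);

        if (fd < 0) {
            return 1;
        }
        strncpy(addr.sun_path, "/var/tmp/spdk.sock",
                sizeof(addr.sun_path) - 1);
        /* waitforlisten is essentially this connect() in a retry loop. */
        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
            return 1;
        }
        const char *req =
            "{\"jsonrpc\":\"2.0\",\"id\":1,\"method\":\"bdev_get_bdevs\"}";
        write(fd, req, strlen(req));

        char buf[8192];
        ssize_t n = read(fd, buf, sizeof(buf) - 1);
        if (n > 0) {
            buf[n] = '\0';
            /* Same JSON array the rpc_integrity test pipes through jq. */
            printf("%s\n", buf);
        }
        close(fd);
        return 0;
    }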
00:04:34.172 13:38:47 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:34.172 13:38:47 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:34.172 13:38:47 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:34.172 13:38:47 rpc -- scripts/common.sh@368 -- # return 0 00:04:34.172 13:38:47 rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:34.172 13:38:47 rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:34.172 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:34.172 --rc genhtml_branch_coverage=1 00:04:34.172 --rc genhtml_function_coverage=1 00:04:34.172 --rc genhtml_legend=1 00:04:34.172 --rc geninfo_all_blocks=1 00:04:34.172 --rc geninfo_unexecuted_blocks=1 00:04:34.172 00:04:34.172 ' 00:04:34.172 13:38:47 rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:34.172 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:34.172 --rc genhtml_branch_coverage=1 00:04:34.172 --rc genhtml_function_coverage=1 00:04:34.172 --rc genhtml_legend=1 00:04:34.172 --rc geninfo_all_blocks=1 00:04:34.172 --rc geninfo_unexecuted_blocks=1 00:04:34.172 00:04:34.172 ' 00:04:34.172 13:38:47 rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:34.172 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:34.172 --rc genhtml_branch_coverage=1 00:04:34.172 --rc genhtml_function_coverage=1 00:04:34.172 --rc genhtml_legend=1 00:04:34.172 --rc geninfo_all_blocks=1 00:04:34.172 --rc geninfo_unexecuted_blocks=1 00:04:34.172 00:04:34.172 ' 00:04:34.172 13:38:47 rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:34.172 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:34.172 --rc genhtml_branch_coverage=1 00:04:34.172 --rc genhtml_function_coverage=1 00:04:34.172 --rc genhtml_legend=1 00:04:34.172 --rc geninfo_all_blocks=1 00:04:34.172 --rc geninfo_unexecuted_blocks=1 00:04:34.172 00:04:34.172 ' 00:04:34.172 13:38:47 rpc -- rpc/rpc.sh@65 -- # spdk_pid=57128 00:04:34.172 13:38:47 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:34.172 13:38:47 rpc -- rpc/rpc.sh@67 -- # waitforlisten 57128 00:04:34.172 13:38:47 rpc -- common/autotest_common.sh@831 -- # '[' -z 57128 ']' 00:04:34.172 13:38:47 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:34.172 13:38:47 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:34.172 13:38:47 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:34.172 13:38:47 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:34.172 13:38:47 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:34.172 13:38:47 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:04:34.172 [2024-10-15 13:38:47.927995] Starting SPDK v25.01-pre git sha1 5a8c76d99 / DPDK 24.03.0 initialization... 00:04:34.172 [2024-10-15 13:38:47.928473] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57128 ] 00:04:34.433 [2024-10-15 13:38:48.085307] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:34.433 [2024-10-15 13:38:48.206896] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 
00:04:34.433 [2024-10-15 13:38:48.207149] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 57128' to capture a snapshot of events at runtime. 00:04:34.433 [2024-10-15 13:38:48.207271] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:34.433 [2024-10-15 13:38:48.207307] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:34.433 [2024-10-15 13:38:48.207327] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid57128 for offline analysis/debug. 00:04:34.433 [2024-10-15 13:38:48.208372] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:35.373 13:38:48 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:35.373 13:38:48 rpc -- common/autotest_common.sh@864 -- # return 0 00:04:35.373 13:38:48 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:35.373 13:38:48 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:35.373 13:38:48 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:35.373 13:38:48 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:35.373 13:38:48 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:35.373 13:38:48 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:35.373 13:38:48 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:35.373 ************************************ 00:04:35.373 START TEST rpc_integrity 00:04:35.373 ************************************ 00:04:35.373 13:38:48 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:04:35.373 13:38:48 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:35.373 13:38:48 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:35.373 13:38:48 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:35.373 13:38:48 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:35.373 13:38:48 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:35.373 13:38:48 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:35.373 13:38:48 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:35.373 13:38:48 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:35.373 13:38:48 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:35.373 13:38:48 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:35.373 13:38:48 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:35.373 13:38:48 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:35.373 13:38:48 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:35.373 13:38:48 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:35.373 13:38:48 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:35.373 13:38:48 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:35.373 13:38:48 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:35.373 { 00:04:35.373 "name": "Malloc0", 00:04:35.373 "aliases": [ 00:04:35.373 "63b42b09-cff8-47c1-9c8c-4e5719590109" 00:04:35.373 ], 
00:04:35.373 "product_name": "Malloc disk", 00:04:35.373 "block_size": 512, 00:04:35.373 "num_blocks": 16384, 00:04:35.373 "uuid": "63b42b09-cff8-47c1-9c8c-4e5719590109", 00:04:35.373 "assigned_rate_limits": { 00:04:35.373 "rw_ios_per_sec": 0, 00:04:35.373 "rw_mbytes_per_sec": 0, 00:04:35.373 "r_mbytes_per_sec": 0, 00:04:35.373 "w_mbytes_per_sec": 0 00:04:35.373 }, 00:04:35.373 "claimed": false, 00:04:35.373 "zoned": false, 00:04:35.373 "supported_io_types": { 00:04:35.373 "read": true, 00:04:35.373 "write": true, 00:04:35.373 "unmap": true, 00:04:35.373 "flush": true, 00:04:35.373 "reset": true, 00:04:35.373 "nvme_admin": false, 00:04:35.373 "nvme_io": false, 00:04:35.373 "nvme_io_md": false, 00:04:35.373 "write_zeroes": true, 00:04:35.373 "zcopy": true, 00:04:35.373 "get_zone_info": false, 00:04:35.373 "zone_management": false, 00:04:35.373 "zone_append": false, 00:04:35.373 "compare": false, 00:04:35.373 "compare_and_write": false, 00:04:35.373 "abort": true, 00:04:35.373 "seek_hole": false, 00:04:35.373 "seek_data": false, 00:04:35.373 "copy": true, 00:04:35.373 "nvme_iov_md": false 00:04:35.373 }, 00:04:35.373 "memory_domains": [ 00:04:35.373 { 00:04:35.373 "dma_device_id": "system", 00:04:35.373 "dma_device_type": 1 00:04:35.373 }, 00:04:35.373 { 00:04:35.373 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:35.373 "dma_device_type": 2 00:04:35.373 } 00:04:35.373 ], 00:04:35.373 "driver_specific": {} 00:04:35.373 } 00:04:35.373 ]' 00:04:35.373 13:38:48 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:35.373 13:38:49 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:35.373 13:38:49 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:35.373 13:38:49 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:35.373 13:38:49 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:35.373 [2024-10-15 13:38:49.035116] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:35.373 [2024-10-15 13:38:49.035365] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:35.373 [2024-10-15 13:38:49.035409] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:04:35.373 [2024-10-15 13:38:49.035423] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:35.373 [2024-10-15 13:38:49.038025] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:35.373 [2024-10-15 13:38:49.038085] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:35.373 Passthru0 00:04:35.373 13:38:49 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:35.373 13:38:49 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:35.373 13:38:49 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:35.373 13:38:49 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:35.373 13:38:49 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:35.373 13:38:49 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:35.373 { 00:04:35.373 "name": "Malloc0", 00:04:35.373 "aliases": [ 00:04:35.373 "63b42b09-cff8-47c1-9c8c-4e5719590109" 00:04:35.373 ], 00:04:35.373 "product_name": "Malloc disk", 00:04:35.373 "block_size": 512, 00:04:35.373 "num_blocks": 16384, 00:04:35.373 "uuid": "63b42b09-cff8-47c1-9c8c-4e5719590109", 00:04:35.373 "assigned_rate_limits": { 00:04:35.373 "rw_ios_per_sec": 0, 
00:04:35.373 "rw_mbytes_per_sec": 0, 00:04:35.373 "r_mbytes_per_sec": 0, 00:04:35.373 "w_mbytes_per_sec": 0 00:04:35.373 }, 00:04:35.373 "claimed": true, 00:04:35.373 "claim_type": "exclusive_write", 00:04:35.373 "zoned": false, 00:04:35.373 "supported_io_types": { 00:04:35.373 "read": true, 00:04:35.373 "write": true, 00:04:35.373 "unmap": true, 00:04:35.373 "flush": true, 00:04:35.373 "reset": true, 00:04:35.373 "nvme_admin": false, 00:04:35.373 "nvme_io": false, 00:04:35.373 "nvme_io_md": false, 00:04:35.373 "write_zeroes": true, 00:04:35.373 "zcopy": true, 00:04:35.373 "get_zone_info": false, 00:04:35.373 "zone_management": false, 00:04:35.373 "zone_append": false, 00:04:35.373 "compare": false, 00:04:35.373 "compare_and_write": false, 00:04:35.373 "abort": true, 00:04:35.373 "seek_hole": false, 00:04:35.373 "seek_data": false, 00:04:35.373 "copy": true, 00:04:35.373 "nvme_iov_md": false 00:04:35.373 }, 00:04:35.373 "memory_domains": [ 00:04:35.373 { 00:04:35.373 "dma_device_id": "system", 00:04:35.373 "dma_device_type": 1 00:04:35.373 }, 00:04:35.373 { 00:04:35.373 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:35.373 "dma_device_type": 2 00:04:35.373 } 00:04:35.373 ], 00:04:35.373 "driver_specific": {} 00:04:35.373 }, 00:04:35.373 { 00:04:35.373 "name": "Passthru0", 00:04:35.373 "aliases": [ 00:04:35.373 "5272b2da-d3b2-5861-ac8c-3d72176530b9" 00:04:35.373 ], 00:04:35.373 "product_name": "passthru", 00:04:35.373 "block_size": 512, 00:04:35.373 "num_blocks": 16384, 00:04:35.373 "uuid": "5272b2da-d3b2-5861-ac8c-3d72176530b9", 00:04:35.373 "assigned_rate_limits": { 00:04:35.373 "rw_ios_per_sec": 0, 00:04:35.373 "rw_mbytes_per_sec": 0, 00:04:35.373 "r_mbytes_per_sec": 0, 00:04:35.373 "w_mbytes_per_sec": 0 00:04:35.373 }, 00:04:35.373 "claimed": false, 00:04:35.373 "zoned": false, 00:04:35.373 "supported_io_types": { 00:04:35.373 "read": true, 00:04:35.373 "write": true, 00:04:35.373 "unmap": true, 00:04:35.373 "flush": true, 00:04:35.373 "reset": true, 00:04:35.373 "nvme_admin": false, 00:04:35.373 "nvme_io": false, 00:04:35.373 "nvme_io_md": false, 00:04:35.373 "write_zeroes": true, 00:04:35.373 "zcopy": true, 00:04:35.373 "get_zone_info": false, 00:04:35.373 "zone_management": false, 00:04:35.373 "zone_append": false, 00:04:35.373 "compare": false, 00:04:35.373 "compare_and_write": false, 00:04:35.373 "abort": true, 00:04:35.373 "seek_hole": false, 00:04:35.373 "seek_data": false, 00:04:35.373 "copy": true, 00:04:35.373 "nvme_iov_md": false 00:04:35.373 }, 00:04:35.373 "memory_domains": [ 00:04:35.373 { 00:04:35.373 "dma_device_id": "system", 00:04:35.373 "dma_device_type": 1 00:04:35.374 }, 00:04:35.374 { 00:04:35.374 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:35.374 "dma_device_type": 2 00:04:35.374 } 00:04:35.374 ], 00:04:35.374 "driver_specific": { 00:04:35.374 "passthru": { 00:04:35.374 "name": "Passthru0", 00:04:35.374 "base_bdev_name": "Malloc0" 00:04:35.374 } 00:04:35.374 } 00:04:35.374 } 00:04:35.374 ]' 00:04:35.374 13:38:49 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:35.374 13:38:49 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:35.374 13:38:49 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:35.374 13:38:49 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:35.374 13:38:49 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:35.374 13:38:49 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:35.374 13:38:49 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # 
rpc_cmd bdev_malloc_delete Malloc0 00:04:35.374 13:38:49 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:35.374 13:38:49 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:35.374 13:38:49 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:35.374 13:38:49 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:35.374 13:38:49 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:35.374 13:38:49 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:35.374 13:38:49 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:35.374 13:38:49 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:35.374 13:38:49 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:35.642 ************************************ 00:04:35.642 END TEST rpc_integrity 00:04:35.642 ************************************ 00:04:35.642 13:38:49 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:35.642 00:04:35.642 real 0m0.257s 00:04:35.642 user 0m0.136s 00:04:35.642 sys 0m0.031s 00:04:35.642 13:38:49 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:35.642 13:38:49 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:35.642 13:38:49 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:35.642 13:38:49 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:35.642 13:38:49 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:35.642 13:38:49 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:35.642 ************************************ 00:04:35.642 START TEST rpc_plugins 00:04:35.642 ************************************ 00:04:35.642 13:38:49 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:04:35.642 13:38:49 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:35.642 13:38:49 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:35.642 13:38:49 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:35.642 13:38:49 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:35.642 13:38:49 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:35.642 13:38:49 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:35.642 13:38:49 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:35.642 13:38:49 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:35.642 13:38:49 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:35.642 13:38:49 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:35.642 { 00:04:35.642 "name": "Malloc1", 00:04:35.642 "aliases": [ 00:04:35.642 "65a1eea2-ae5b-4596-be97-3694b289c935" 00:04:35.642 ], 00:04:35.642 "product_name": "Malloc disk", 00:04:35.642 "block_size": 4096, 00:04:35.642 "num_blocks": 256, 00:04:35.642 "uuid": "65a1eea2-ae5b-4596-be97-3694b289c935", 00:04:35.642 "assigned_rate_limits": { 00:04:35.642 "rw_ios_per_sec": 0, 00:04:35.642 "rw_mbytes_per_sec": 0, 00:04:35.642 "r_mbytes_per_sec": 0, 00:04:35.642 "w_mbytes_per_sec": 0 00:04:35.642 }, 00:04:35.642 "claimed": false, 00:04:35.642 "zoned": false, 00:04:35.642 "supported_io_types": { 00:04:35.642 "read": true, 00:04:35.642 "write": true, 00:04:35.642 "unmap": true, 00:04:35.642 "flush": true, 00:04:35.642 "reset": true, 00:04:35.642 "nvme_admin": false, 00:04:35.642 "nvme_io": false, 00:04:35.642 "nvme_io_md": false, 00:04:35.642 "write_zeroes": true, 
00:04:35.642 "zcopy": true, 00:04:35.642 "get_zone_info": false, 00:04:35.642 "zone_management": false, 00:04:35.642 "zone_append": false, 00:04:35.642 "compare": false, 00:04:35.642 "compare_and_write": false, 00:04:35.642 "abort": true, 00:04:35.642 "seek_hole": false, 00:04:35.642 "seek_data": false, 00:04:35.642 "copy": true, 00:04:35.642 "nvme_iov_md": false 00:04:35.642 }, 00:04:35.642 "memory_domains": [ 00:04:35.642 { 00:04:35.642 "dma_device_id": "system", 00:04:35.642 "dma_device_type": 1 00:04:35.642 }, 00:04:35.642 { 00:04:35.642 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:35.642 "dma_device_type": 2 00:04:35.642 } 00:04:35.642 ], 00:04:35.642 "driver_specific": {} 00:04:35.642 } 00:04:35.642 ]' 00:04:35.642 13:38:49 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:35.642 13:38:49 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:35.642 13:38:49 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:35.642 13:38:49 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:35.642 13:38:49 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:35.642 13:38:49 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:35.642 13:38:49 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:35.642 13:38:49 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:35.642 13:38:49 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:35.642 13:38:49 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:35.642 13:38:49 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:35.642 13:38:49 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:35.642 ************************************ 00:04:35.642 END TEST rpc_plugins 00:04:35.642 ************************************ 00:04:35.642 13:38:49 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:35.642 00:04:35.642 real 0m0.115s 00:04:35.642 user 0m0.063s 00:04:35.642 sys 0m0.015s 00:04:35.642 13:38:49 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:35.642 13:38:49 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:35.642 13:38:49 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:35.642 13:38:49 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:35.642 13:38:49 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:35.642 13:38:49 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:35.642 ************************************ 00:04:35.642 START TEST rpc_trace_cmd_test 00:04:35.642 ************************************ 00:04:35.642 13:38:49 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:04:35.642 13:38:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:35.642 13:38:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:35.642 13:38:49 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:35.642 13:38:49 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:35.642 13:38:49 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:35.642 13:38:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:35.642 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid57128", 00:04:35.642 "tpoint_group_mask": "0x8", 00:04:35.642 "iscsi_conn": { 00:04:35.642 "mask": "0x2", 00:04:35.642 "tpoint_mask": "0x0" 00:04:35.642 }, 00:04:35.642 "scsi": { 00:04:35.642 
"mask": "0x4", 00:04:35.642 "tpoint_mask": "0x0" 00:04:35.642 }, 00:04:35.642 "bdev": { 00:04:35.642 "mask": "0x8", 00:04:35.642 "tpoint_mask": "0xffffffffffffffff" 00:04:35.642 }, 00:04:35.642 "nvmf_rdma": { 00:04:35.642 "mask": "0x10", 00:04:35.642 "tpoint_mask": "0x0" 00:04:35.642 }, 00:04:35.642 "nvmf_tcp": { 00:04:35.642 "mask": "0x20", 00:04:35.642 "tpoint_mask": "0x0" 00:04:35.642 }, 00:04:35.642 "ftl": { 00:04:35.642 "mask": "0x40", 00:04:35.642 "tpoint_mask": "0x0" 00:04:35.642 }, 00:04:35.642 "blobfs": { 00:04:35.642 "mask": "0x80", 00:04:35.642 "tpoint_mask": "0x0" 00:04:35.642 }, 00:04:35.642 "dsa": { 00:04:35.642 "mask": "0x200", 00:04:35.642 "tpoint_mask": "0x0" 00:04:35.642 }, 00:04:35.642 "thread": { 00:04:35.642 "mask": "0x400", 00:04:35.642 "tpoint_mask": "0x0" 00:04:35.642 }, 00:04:35.642 "nvme_pcie": { 00:04:35.642 "mask": "0x800", 00:04:35.642 "tpoint_mask": "0x0" 00:04:35.642 }, 00:04:35.642 "iaa": { 00:04:35.642 "mask": "0x1000", 00:04:35.642 "tpoint_mask": "0x0" 00:04:35.643 }, 00:04:35.643 "nvme_tcp": { 00:04:35.643 "mask": "0x2000", 00:04:35.643 "tpoint_mask": "0x0" 00:04:35.643 }, 00:04:35.643 "bdev_nvme": { 00:04:35.643 "mask": "0x4000", 00:04:35.643 "tpoint_mask": "0x0" 00:04:35.643 }, 00:04:35.643 "sock": { 00:04:35.643 "mask": "0x8000", 00:04:35.643 "tpoint_mask": "0x0" 00:04:35.643 }, 00:04:35.643 "blob": { 00:04:35.643 "mask": "0x10000", 00:04:35.643 "tpoint_mask": "0x0" 00:04:35.643 }, 00:04:35.643 "bdev_raid": { 00:04:35.643 "mask": "0x20000", 00:04:35.643 "tpoint_mask": "0x0" 00:04:35.643 }, 00:04:35.643 "scheduler": { 00:04:35.643 "mask": "0x40000", 00:04:35.643 "tpoint_mask": "0x0" 00:04:35.643 } 00:04:35.643 }' 00:04:35.643 13:38:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:35.951 13:38:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:35.951 13:38:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:35.951 13:38:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:35.951 13:38:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:35.951 13:38:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:35.951 13:38:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:35.951 13:38:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:35.951 13:38:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:35.951 ************************************ 00:04:35.951 END TEST rpc_trace_cmd_test 00:04:35.951 ************************************ 00:04:35.951 13:38:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:35.951 00:04:35.951 real 0m0.184s 00:04:35.951 user 0m0.142s 00:04:35.951 sys 0m0.030s 00:04:35.951 13:38:49 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:35.951 13:38:49 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:35.951 13:38:49 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:35.951 13:38:49 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:35.951 13:38:49 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:35.951 13:38:49 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:35.951 13:38:49 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:35.951 13:38:49 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:35.951 ************************************ 00:04:35.951 START TEST rpc_daemon_integrity 00:04:35.951 
************************************ 00:04:35.951 13:38:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:04:35.951 13:38:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:35.951 13:38:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:35.951 13:38:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:35.952 13:38:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:35.952 13:38:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:35.952 13:38:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:35.952 13:38:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:35.952 13:38:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:35.952 13:38:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:35.952 13:38:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:35.952 13:38:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:35.952 13:38:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:35.952 13:38:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:35.952 13:38:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:35.952 13:38:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:36.213 13:38:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:36.213 13:38:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:36.213 { 00:04:36.213 "name": "Malloc2", 00:04:36.213 "aliases": [ 00:04:36.213 "4f9ee02a-9a18-4dd0-a80f-60b525cdf84e" 00:04:36.213 ], 00:04:36.213 "product_name": "Malloc disk", 00:04:36.213 "block_size": 512, 00:04:36.213 "num_blocks": 16384, 00:04:36.213 "uuid": "4f9ee02a-9a18-4dd0-a80f-60b525cdf84e", 00:04:36.213 "assigned_rate_limits": { 00:04:36.213 "rw_ios_per_sec": 0, 00:04:36.213 "rw_mbytes_per_sec": 0, 00:04:36.213 "r_mbytes_per_sec": 0, 00:04:36.213 "w_mbytes_per_sec": 0 00:04:36.213 }, 00:04:36.213 "claimed": false, 00:04:36.213 "zoned": false, 00:04:36.213 "supported_io_types": { 00:04:36.213 "read": true, 00:04:36.213 "write": true, 00:04:36.213 "unmap": true, 00:04:36.213 "flush": true, 00:04:36.213 "reset": true, 00:04:36.213 "nvme_admin": false, 00:04:36.213 "nvme_io": false, 00:04:36.213 "nvme_io_md": false, 00:04:36.213 "write_zeroes": true, 00:04:36.213 "zcopy": true, 00:04:36.213 "get_zone_info": false, 00:04:36.213 "zone_management": false, 00:04:36.213 "zone_append": false, 00:04:36.213 "compare": false, 00:04:36.213 "compare_and_write": false, 00:04:36.213 "abort": true, 00:04:36.213 "seek_hole": false, 00:04:36.213 "seek_data": false, 00:04:36.213 "copy": true, 00:04:36.213 "nvme_iov_md": false 00:04:36.213 }, 00:04:36.213 "memory_domains": [ 00:04:36.213 { 00:04:36.213 "dma_device_id": "system", 00:04:36.213 "dma_device_type": 1 00:04:36.213 }, 00:04:36.213 { 00:04:36.213 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:36.213 "dma_device_type": 2 00:04:36.213 } 00:04:36.213 ], 00:04:36.213 "driver_specific": {} 00:04:36.213 } 00:04:36.213 ]' 00:04:36.213 13:38:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:36.213 13:38:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:36.213 13:38:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd 
bdev_passthru_create -b Malloc2 -p Passthru0 00:04:36.213 13:38:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:36.213 13:38:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:36.213 [2024-10-15 13:38:49.758629] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:36.213 [2024-10-15 13:38:49.758704] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:36.213 [2024-10-15 13:38:49.758729] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:04:36.213 [2024-10-15 13:38:49.758742] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:36.213 [2024-10-15 13:38:49.761202] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:36.213 [2024-10-15 13:38:49.761273] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:36.213 Passthru0 00:04:36.213 13:38:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:36.213 13:38:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:36.213 13:38:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:36.213 13:38:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:36.213 13:38:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:36.213 13:38:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:36.213 { 00:04:36.213 "name": "Malloc2", 00:04:36.213 "aliases": [ 00:04:36.213 "4f9ee02a-9a18-4dd0-a80f-60b525cdf84e" 00:04:36.213 ], 00:04:36.213 "product_name": "Malloc disk", 00:04:36.213 "block_size": 512, 00:04:36.213 "num_blocks": 16384, 00:04:36.213 "uuid": "4f9ee02a-9a18-4dd0-a80f-60b525cdf84e", 00:04:36.213 "assigned_rate_limits": { 00:04:36.213 "rw_ios_per_sec": 0, 00:04:36.213 "rw_mbytes_per_sec": 0, 00:04:36.213 "r_mbytes_per_sec": 0, 00:04:36.213 "w_mbytes_per_sec": 0 00:04:36.213 }, 00:04:36.213 "claimed": true, 00:04:36.213 "claim_type": "exclusive_write", 00:04:36.213 "zoned": false, 00:04:36.213 "supported_io_types": { 00:04:36.213 "read": true, 00:04:36.213 "write": true, 00:04:36.213 "unmap": true, 00:04:36.213 "flush": true, 00:04:36.213 "reset": true, 00:04:36.213 "nvme_admin": false, 00:04:36.213 "nvme_io": false, 00:04:36.213 "nvme_io_md": false, 00:04:36.213 "write_zeroes": true, 00:04:36.213 "zcopy": true, 00:04:36.213 "get_zone_info": false, 00:04:36.213 "zone_management": false, 00:04:36.213 "zone_append": false, 00:04:36.213 "compare": false, 00:04:36.214 "compare_and_write": false, 00:04:36.214 "abort": true, 00:04:36.214 "seek_hole": false, 00:04:36.214 "seek_data": false, 00:04:36.214 "copy": true, 00:04:36.214 "nvme_iov_md": false 00:04:36.214 }, 00:04:36.214 "memory_domains": [ 00:04:36.214 { 00:04:36.214 "dma_device_id": "system", 00:04:36.214 "dma_device_type": 1 00:04:36.214 }, 00:04:36.214 { 00:04:36.214 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:36.214 "dma_device_type": 2 00:04:36.214 } 00:04:36.214 ], 00:04:36.214 "driver_specific": {} 00:04:36.214 }, 00:04:36.214 { 00:04:36.214 "name": "Passthru0", 00:04:36.214 "aliases": [ 00:04:36.214 "a4530cd4-12de-5f4c-b20c-e63569f320b4" 00:04:36.214 ], 00:04:36.214 "product_name": "passthru", 00:04:36.214 "block_size": 512, 00:04:36.214 "num_blocks": 16384, 00:04:36.214 "uuid": "a4530cd4-12de-5f4c-b20c-e63569f320b4", 00:04:36.214 "assigned_rate_limits": { 00:04:36.214 
"rw_ios_per_sec": 0, 00:04:36.214 "rw_mbytes_per_sec": 0, 00:04:36.214 "r_mbytes_per_sec": 0, 00:04:36.214 "w_mbytes_per_sec": 0 00:04:36.214 }, 00:04:36.214 "claimed": false, 00:04:36.214 "zoned": false, 00:04:36.214 "supported_io_types": { 00:04:36.214 "read": true, 00:04:36.214 "write": true, 00:04:36.214 "unmap": true, 00:04:36.214 "flush": true, 00:04:36.214 "reset": true, 00:04:36.214 "nvme_admin": false, 00:04:36.214 "nvme_io": false, 00:04:36.214 "nvme_io_md": false, 00:04:36.214 "write_zeroes": true, 00:04:36.214 "zcopy": true, 00:04:36.214 "get_zone_info": false, 00:04:36.214 "zone_management": false, 00:04:36.214 "zone_append": false, 00:04:36.214 "compare": false, 00:04:36.214 "compare_and_write": false, 00:04:36.214 "abort": true, 00:04:36.214 "seek_hole": false, 00:04:36.214 "seek_data": false, 00:04:36.214 "copy": true, 00:04:36.214 "nvme_iov_md": false 00:04:36.214 }, 00:04:36.214 "memory_domains": [ 00:04:36.214 { 00:04:36.214 "dma_device_id": "system", 00:04:36.214 "dma_device_type": 1 00:04:36.214 }, 00:04:36.214 { 00:04:36.214 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:36.214 "dma_device_type": 2 00:04:36.214 } 00:04:36.214 ], 00:04:36.214 "driver_specific": { 00:04:36.214 "passthru": { 00:04:36.214 "name": "Passthru0", 00:04:36.214 "base_bdev_name": "Malloc2" 00:04:36.214 } 00:04:36.214 } 00:04:36.214 } 00:04:36.214 ]' 00:04:36.214 13:38:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:36.214 13:38:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:36.214 13:38:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:36.214 13:38:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:36.214 13:38:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:36.214 13:38:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:36.214 13:38:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:36.214 13:38:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:36.214 13:38:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:36.214 13:38:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:36.214 13:38:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:36.214 13:38:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:36.214 13:38:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:36.214 13:38:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:36.214 13:38:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:36.214 13:38:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:36.214 ************************************ 00:04:36.214 END TEST rpc_daemon_integrity 00:04:36.214 ************************************ 00:04:36.214 13:38:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:36.214 00:04:36.214 real 0m0.245s 00:04:36.214 user 0m0.126s 00:04:36.214 sys 0m0.035s 00:04:36.214 13:38:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:36.214 13:38:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:36.214 13:38:49 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:36.214 13:38:49 rpc -- rpc/rpc.sh@84 -- # killprocess 57128 00:04:36.214 13:38:49 rpc -- 
common/autotest_common.sh@950 -- # '[' -z 57128 ']' 00:04:36.214 13:38:49 rpc -- common/autotest_common.sh@954 -- # kill -0 57128 00:04:36.214 13:38:49 rpc -- common/autotest_common.sh@955 -- # uname 00:04:36.214 13:38:49 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:36.214 13:38:49 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57128 00:04:36.214 killing process with pid 57128 00:04:36.214 13:38:49 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:36.214 13:38:49 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:36.214 13:38:49 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57128' 00:04:36.214 13:38:49 rpc -- common/autotest_common.sh@969 -- # kill 57128 00:04:36.214 13:38:49 rpc -- common/autotest_common.sh@974 -- # wait 57128 00:04:38.129 00:04:38.129 real 0m3.786s 00:04:38.129 user 0m4.110s 00:04:38.129 sys 0m0.737s 00:04:38.129 13:38:51 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:38.129 13:38:51 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:38.129 ************************************ 00:04:38.129 END TEST rpc 00:04:38.129 ************************************ 00:04:38.129 13:38:51 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:38.129 13:38:51 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:38.129 13:38:51 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:38.129 13:38:51 -- common/autotest_common.sh@10 -- # set +x 00:04:38.129 ************************************ 00:04:38.129 START TEST skip_rpc 00:04:38.129 ************************************ 00:04:38.129 13:38:51 skip_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:38.129 * Looking for test storage... 00:04:38.129 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:38.129 13:38:51 skip_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:38.129 13:38:51 skip_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:04:38.129 13:38:51 skip_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:38.129 13:38:51 skip_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:38.129 13:38:51 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:38.129 13:38:51 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:38.129 13:38:51 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:38.129 13:38:51 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:38.129 13:38:51 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:38.129 13:38:51 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:38.129 13:38:51 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:38.129 13:38:51 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:38.129 13:38:51 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:38.129 13:38:51 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:38.129 13:38:51 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:38.129 13:38:51 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:38.129 13:38:51 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:38.129 13:38:51 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:38.129 13:38:51 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:38.129 13:38:51 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:38.129 13:38:51 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:38.129 13:38:51 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:38.129 13:38:51 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:38.129 13:38:51 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:38.129 13:38:51 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:38.129 13:38:51 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:38.129 13:38:51 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:38.129 13:38:51 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:38.129 13:38:51 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:38.129 13:38:51 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:38.129 13:38:51 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:38.129 13:38:51 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:38.129 13:38:51 skip_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:38.129 13:38:51 skip_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:38.129 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.129 --rc genhtml_branch_coverage=1 00:04:38.129 --rc genhtml_function_coverage=1 00:04:38.129 --rc genhtml_legend=1 00:04:38.129 --rc geninfo_all_blocks=1 00:04:38.129 --rc geninfo_unexecuted_blocks=1 00:04:38.129 00:04:38.129 ' 00:04:38.129 13:38:51 skip_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:38.129 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.129 --rc genhtml_branch_coverage=1 00:04:38.129 --rc genhtml_function_coverage=1 00:04:38.129 --rc genhtml_legend=1 00:04:38.129 --rc geninfo_all_blocks=1 00:04:38.129 --rc geninfo_unexecuted_blocks=1 00:04:38.129 00:04:38.129 ' 00:04:38.129 13:38:51 skip_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:38.129 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.129 --rc genhtml_branch_coverage=1 00:04:38.129 --rc genhtml_function_coverage=1 00:04:38.129 --rc genhtml_legend=1 00:04:38.129 --rc geninfo_all_blocks=1 00:04:38.129 --rc geninfo_unexecuted_blocks=1 00:04:38.129 00:04:38.129 ' 00:04:38.129 13:38:51 skip_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:38.129 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.130 --rc genhtml_branch_coverage=1 00:04:38.130 --rc genhtml_function_coverage=1 00:04:38.130 --rc genhtml_legend=1 00:04:38.130 --rc geninfo_all_blocks=1 00:04:38.130 --rc geninfo_unexecuted_blocks=1 00:04:38.130 00:04:38.130 ' 00:04:38.130 13:38:51 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:38.130 13:38:51 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:38.130 13:38:51 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:38.130 13:38:51 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:38.130 13:38:51 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:38.130 13:38:51 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:38.130 ************************************ 00:04:38.130 START TEST skip_rpc 00:04:38.130 ************************************ 00:04:38.130 13:38:51 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:04:38.130 13:38:51 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@16 -- # local spdk_pid=57341 00:04:38.130 13:38:51 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:38.130 13:38:51 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:38.130 13:38:51 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:38.130 [2024-10-15 13:38:51.774396] Starting SPDK v25.01-pre git sha1 5a8c76d99 / DPDK 24.03.0 initialization... 00:04:38.130 [2024-10-15 13:38:51.774545] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57341 ] 00:04:38.390 [2024-10-15 13:38:51.928237] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:38.390 [2024-10-15 13:38:52.050183] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:43.676 13:38:56 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:43.676 13:38:56 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:04:43.676 13:38:56 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:43.676 13:38:56 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:04:43.676 13:38:56 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:43.676 13:38:56 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:04:43.676 13:38:56 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:43.676 13:38:56 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:04:43.676 13:38:56 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:43.676 13:38:56 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:43.676 13:38:56 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:43.676 13:38:56 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:04:43.676 13:38:56 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:43.676 13:38:56 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:43.676 13:38:56 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:43.676 13:38:56 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:43.676 13:38:56 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 57341 00:04:43.676 13:38:56 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 57341 ']' 00:04:43.676 13:38:56 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 57341 00:04:43.676 13:38:56 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:04:43.676 13:38:56 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:43.676 13:38:56 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57341 00:04:43.676 killing process with pid 57341 00:04:43.676 13:38:56 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:43.676 13:38:56 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:43.676 13:38:56 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57341' 00:04:43.676 13:38:56 skip_rpc.skip_rpc -- common/autotest_common.sh@969 
-- # kill 57341 00:04:43.676 13:38:56 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 57341 00:04:44.247 00:04:44.247 real 0m6.201s 00:04:44.247 user 0m5.740s 00:04:44.247 sys 0m0.350s 00:04:44.247 ************************************ 00:04:44.247 END TEST skip_rpc 00:04:44.247 ************************************ 00:04:44.247 13:38:57 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:44.247 13:38:57 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:44.247 13:38:57 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:44.247 13:38:57 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:44.247 13:38:57 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:44.247 13:38:57 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:44.247 ************************************ 00:04:44.247 START TEST skip_rpc_with_json 00:04:44.247 ************************************ 00:04:44.248 13:38:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:04:44.248 13:38:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:44.248 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:44.248 13:38:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=57439 00:04:44.248 13:38:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:44.248 13:38:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 57439 00:04:44.248 13:38:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:44.248 13:38:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 57439 ']' 00:04:44.248 13:38:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:44.248 13:38:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:44.248 13:38:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:44.248 13:38:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:44.248 13:38:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:44.248 [2024-10-15 13:38:58.019381] Starting SPDK v25.01-pre git sha1 5a8c76d99 / DPDK 24.03.0 initialization... 
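The waitforlisten call traced just above does not sleep for a fixed interval; it polls the target's RPC socket until a trivial call answers, using the rpc_addr and max_retries locals visible in the trace. A minimal sketch of that helper, reconstructed from those xtrace lines (the canonical version lives in common/autotest_common.sh and may differ in detail):

waitforlisten() {
    local pid=$1
    local rpc_addr=${2:-/var/tmp/spdk.sock}
    local max_retries=100 i
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    for ((i = 0; i < max_retries; i++)); do
        # give up early if the target died before it ever listened
        kill -0 "$pid" 2> /dev/null || return 1
        # the socket is usable once any trivial RPC succeeds
        if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 1 -s "$rpc_addr" rpc_get_methods &> /dev/null; then
            return 0
        fi
        sleep 0.1
    done
    return 1
}

The skip_rpc test that just finished could not use this helper at all: with --no-rpc-server there is no socket to poll, which is why its trace shows a bare sleep 5 instead.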
00:04:44.248 [2024-10-15 13:38:58.020051] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57439 ] 00:04:44.508 [2024-10-15 13:38:58.170074] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:44.768 [2024-10-15 13:38:58.298321] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:45.341 13:38:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:45.341 13:38:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:04:45.341 13:38:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:45.341 13:38:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:45.341 13:38:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:45.341 [2024-10-15 13:38:58.990311] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:45.341 request: 00:04:45.341 { 00:04:45.341 "trtype": "tcp", 00:04:45.341 "method": "nvmf_get_transports", 00:04:45.341 "req_id": 1 00:04:45.341 } 00:04:45.341 Got JSON-RPC error response 00:04:45.341 response: 00:04:45.341 { 00:04:45.341 "code": -19, 00:04:45.341 "message": "No such device" 00:04:45.341 } 00:04:45.341 13:38:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:45.341 13:38:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:45.341 13:38:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:45.341 13:38:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:45.341 [2024-10-15 13:38:59.002418] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:45.341 13:38:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:45.341 13:38:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:45.341 13:38:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:45.341 13:38:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:45.603 13:38:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:45.603 13:38:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:45.603 { 00:04:45.603 "subsystems": [ 00:04:45.603 { 00:04:45.603 "subsystem": "fsdev", 00:04:45.603 "config": [ 00:04:45.603 { 00:04:45.603 "method": "fsdev_set_opts", 00:04:45.603 "params": { 00:04:45.603 "fsdev_io_pool_size": 65535, 00:04:45.603 "fsdev_io_cache_size": 256 00:04:45.603 } 00:04:45.603 } 00:04:45.603 ] 00:04:45.603 }, 00:04:45.603 { 00:04:45.603 "subsystem": "keyring", 00:04:45.603 "config": [] 00:04:45.603 }, 00:04:45.603 { 00:04:45.603 "subsystem": "iobuf", 00:04:45.603 "config": [ 00:04:45.603 { 00:04:45.603 "method": "iobuf_set_options", 00:04:45.603 "params": { 00:04:45.603 "small_pool_count": 8192, 00:04:45.603 "large_pool_count": 1024, 00:04:45.603 "small_bufsize": 8192, 00:04:45.603 "large_bufsize": 135168 00:04:45.603 } 00:04:45.603 } 00:04:45.603 ] 00:04:45.603 }, 00:04:45.603 { 00:04:45.603 "subsystem": "sock", 00:04:45.603 "config": [ 00:04:45.603 { 00:04:45.603 "method": 
"sock_set_default_impl", 00:04:45.603 "params": { 00:04:45.603 "impl_name": "posix" 00:04:45.603 } 00:04:45.603 }, 00:04:45.603 { 00:04:45.603 "method": "sock_impl_set_options", 00:04:45.603 "params": { 00:04:45.603 "impl_name": "ssl", 00:04:45.603 "recv_buf_size": 4096, 00:04:45.603 "send_buf_size": 4096, 00:04:45.603 "enable_recv_pipe": true, 00:04:45.603 "enable_quickack": false, 00:04:45.603 "enable_placement_id": 0, 00:04:45.603 "enable_zerocopy_send_server": true, 00:04:45.603 "enable_zerocopy_send_client": false, 00:04:45.603 "zerocopy_threshold": 0, 00:04:45.603 "tls_version": 0, 00:04:45.603 "enable_ktls": false 00:04:45.603 } 00:04:45.603 }, 00:04:45.603 { 00:04:45.603 "method": "sock_impl_set_options", 00:04:45.603 "params": { 00:04:45.603 "impl_name": "posix", 00:04:45.603 "recv_buf_size": 2097152, 00:04:45.603 "send_buf_size": 2097152, 00:04:45.603 "enable_recv_pipe": true, 00:04:45.603 "enable_quickack": false, 00:04:45.603 "enable_placement_id": 0, 00:04:45.603 "enable_zerocopy_send_server": true, 00:04:45.603 "enable_zerocopy_send_client": false, 00:04:45.603 "zerocopy_threshold": 0, 00:04:45.603 "tls_version": 0, 00:04:45.603 "enable_ktls": false 00:04:45.603 } 00:04:45.603 } 00:04:45.603 ] 00:04:45.603 }, 00:04:45.603 { 00:04:45.603 "subsystem": "vmd", 00:04:45.603 "config": [] 00:04:45.603 }, 00:04:45.603 { 00:04:45.603 "subsystem": "accel", 00:04:45.603 "config": [ 00:04:45.603 { 00:04:45.603 "method": "accel_set_options", 00:04:45.603 "params": { 00:04:45.603 "small_cache_size": 128, 00:04:45.603 "large_cache_size": 16, 00:04:45.603 "task_count": 2048, 00:04:45.603 "sequence_count": 2048, 00:04:45.603 "buf_count": 2048 00:04:45.603 } 00:04:45.603 } 00:04:45.603 ] 00:04:45.603 }, 00:04:45.603 { 00:04:45.603 "subsystem": "bdev", 00:04:45.603 "config": [ 00:04:45.603 { 00:04:45.603 "method": "bdev_set_options", 00:04:45.603 "params": { 00:04:45.603 "bdev_io_pool_size": 65535, 00:04:45.603 "bdev_io_cache_size": 256, 00:04:45.603 "bdev_auto_examine": true, 00:04:45.603 "iobuf_small_cache_size": 128, 00:04:45.603 "iobuf_large_cache_size": 16 00:04:45.603 } 00:04:45.603 }, 00:04:45.603 { 00:04:45.603 "method": "bdev_raid_set_options", 00:04:45.603 "params": { 00:04:45.603 "process_window_size_kb": 1024, 00:04:45.603 "process_max_bandwidth_mb_sec": 0 00:04:45.603 } 00:04:45.603 }, 00:04:45.603 { 00:04:45.603 "method": "bdev_iscsi_set_options", 00:04:45.603 "params": { 00:04:45.603 "timeout_sec": 30 00:04:45.603 } 00:04:45.603 }, 00:04:45.603 { 00:04:45.603 "method": "bdev_nvme_set_options", 00:04:45.603 "params": { 00:04:45.603 "action_on_timeout": "none", 00:04:45.603 "timeout_us": 0, 00:04:45.603 "timeout_admin_us": 0, 00:04:45.603 "keep_alive_timeout_ms": 10000, 00:04:45.603 "arbitration_burst": 0, 00:04:45.603 "low_priority_weight": 0, 00:04:45.603 "medium_priority_weight": 0, 00:04:45.603 "high_priority_weight": 0, 00:04:45.603 "nvme_adminq_poll_period_us": 10000, 00:04:45.603 "nvme_ioq_poll_period_us": 0, 00:04:45.603 "io_queue_requests": 0, 00:04:45.603 "delay_cmd_submit": true, 00:04:45.603 "transport_retry_count": 4, 00:04:45.603 "bdev_retry_count": 3, 00:04:45.603 "transport_ack_timeout": 0, 00:04:45.603 "ctrlr_loss_timeout_sec": 0, 00:04:45.603 "reconnect_delay_sec": 0, 00:04:45.603 "fast_io_fail_timeout_sec": 0, 00:04:45.603 "disable_auto_failback": false, 00:04:45.603 "generate_uuids": false, 00:04:45.603 "transport_tos": 0, 00:04:45.603 "nvme_error_stat": false, 00:04:45.603 "rdma_srq_size": 0, 00:04:45.604 "io_path_stat": false, 00:04:45.604 
"allow_accel_sequence": false, 00:04:45.604 "rdma_max_cq_size": 0, 00:04:45.604 "rdma_cm_event_timeout_ms": 0, 00:04:45.604 "dhchap_digests": [ 00:04:45.604 "sha256", 00:04:45.604 "sha384", 00:04:45.604 "sha512" 00:04:45.604 ], 00:04:45.604 "dhchap_dhgroups": [ 00:04:45.604 "null", 00:04:45.604 "ffdhe2048", 00:04:45.604 "ffdhe3072", 00:04:45.604 "ffdhe4096", 00:04:45.604 "ffdhe6144", 00:04:45.604 "ffdhe8192" 00:04:45.604 ] 00:04:45.604 } 00:04:45.604 }, 00:04:45.604 { 00:04:45.604 "method": "bdev_nvme_set_hotplug", 00:04:45.604 "params": { 00:04:45.604 "period_us": 100000, 00:04:45.604 "enable": false 00:04:45.604 } 00:04:45.604 }, 00:04:45.604 { 00:04:45.604 "method": "bdev_wait_for_examine" 00:04:45.604 } 00:04:45.604 ] 00:04:45.604 }, 00:04:45.604 { 00:04:45.604 "subsystem": "scsi", 00:04:45.604 "config": null 00:04:45.604 }, 00:04:45.604 { 00:04:45.604 "subsystem": "scheduler", 00:04:45.604 "config": [ 00:04:45.604 { 00:04:45.604 "method": "framework_set_scheduler", 00:04:45.604 "params": { 00:04:45.604 "name": "static" 00:04:45.604 } 00:04:45.604 } 00:04:45.604 ] 00:04:45.604 }, 00:04:45.604 { 00:04:45.604 "subsystem": "vhost_scsi", 00:04:45.604 "config": [] 00:04:45.604 }, 00:04:45.604 { 00:04:45.604 "subsystem": "vhost_blk", 00:04:45.604 "config": [] 00:04:45.604 }, 00:04:45.604 { 00:04:45.604 "subsystem": "ublk", 00:04:45.604 "config": [] 00:04:45.604 }, 00:04:45.604 { 00:04:45.604 "subsystem": "nbd", 00:04:45.604 "config": [] 00:04:45.604 }, 00:04:45.604 { 00:04:45.604 "subsystem": "nvmf", 00:04:45.604 "config": [ 00:04:45.604 { 00:04:45.604 "method": "nvmf_set_config", 00:04:45.604 "params": { 00:04:45.604 "discovery_filter": "match_any", 00:04:45.604 "admin_cmd_passthru": { 00:04:45.604 "identify_ctrlr": false 00:04:45.604 }, 00:04:45.604 "dhchap_digests": [ 00:04:45.604 "sha256", 00:04:45.604 "sha384", 00:04:45.604 "sha512" 00:04:45.604 ], 00:04:45.604 "dhchap_dhgroups": [ 00:04:45.604 "null", 00:04:45.604 "ffdhe2048", 00:04:45.604 "ffdhe3072", 00:04:45.604 "ffdhe4096", 00:04:45.604 "ffdhe6144", 00:04:45.604 "ffdhe8192" 00:04:45.604 ] 00:04:45.604 } 00:04:45.604 }, 00:04:45.604 { 00:04:45.604 "method": "nvmf_set_max_subsystems", 00:04:45.604 "params": { 00:04:45.604 "max_subsystems": 1024 00:04:45.604 } 00:04:45.604 }, 00:04:45.604 { 00:04:45.604 "method": "nvmf_set_crdt", 00:04:45.604 "params": { 00:04:45.604 "crdt1": 0, 00:04:45.604 "crdt2": 0, 00:04:45.604 "crdt3": 0 00:04:45.604 } 00:04:45.604 }, 00:04:45.604 { 00:04:45.604 "method": "nvmf_create_transport", 00:04:45.604 "params": { 00:04:45.604 "trtype": "TCP", 00:04:45.604 "max_queue_depth": 128, 00:04:45.604 "max_io_qpairs_per_ctrlr": 127, 00:04:45.604 "in_capsule_data_size": 4096, 00:04:45.604 "max_io_size": 131072, 00:04:45.604 "io_unit_size": 131072, 00:04:45.604 "max_aq_depth": 128, 00:04:45.604 "num_shared_buffers": 511, 00:04:45.604 "buf_cache_size": 4294967295, 00:04:45.604 "dif_insert_or_strip": false, 00:04:45.604 "zcopy": false, 00:04:45.604 "c2h_success": true, 00:04:45.604 "sock_priority": 0, 00:04:45.604 "abort_timeout_sec": 1, 00:04:45.604 "ack_timeout": 0, 00:04:45.604 "data_wr_pool_size": 0 00:04:45.604 } 00:04:45.604 } 00:04:45.604 ] 00:04:45.604 }, 00:04:45.604 { 00:04:45.604 "subsystem": "iscsi", 00:04:45.604 "config": [ 00:04:45.604 { 00:04:45.604 "method": "iscsi_set_options", 00:04:45.604 "params": { 00:04:45.604 "node_base": "iqn.2016-06.io.spdk", 00:04:45.604 "max_sessions": 128, 00:04:45.604 "max_connections_per_session": 2, 00:04:45.604 "max_queue_depth": 64, 00:04:45.604 "default_time2wait": 2, 
00:04:45.604 "default_time2retain": 20, 00:04:45.604 "first_burst_length": 8192, 00:04:45.604 "immediate_data": true, 00:04:45.604 "allow_duplicated_isid": false, 00:04:45.604 "error_recovery_level": 0, 00:04:45.604 "nop_timeout": 60, 00:04:45.604 "nop_in_interval": 30, 00:04:45.604 "disable_chap": false, 00:04:45.604 "require_chap": false, 00:04:45.604 "mutual_chap": false, 00:04:45.604 "chap_group": 0, 00:04:45.604 "max_large_datain_per_connection": 64, 00:04:45.604 "max_r2t_per_connection": 4, 00:04:45.604 "pdu_pool_size": 36864, 00:04:45.604 "immediate_data_pool_size": 16384, 00:04:45.604 "data_out_pool_size": 2048 00:04:45.604 } 00:04:45.604 } 00:04:45.604 ] 00:04:45.604 } 00:04:45.604 ] 00:04:45.604 } 00:04:45.604 13:38:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:45.604 13:38:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 57439 00:04:45.604 13:38:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 57439 ']' 00:04:45.604 13:38:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 57439 00:04:45.604 13:38:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:04:45.604 13:38:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:45.604 13:38:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57439 00:04:45.604 killing process with pid 57439 00:04:45.604 13:38:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:45.604 13:38:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:45.604 13:38:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57439' 00:04:45.604 13:38:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 57439 00:04:45.604 13:38:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 57439 00:04:46.990 13:39:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57479 00:04:46.990 13:39:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:46.990 13:39:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:52.319 13:39:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57479 00:04:52.320 13:39:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 57479 ']' 00:04:52.320 13:39:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 57479 00:04:52.320 13:39:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:04:52.320 13:39:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:52.320 13:39:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57479 00:04:52.320 killing process with pid 57479 00:04:52.320 13:39:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:52.320 13:39:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:52.320 13:39:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57479' 00:04:52.320 13:39:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 57479 
00:04:52.320 13:39:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 57479 00:04:53.261 13:39:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:53.261 13:39:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:53.261 ************************************ 00:04:53.261 END TEST skip_rpc_with_json 00:04:53.261 ************************************ 00:04:53.261 00:04:53.261 real 0m8.817s 00:04:53.261 user 0m8.233s 00:04:53.261 sys 0m0.810s 00:04:53.261 13:39:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:53.261 13:39:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:53.261 13:39:06 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:53.261 13:39:06 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:53.261 13:39:06 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:53.261 13:39:06 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:53.261 ************************************ 00:04:53.261 START TEST skip_rpc_with_delay 00:04:53.261 ************************************ 00:04:53.261 13:39:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:04:53.261 13:39:06 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:53.261 13:39:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:04:53.261 13:39:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:53.261 13:39:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:53.261 13:39:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:53.261 13:39:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:53.261 13:39:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:53.261 13:39:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:53.261 13:39:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:53.261 13:39:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:53.261 13:39:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:53.261 13:39:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:53.261 [2024-10-15 13:39:06.893448] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
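The *ERROR* line just above is the expected outcome: skip_rpc_with_delay asserts that spdk_tgt rejects --wait-for-rpc when --no-rpc-server means no RPC server will ever start. The NOT wrapper that turns such a failure into a passing assertion is traced in full in this log (local es=0, the valid_exec_arg type check, then the es > 128 screen); a simplified sketch:

NOT() {
    local es=0
    "$@" || es=$?
    # the real helper also screens out signal deaths (es > 128) and can match an
    # expected error pattern; this sketch only demands a plain non-zero exit
    (( es != 0 ))
}

# usage mirroring the trace above:
NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc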
00:04:53.261 ************************************ 00:04:53.261 END TEST skip_rpc_with_delay 00:04:53.261 ************************************ 00:04:53.261 13:39:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:04:53.261 13:39:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:53.261 13:39:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:53.261 13:39:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:53.261 00:04:53.261 real 0m0.134s 00:04:53.261 user 0m0.073s 00:04:53.261 sys 0m0.059s 00:04:53.261 13:39:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:53.261 13:39:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:53.261 13:39:06 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:53.261 13:39:06 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:53.261 13:39:06 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:53.261 13:39:06 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:53.261 13:39:06 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:53.261 13:39:06 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:53.261 ************************************ 00:04:53.261 START TEST exit_on_failed_rpc_init 00:04:53.261 ************************************ 00:04:53.261 13:39:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:04:53.261 13:39:07 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57596 00:04:53.261 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:53.261 13:39:07 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57596 00:04:53.261 13:39:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 57596 ']' 00:04:53.261 13:39:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:53.261 13:39:07 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:53.261 13:39:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:53.261 13:39:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:53.261 13:39:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:53.261 13:39:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:53.522 [2024-10-15 13:39:07.096474] Starting SPDK v25.01-pre git sha1 5a8c76d99 / DPDK 24.03.0 initialization... 
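exit_on_failed_rpc_init works by provoking an RPC bind conflict. The first spdk_tgt (pid 57596 here) takes the default /var/tmp/spdk.sock; the test then launches a second target on another core mask but the same socket and requires it to exit non-zero. Reduced to its essentials (a sketch of the scenario, not the verbatim test/rpc/skip_rpc.sh):

# first target owns the default RPC socket
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &
spdk_pid=$!
waitforlisten "$spdk_pid"
# second target must fail to bring up its RPC server on the busy socket
NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2
killprocess "$spdk_pid"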
00:04:53.523 [2024-10-15 13:39:07.097022] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57596 ] 00:04:53.523 [2024-10-15 13:39:07.248342] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:53.783 [2024-10-15 13:39:07.368025] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:54.354 13:39:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:54.354 13:39:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:04:54.354 13:39:08 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:54.354 13:39:08 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:54.354 13:39:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:04:54.354 13:39:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:54.354 13:39:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:54.354 13:39:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:54.354 13:39:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:54.354 13:39:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:54.354 13:39:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:54.354 13:39:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:54.354 13:39:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:54.354 13:39:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:54.354 13:39:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:54.614 [2024-10-15 13:39:08.145773] Starting SPDK v25.01-pre git sha1 5a8c76d99 / DPDK 24.03.0 initialization... 00:04:54.614 [2024-10-15 13:39:08.146528] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57614 ] 00:04:54.614 [2024-10-15 13:39:08.296332] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:54.875 [2024-10-15 13:39:08.418033] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:54.875 [2024-10-15 13:39:08.418131] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
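"Specify another" in the error above refers to the -r option: two targets can coexist when each is given its own RPC socket, which is how the json_config_extra_key suite later runs on /var/tmp/spdk_tgt.sock. Illustrative only (the socket path below is made up):

/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 -r /var/tmp/spdk2.sock &
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk2.sock rpc_get_methods

Here, though, the collision is the point, so the suite feeds the non-zero exit through its es bookkeeping (es=234 collapses to 106 and finally 1, as traced next) before tearing the first target down.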
00:04:54.875 [2024-10-15 13:39:08.418145] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:54.875 [2024-10-15 13:39:08.418161] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:54.875 13:39:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:04:54.875 13:39:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:54.875 13:39:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:04:54.875 13:39:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:04:54.875 13:39:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:04:54.875 13:39:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:54.875 13:39:08 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:54.875 13:39:08 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57596 00:04:54.875 13:39:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 57596 ']' 00:04:54.875 13:39:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 57596 00:04:54.875 13:39:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:04:54.875 13:39:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:54.875 13:39:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57596 00:04:54.875 killing process with pid 57596 00:04:54.875 13:39:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:54.875 13:39:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:54.875 13:39:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57596' 00:04:54.875 13:39:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 57596 00:04:54.875 13:39:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 57596 00:04:56.257 ************************************ 00:04:56.257 END TEST exit_on_failed_rpc_init 00:04:56.257 ************************************ 00:04:56.257 00:04:56.257 real 0m2.892s 00:04:56.257 user 0m3.106s 00:04:56.257 sys 0m0.562s 00:04:56.257 13:39:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:56.257 13:39:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:56.257 13:39:09 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:56.257 ************************************ 00:04:56.257 END TEST skip_rpc 00:04:56.257 ************************************ 00:04:56.257 00:04:56.257 real 0m18.422s 00:04:56.257 user 0m17.290s 00:04:56.257 sys 0m1.957s 00:04:56.257 13:39:09 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:56.257 13:39:09 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:56.257 13:39:09 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:56.257 13:39:09 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:56.257 13:39:09 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:56.257 13:39:09 -- common/autotest_common.sh@10 -- # set +x 00:04:56.257 
************************************ 00:04:56.257 START TEST rpc_client 00:04:56.257 ************************************ 00:04:56.257 13:39:10 rpc_client -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:56.516 * Looking for test storage... 00:04:56.516 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:04:56.516 13:39:10 rpc_client -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:56.516 13:39:10 rpc_client -- common/autotest_common.sh@1691 -- # lcov --version 00:04:56.516 13:39:10 rpc_client -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:56.516 13:39:10 rpc_client -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:56.516 13:39:10 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:56.516 13:39:10 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:56.516 13:39:10 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:56.516 13:39:10 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:56.516 13:39:10 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:56.516 13:39:10 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:56.516 13:39:10 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:56.516 13:39:10 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:56.516 13:39:10 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:56.516 13:39:10 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:56.516 13:39:10 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:56.516 13:39:10 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:04:56.516 13:39:10 rpc_client -- scripts/common.sh@345 -- # : 1 00:04:56.516 13:39:10 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:56.516 13:39:10 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:56.516 13:39:10 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:56.516 13:39:10 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:56.516 13:39:10 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:56.516 13:39:10 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:56.516 13:39:10 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:56.516 13:39:10 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:56.516 13:39:10 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:56.516 13:39:10 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:56.516 13:39:10 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:56.516 13:39:10 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:56.516 13:39:10 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:56.516 13:39:10 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:56.516 13:39:10 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:56.516 13:39:10 rpc_client -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:56.516 13:39:10 rpc_client -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:56.516 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.516 --rc genhtml_branch_coverage=1 00:04:56.516 --rc genhtml_function_coverage=1 00:04:56.516 --rc genhtml_legend=1 00:04:56.516 --rc geninfo_all_blocks=1 00:04:56.516 --rc geninfo_unexecuted_blocks=1 00:04:56.516 00:04:56.516 ' 00:04:56.516 13:39:10 rpc_client -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:56.516 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.516 --rc genhtml_branch_coverage=1 00:04:56.516 --rc genhtml_function_coverage=1 00:04:56.516 --rc genhtml_legend=1 00:04:56.516 --rc geninfo_all_blocks=1 00:04:56.516 --rc geninfo_unexecuted_blocks=1 00:04:56.516 00:04:56.516 ' 00:04:56.516 13:39:10 rpc_client -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:56.516 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.516 --rc genhtml_branch_coverage=1 00:04:56.516 --rc genhtml_function_coverage=1 00:04:56.516 --rc genhtml_legend=1 00:04:56.516 --rc geninfo_all_blocks=1 00:04:56.516 --rc geninfo_unexecuted_blocks=1 00:04:56.516 00:04:56.516 ' 00:04:56.516 13:39:10 rpc_client -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:56.516 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.516 --rc genhtml_branch_coverage=1 00:04:56.516 --rc genhtml_function_coverage=1 00:04:56.516 --rc genhtml_legend=1 00:04:56.516 --rc geninfo_all_blocks=1 00:04:56.516 --rc geninfo_unexecuted_blocks=1 00:04:56.516 00:04:56.516 ' 00:04:56.516 13:39:10 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:04:56.516 OK 00:04:56.516 13:39:10 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:56.516 00:04:56.516 real 0m0.193s 00:04:56.516 user 0m0.103s 00:04:56.516 sys 0m0.094s 00:04:56.516 13:39:10 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:56.516 13:39:10 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:56.516 ************************************ 00:04:56.516 END TEST rpc_client 00:04:56.516 ************************************ 00:04:56.516 13:39:10 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:56.516 13:39:10 -- 
common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:56.516 13:39:10 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:56.516 13:39:10 -- common/autotest_common.sh@10 -- # set +x 00:04:56.516 ************************************ 00:04:56.516 START TEST json_config 00:04:56.516 ************************************ 00:04:56.516 13:39:10 json_config -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:56.516 13:39:10 json_config -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:56.776 13:39:10 json_config -- common/autotest_common.sh@1691 -- # lcov --version 00:04:56.776 13:39:10 json_config -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:56.776 13:39:10 json_config -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:56.776 13:39:10 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:56.776 13:39:10 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:56.776 13:39:10 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:56.776 13:39:10 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:04:56.776 13:39:10 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:04:56.776 13:39:10 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:04:56.776 13:39:10 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:04:56.776 13:39:10 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:04:56.776 13:39:10 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:04:56.776 13:39:10 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:04:56.776 13:39:10 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:56.776 13:39:10 json_config -- scripts/common.sh@344 -- # case "$op" in 00:04:56.776 13:39:10 json_config -- scripts/common.sh@345 -- # : 1 00:04:56.776 13:39:10 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:56.776 13:39:10 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:56.776 13:39:10 json_config -- scripts/common.sh@365 -- # decimal 1 00:04:56.776 13:39:10 json_config -- scripts/common.sh@353 -- # local d=1 00:04:56.776 13:39:10 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:56.776 13:39:10 json_config -- scripts/common.sh@355 -- # echo 1 00:04:56.776 13:39:10 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:04:56.776 13:39:10 json_config -- scripts/common.sh@366 -- # decimal 2 00:04:56.776 13:39:10 json_config -- scripts/common.sh@353 -- # local d=2 00:04:56.776 13:39:10 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:56.776 13:39:10 json_config -- scripts/common.sh@355 -- # echo 2 00:04:56.776 13:39:10 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:04:56.776 13:39:10 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:56.776 13:39:10 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:56.776 13:39:10 json_config -- scripts/common.sh@368 -- # return 0 00:04:56.776 13:39:10 json_config -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:56.776 13:39:10 json_config -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:56.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.776 --rc genhtml_branch_coverage=1 00:04:56.776 --rc genhtml_function_coverage=1 00:04:56.776 --rc genhtml_legend=1 00:04:56.776 --rc geninfo_all_blocks=1 00:04:56.776 --rc geninfo_unexecuted_blocks=1 00:04:56.776 00:04:56.776 ' 00:04:56.776 13:39:10 json_config -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:56.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.776 --rc genhtml_branch_coverage=1 00:04:56.776 --rc genhtml_function_coverage=1 00:04:56.776 --rc genhtml_legend=1 00:04:56.776 --rc geninfo_all_blocks=1 00:04:56.776 --rc geninfo_unexecuted_blocks=1 00:04:56.776 00:04:56.776 ' 00:04:56.776 13:39:10 json_config -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:56.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.776 --rc genhtml_branch_coverage=1 00:04:56.776 --rc genhtml_function_coverage=1 00:04:56.776 --rc genhtml_legend=1 00:04:56.776 --rc geninfo_all_blocks=1 00:04:56.776 --rc geninfo_unexecuted_blocks=1 00:04:56.776 00:04:56.776 ' 00:04:56.776 13:39:10 json_config -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:56.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.776 --rc genhtml_branch_coverage=1 00:04:56.776 --rc genhtml_function_coverage=1 00:04:56.776 --rc genhtml_legend=1 00:04:56.776 --rc geninfo_all_blocks=1 00:04:56.776 --rc geninfo_unexecuted_blocks=1 00:04:56.776 00:04:56.776 ' 00:04:56.776 13:39:10 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:56.776 13:39:10 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:56.776 13:39:10 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:56.776 13:39:10 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:56.776 13:39:10 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:56.776 13:39:10 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:56.776 13:39:10 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:56.776 13:39:10 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:56.776 13:39:10 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:56.776 13:39:10 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:56.776 13:39:10 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:56.776 13:39:10 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:56.776 13:39:10 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5cc7ab2e-d7fb-4e4a-87be-c6e45d97844f 00:04:56.776 13:39:10 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=5cc7ab2e-d7fb-4e4a-87be-c6e45d97844f 00:04:56.776 13:39:10 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:56.776 13:39:10 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:56.776 13:39:10 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:56.776 13:39:10 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:56.776 13:39:10 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:56.776 13:39:10 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:04:56.776 13:39:10 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:56.776 13:39:10 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:56.776 13:39:10 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:56.776 13:39:10 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:56.776 13:39:10 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:56.776 13:39:10 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:56.776 13:39:10 json_config -- paths/export.sh@5 -- # export PATH 00:04:56.776 13:39:10 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:56.776 13:39:10 json_config -- nvmf/common.sh@51 -- # : 0 00:04:56.776 13:39:10 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:56.776 13:39:10 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:56.776 13:39:10 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:56.776 13:39:10 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:56.776 13:39:10 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:56.776 13:39:10 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:56.776 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:56.776 13:39:10 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:56.776 13:39:10 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:56.776 13:39:10 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:56.776 13:39:10 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:56.776 13:39:10 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:56.776 13:39:10 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:56.776 13:39:10 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:56.776 13:39:10 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:56.776 WARNING: No tests are enabled so not running JSON configuration tests 00:04:56.776 13:39:10 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:04:56.776 13:39:10 json_config -- json_config/json_config.sh@28 -- # exit 0 00:04:56.776 00:04:56.776 real 0m0.138s 00:04:56.776 user 0m0.092s 00:04:56.776 sys 0m0.048s 00:04:56.776 13:39:10 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:56.776 ************************************ 00:04:56.776 13:39:10 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:56.776 END TEST json_config 00:04:56.776 ************************************ 00:04:56.776 13:39:10 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:56.776 13:39:10 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:56.776 13:39:10 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:56.776 13:39:10 -- common/autotest_common.sh@10 -- # set +x 00:04:56.776 ************************************ 00:04:56.776 START TEST json_config_extra_key 00:04:56.776 ************************************ 00:04:56.776 13:39:10 json_config_extra_key -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:56.776 13:39:10 json_config_extra_key -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:56.776 13:39:10 json_config_extra_key -- common/autotest_common.sh@1691 -- # lcov --version 00:04:56.776 13:39:10 json_config_extra_key -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:56.776 13:39:10 json_config_extra_key -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:56.776 13:39:10 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:56.777 13:39:10 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:56.777 13:39:10 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:56.777 13:39:10 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:04:56.777 13:39:10 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:04:56.777 13:39:10 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:04:56.777 13:39:10 
json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:04:56.777 13:39:10 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:04:57.036 13:39:10 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:04:57.036 13:39:10 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:04:57.036 13:39:10 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:57.036 13:39:10 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:04:57.036 13:39:10 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:04:57.036 13:39:10 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:57.036 13:39:10 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:57.036 13:39:10 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:04:57.036 13:39:10 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:04:57.036 13:39:10 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:57.036 13:39:10 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:04:57.036 13:39:10 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:04:57.036 13:39:10 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:04:57.036 13:39:10 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:04:57.036 13:39:10 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:57.036 13:39:10 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:04:57.036 13:39:10 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:04:57.036 13:39:10 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:57.036 13:39:10 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:57.036 13:39:10 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:04:57.036 13:39:10 json_config_extra_key -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:57.036 13:39:10 json_config_extra_key -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:57.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.036 --rc genhtml_branch_coverage=1 00:04:57.036 --rc genhtml_function_coverage=1 00:04:57.036 --rc genhtml_legend=1 00:04:57.036 --rc geninfo_all_blocks=1 00:04:57.036 --rc geninfo_unexecuted_blocks=1 00:04:57.036 00:04:57.036 ' 00:04:57.036 13:39:10 json_config_extra_key -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:57.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.036 --rc genhtml_branch_coverage=1 00:04:57.036 --rc genhtml_function_coverage=1 00:04:57.036 --rc genhtml_legend=1 00:04:57.036 --rc geninfo_all_blocks=1 00:04:57.036 --rc geninfo_unexecuted_blocks=1 00:04:57.036 00:04:57.036 ' 00:04:57.036 13:39:10 json_config_extra_key -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:57.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.036 --rc genhtml_branch_coverage=1 00:04:57.036 --rc genhtml_function_coverage=1 00:04:57.036 --rc genhtml_legend=1 00:04:57.036 --rc geninfo_all_blocks=1 00:04:57.036 --rc geninfo_unexecuted_blocks=1 00:04:57.036 00:04:57.036 ' 00:04:57.036 13:39:10 json_config_extra_key -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:57.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.036 --rc genhtml_branch_coverage=1 00:04:57.036 --rc 
genhtml_function_coverage=1 00:04:57.036 --rc genhtml_legend=1 00:04:57.036 --rc geninfo_all_blocks=1 00:04:57.036 --rc geninfo_unexecuted_blocks=1 00:04:57.036 00:04:57.036 ' 00:04:57.036 13:39:10 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:57.036 13:39:10 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:57.036 13:39:10 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:57.036 13:39:10 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:57.036 13:39:10 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:57.037 13:39:10 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:57.037 13:39:10 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:57.037 13:39:10 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:57.037 13:39:10 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:57.037 13:39:10 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:57.037 13:39:10 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:57.037 13:39:10 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:57.037 13:39:10 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5cc7ab2e-d7fb-4e4a-87be-c6e45d97844f 00:04:57.037 13:39:10 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=5cc7ab2e-d7fb-4e4a-87be-c6e45d97844f 00:04:57.037 13:39:10 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:57.037 13:39:10 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:57.037 13:39:10 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:57.037 13:39:10 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:57.037 13:39:10 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:57.037 13:39:10 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:04:57.037 13:39:10 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:57.037 13:39:10 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:57.037 13:39:10 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:57.037 13:39:10 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:57.037 13:39:10 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:57.037 13:39:10 json_config_extra_key -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:57.037 13:39:10 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:57.037 13:39:10 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:57.037 13:39:10 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:04:57.037 13:39:10 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:57.037 13:39:10 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:57.037 13:39:10 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:57.037 13:39:10 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:57.037 13:39:10 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:57.037 13:39:10 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:57.037 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:57.037 13:39:10 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:57.037 13:39:10 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:57.037 13:39:10 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:57.037 13:39:10 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:57.037 13:39:10 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:57.037 13:39:10 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:57.037 13:39:10 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:57.037 13:39:10 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:57.037 13:39:10 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:57.037 13:39:10 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:57.037 13:39:10 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:04:57.037 13:39:10 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:57.037 13:39:10 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:57.037 13:39:10 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:57.037 INFO: launching applications... 
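(The "integer expression expected" complaint above comes from nvmf/common.sh line 33 testing an empty string with -eq; the run tolerates it and continues.) The trace then shows test/json_config/common.sh registering the target app in a set of associative arrays before launching it. A minimal sketch of that bookkeeping pattern, using the names from the trace (the launch line is condensed from the trace that follows; the helper plumbing is omitted):

# One entry per app, keyed by app name -- this test only uses "target".
declare -A app_pid=(['target']='')
declare -A app_socket=(['target']='/var/tmp/spdk_tgt.sock')
declare -A app_params=(['target']='-m 0x1 -s 1024')
declare -A configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json')

app=target
# Launch spdk_tgt with the per-app core mask/memory, RPC socket and JSON config,
# and remember its PID so the shutdown phase can signal and poll it later.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ${app_params[$app]} \
    -r "${app_socket[$app]}" --json "${configs_path[$app]}" &
app_pid[$app]=$!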
00:04:57.037 13:39:10 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:57.037 13:39:10 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:57.037 13:39:10 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:57.037 13:39:10 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:57.037 13:39:10 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:57.037 13:39:10 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:57.037 13:39:10 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:57.037 13:39:10 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:57.037 13:39:10 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57807 00:04:57.037 13:39:10 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:57.037 Waiting for target to run... 00:04:57.037 13:39:10 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57807 /var/tmp/spdk_tgt.sock 00:04:57.037 13:39:10 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 57807 ']' 00:04:57.037 13:39:10 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:57.037 13:39:10 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:57.037 13:39:10 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:57.037 13:39:10 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:57.037 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:57.037 13:39:10 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:57.037 13:39:10 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:57.037 [2024-10-15 13:39:10.676903] Starting SPDK v25.01-pre git sha1 5a8c76d99 / DPDK 24.03.0 initialization... 00:04:57.037 [2024-10-15 13:39:10.677023] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57807 ] 00:04:57.298 [2024-10-15 13:39:10.988015] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:57.559 [2024-10-15 13:39:11.085366] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:58.132 00:04:58.132 INFO: shutting down applications... 00:04:58.132 13:39:11 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:58.132 13:39:11 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:04:58.132 13:39:11 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:58.132 13:39:11 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
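The waitforlisten step above blocks until the freshly launched process answers on its UNIX-domain RPC socket. A rough re-implementation of the idea (an approximation, not the exact helper from autotest_common.sh, which carries more retry and error handling; max_retries=100 matches the trace):

waitforlisten_sketch() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk_tgt.sock} max_retries=100
    for ((i = 0; i < max_retries; i++)); do
        # Bail out early if the process died during startup.
        kill -0 "$pid" 2> /dev/null || return 1
        # Ready once the socket exists and a trivial RPC succeeds.
        if [[ -S $rpc_addr ]] &&
            /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" spdk_get_version &> /dev/null; then
            return 0
        fi
        sleep 0.1
    done
    return 1
}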
00:04:58.132 13:39:11 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:58.132 13:39:11 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:58.132 13:39:11 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:58.132 13:39:11 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57807 ]] 00:04:58.132 13:39:11 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57807 00:04:58.132 13:39:11 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:58.132 13:39:11 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:58.132 13:39:11 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57807 00:04:58.132 13:39:11 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:58.394 13:39:12 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:58.394 13:39:12 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:58.394 13:39:12 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57807 00:04:58.394 13:39:12 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:58.966 13:39:12 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:58.966 13:39:12 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:58.966 13:39:12 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57807 00:04:58.966 13:39:12 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:59.537 13:39:13 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:59.537 13:39:13 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:59.537 13:39:13 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57807 00:04:59.537 13:39:13 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:00.144 13:39:13 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:00.144 13:39:13 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:00.144 13:39:13 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57807 00:05:00.144 13:39:13 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:00.144 SPDK target shutdown done 00:05:00.144 13:39:13 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:00.144 13:39:13 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:00.144 13:39:13 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:00.144 Success 00:05:00.145 13:39:13 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:00.145 00:05:00.145 real 0m3.177s 00:05:00.145 user 0m2.779s 00:05:00.145 sys 0m0.422s 00:05:00.145 13:39:13 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:00.145 ************************************ 00:05:00.145 13:39:13 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:00.145 END TEST json_config_extra_key 00:05:00.145 ************************************ 00:05:00.145 13:39:13 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:00.145 13:39:13 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:00.145 13:39:13 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:00.145 13:39:13 -- common/autotest_common.sh@10 -- # set +x 00:05:00.145 
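The shutdown traced above is a plain poll loop: send SIGINT, then probe with kill -0 until the PID disappears, half a second at a time, giving up after 30 tries. The same pattern in isolation:

shutdown_app_sketch() {
    local pid=$1
    kill -SIGINT "$pid"
    for ((i = 0; i < 30; i++)); do
        # kill -0 delivers no signal; it only tests whether the PID still exists.
        if ! kill -0 "$pid" 2> /dev/null; then
            echo 'SPDK target shutdown done'
            return 0
        fi
        sleep 0.5
    done
    echo "app $pid did not exit in time" >&2
    return 1
}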
************************************ 00:05:00.145 START TEST alias_rpc 00:05:00.145 ************************************ 00:05:00.145 13:39:13 alias_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:00.145 * Looking for test storage... 00:05:00.145 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:05:00.145 13:39:13 alias_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:00.145 13:39:13 alias_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:05:00.145 13:39:13 alias_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:00.145 13:39:13 alias_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:00.145 13:39:13 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:00.145 13:39:13 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:00.145 13:39:13 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:00.145 13:39:13 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:00.145 13:39:13 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:00.145 13:39:13 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:00.145 13:39:13 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:00.145 13:39:13 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:00.145 13:39:13 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:00.145 13:39:13 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:00.145 13:39:13 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:00.145 13:39:13 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:00.145 13:39:13 alias_rpc -- scripts/common.sh@345 -- # : 1 00:05:00.145 13:39:13 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:00.145 13:39:13 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:00.145 13:39:13 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:00.145 13:39:13 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:00.145 13:39:13 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:00.145 13:39:13 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:00.145 13:39:13 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:00.145 13:39:13 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:00.145 13:39:13 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:00.145 13:39:13 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:00.145 13:39:13 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:00.145 13:39:13 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:00.145 13:39:13 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:00.145 13:39:13 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:00.145 13:39:13 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:00.145 13:39:13 alias_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:00.145 13:39:13 alias_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:00.145 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.145 --rc genhtml_branch_coverage=1 00:05:00.145 --rc genhtml_function_coverage=1 00:05:00.145 --rc genhtml_legend=1 00:05:00.145 --rc geninfo_all_blocks=1 00:05:00.145 --rc geninfo_unexecuted_blocks=1 00:05:00.145 00:05:00.145 ' 00:05:00.145 13:39:13 alias_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:00.145 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.145 --rc genhtml_branch_coverage=1 00:05:00.145 --rc genhtml_function_coverage=1 00:05:00.145 --rc genhtml_legend=1 00:05:00.145 --rc geninfo_all_blocks=1 00:05:00.145 --rc geninfo_unexecuted_blocks=1 00:05:00.145 00:05:00.145 ' 00:05:00.145 13:39:13 alias_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:00.145 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.145 --rc genhtml_branch_coverage=1 00:05:00.145 --rc genhtml_function_coverage=1 00:05:00.145 --rc genhtml_legend=1 00:05:00.145 --rc geninfo_all_blocks=1 00:05:00.145 --rc geninfo_unexecuted_blocks=1 00:05:00.145 00:05:00.145 ' 00:05:00.145 13:39:13 alias_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:00.145 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.145 --rc genhtml_branch_coverage=1 00:05:00.145 --rc genhtml_function_coverage=1 00:05:00.145 --rc genhtml_legend=1 00:05:00.145 --rc geninfo_all_blocks=1 00:05:00.145 --rc geninfo_unexecuted_blocks=1 00:05:00.145 00:05:00.145 ' 00:05:00.145 13:39:13 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:00.145 13:39:13 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=57900 00:05:00.145 13:39:13 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 57900 00:05:00.145 13:39:13 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:00.145 13:39:13 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 57900 ']' 00:05:00.145 13:39:13 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:00.145 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
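The alias_rpc run below drives the target with rpc.py load_config -i, replaying a JSON configuration over the RPC socket; given the test's focus, -i is taken here to enable the deprecated alias method names (that reading of the flag, and the sample config, are assumptions, not the test's actual input). A sketch of the round trip against an already-running target on /var/tmp/spdk.sock:

# Feed a configuration to the target; the JSON body below is illustrative.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i << 'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        { "method": "bdev_malloc_create",
          "params": { "num_blocks": 256, "block_size": 512 } }
      ]
    }
  ]
}
EOF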
00:05:00.145 13:39:13 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:00.145 13:39:13 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:00.145 13:39:13 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:00.145 13:39:13 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:00.406 [2024-10-15 13:39:13.945066] Starting SPDK v25.01-pre git sha1 5a8c76d99 / DPDK 24.03.0 initialization... 00:05:00.406 [2024-10-15 13:39:13.945256] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57900 ] 00:05:00.406 [2024-10-15 13:39:14.112721] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:00.667 [2024-10-15 13:39:14.236907] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:01.240 13:39:14 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:01.240 13:39:14 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:01.240 13:39:14 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:05:01.498 13:39:15 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 57900 00:05:01.498 13:39:15 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 57900 ']' 00:05:01.498 13:39:15 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 57900 00:05:01.498 13:39:15 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:05:01.498 13:39:15 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:01.498 13:39:15 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57900 00:05:01.498 13:39:15 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:01.498 13:39:15 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:01.498 13:39:15 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57900' 00:05:01.498 killing process with pid 57900 00:05:01.498 13:39:15 alias_rpc -- common/autotest_common.sh@969 -- # kill 57900 00:05:01.498 13:39:15 alias_rpc -- common/autotest_common.sh@974 -- # wait 57900 00:05:02.873 00:05:02.873 real 0m2.782s 00:05:02.873 user 0m2.750s 00:05:02.873 sys 0m0.516s 00:05:02.873 13:39:16 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:02.873 ************************************ 00:05:02.873 END TEST alias_rpc 00:05:02.873 ************************************ 00:05:02.873 13:39:16 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:02.873 13:39:16 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:02.873 13:39:16 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:02.873 13:39:16 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:02.873 13:39:16 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:02.873 13:39:16 -- common/autotest_common.sh@10 -- # set +x 00:05:02.873 ************************************ 00:05:02.873 START TEST spdkcli_tcp 00:05:02.873 ************************************ 00:05:02.873 13:39:16 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:02.873 * Looking for test storage... 
00:05:02.873 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:05:02.873 13:39:16 spdkcli_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:02.873 13:39:16 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:05:02.873 13:39:16 spdkcli_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:02.873 13:39:16 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:02.873 13:39:16 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:02.873 13:39:16 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:02.873 13:39:16 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:02.873 13:39:16 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:02.873 13:39:16 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:02.873 13:39:16 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:02.873 13:39:16 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:02.873 13:39:16 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:02.873 13:39:16 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:02.873 13:39:16 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:02.873 13:39:16 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:02.873 13:39:16 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:02.873 13:39:16 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:02.873 13:39:16 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:02.873 13:39:16 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:02.873 13:39:16 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:02.873 13:39:16 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:02.873 13:39:16 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:02.873 13:39:16 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:03.131 13:39:16 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:03.131 13:39:16 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:03.131 13:39:16 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:03.131 13:39:16 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:03.131 13:39:16 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:03.131 13:39:16 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:03.131 13:39:16 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:03.131 13:39:16 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:03.131 13:39:16 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:03.131 13:39:16 spdkcli_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:03.131 13:39:16 spdkcli_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:03.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.131 --rc genhtml_branch_coverage=1 00:05:03.131 --rc genhtml_function_coverage=1 00:05:03.131 --rc genhtml_legend=1 00:05:03.131 --rc geninfo_all_blocks=1 00:05:03.131 --rc geninfo_unexecuted_blocks=1 00:05:03.131 00:05:03.131 ' 00:05:03.131 13:39:16 spdkcli_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:03.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.131 --rc genhtml_branch_coverage=1 00:05:03.131 --rc genhtml_function_coverage=1 00:05:03.131 --rc genhtml_legend=1 00:05:03.131 --rc geninfo_all_blocks=1 00:05:03.131 --rc geninfo_unexecuted_blocks=1 00:05:03.131 
00:05:03.131 ' 00:05:03.131 13:39:16 spdkcli_tcp -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:03.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.131 --rc genhtml_branch_coverage=1 00:05:03.131 --rc genhtml_function_coverage=1 00:05:03.131 --rc genhtml_legend=1 00:05:03.131 --rc geninfo_all_blocks=1 00:05:03.131 --rc geninfo_unexecuted_blocks=1 00:05:03.131 00:05:03.131 ' 00:05:03.131 13:39:16 spdkcli_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:03.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.131 --rc genhtml_branch_coverage=1 00:05:03.131 --rc genhtml_function_coverage=1 00:05:03.131 --rc genhtml_legend=1 00:05:03.131 --rc geninfo_all_blocks=1 00:05:03.131 --rc geninfo_unexecuted_blocks=1 00:05:03.131 00:05:03.131 ' 00:05:03.131 13:39:16 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:05:03.131 13:39:16 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:05:03.131 13:39:16 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:05:03.131 13:39:16 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:03.131 13:39:16 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:03.131 13:39:16 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:03.131 13:39:16 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:03.131 13:39:16 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:03.131 13:39:16 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:03.131 13:39:16 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=57996 00:05:03.131 13:39:16 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 57996 00:05:03.131 13:39:16 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 57996 ']' 00:05:03.131 13:39:16 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:03.131 13:39:16 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:03.131 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:03.131 13:39:16 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:03.131 13:39:16 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:03.131 13:39:16 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:03.131 13:39:16 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:03.131 [2024-10-15 13:39:16.742186] Starting SPDK v25.01-pre git sha1 5a8c76d99 / DPDK 24.03.0 initialization... 
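The tcp.sh run below checks RPC-over-TCP without the target ever listening on TCP itself: socat forwards 127.0.0.1:9998 to the target's UNIX-domain socket, and rpc.py connects to the TCP side with client-side retries. The bridge in isolation, with the addresses and flags taken from the trace:

IP_ADDRESS=127.0.0.1
PORT=9998

# Forward a local TCP listener onto the target's UNIX-domain RPC socket.
socat TCP-LISTEN:$PORT UNIX-CONNECT:/var/tmp/spdk.sock &
socat_pid=$!

# Call through the bridge: -r retries up to 100 times, -t uses a 2 s timeout.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 \
    -s "$IP_ADDRESS" -p "$PORT" rpc_get_methods

kill "$socat_pid"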
00:05:03.131 [2024-10-15 13:39:16.742318] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57996 ] 00:05:03.131 [2024-10-15 13:39:16.890606] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:03.389 [2024-10-15 13:39:16.969584] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:03.389 [2024-10-15 13:39:16.969689] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:03.954 13:39:17 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:03.954 13:39:17 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:05:03.954 13:39:17 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=58012 00:05:03.954 13:39:17 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:03.954 13:39:17 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:04.212 [ 00:05:04.212 "bdev_malloc_delete", 00:05:04.212 "bdev_malloc_create", 00:05:04.212 "bdev_null_resize", 00:05:04.212 "bdev_null_delete", 00:05:04.212 "bdev_null_create", 00:05:04.212 "bdev_nvme_cuse_unregister", 00:05:04.212 "bdev_nvme_cuse_register", 00:05:04.212 "bdev_opal_new_user", 00:05:04.212 "bdev_opal_set_lock_state", 00:05:04.212 "bdev_opal_delete", 00:05:04.212 "bdev_opal_get_info", 00:05:04.212 "bdev_opal_create", 00:05:04.212 "bdev_nvme_opal_revert", 00:05:04.212 "bdev_nvme_opal_init", 00:05:04.212 "bdev_nvme_send_cmd", 00:05:04.212 "bdev_nvme_set_keys", 00:05:04.212 "bdev_nvme_get_path_iostat", 00:05:04.212 "bdev_nvme_get_mdns_discovery_info", 00:05:04.212 "bdev_nvme_stop_mdns_discovery", 00:05:04.212 "bdev_nvme_start_mdns_discovery", 00:05:04.212 "bdev_nvme_set_multipath_policy", 00:05:04.212 "bdev_nvme_set_preferred_path", 00:05:04.212 "bdev_nvme_get_io_paths", 00:05:04.212 "bdev_nvme_remove_error_injection", 00:05:04.212 "bdev_nvme_add_error_injection", 00:05:04.212 "bdev_nvme_get_discovery_info", 00:05:04.212 "bdev_nvme_stop_discovery", 00:05:04.212 "bdev_nvme_start_discovery", 00:05:04.212 "bdev_nvme_get_controller_health_info", 00:05:04.212 "bdev_nvme_disable_controller", 00:05:04.212 "bdev_nvme_enable_controller", 00:05:04.212 "bdev_nvme_reset_controller", 00:05:04.212 "bdev_nvme_get_transport_statistics", 00:05:04.212 "bdev_nvme_apply_firmware", 00:05:04.212 "bdev_nvme_detach_controller", 00:05:04.212 "bdev_nvme_get_controllers", 00:05:04.212 "bdev_nvme_attach_controller", 00:05:04.212 "bdev_nvme_set_hotplug", 00:05:04.212 "bdev_nvme_set_options", 00:05:04.212 "bdev_passthru_delete", 00:05:04.212 "bdev_passthru_create", 00:05:04.212 "bdev_lvol_set_parent_bdev", 00:05:04.212 "bdev_lvol_set_parent", 00:05:04.212 "bdev_lvol_check_shallow_copy", 00:05:04.212 "bdev_lvol_start_shallow_copy", 00:05:04.212 "bdev_lvol_grow_lvstore", 00:05:04.212 "bdev_lvol_get_lvols", 00:05:04.212 "bdev_lvol_get_lvstores", 00:05:04.212 "bdev_lvol_delete", 00:05:04.212 "bdev_lvol_set_read_only", 00:05:04.212 "bdev_lvol_resize", 00:05:04.212 "bdev_lvol_decouple_parent", 00:05:04.212 "bdev_lvol_inflate", 00:05:04.212 "bdev_lvol_rename", 00:05:04.212 "bdev_lvol_clone_bdev", 00:05:04.212 "bdev_lvol_clone", 00:05:04.212 "bdev_lvol_snapshot", 00:05:04.212 "bdev_lvol_create", 00:05:04.212 "bdev_lvol_delete_lvstore", 00:05:04.212 "bdev_lvol_rename_lvstore", 00:05:04.212 
"bdev_lvol_create_lvstore", 00:05:04.212 "bdev_raid_set_options", 00:05:04.212 "bdev_raid_remove_base_bdev", 00:05:04.212 "bdev_raid_add_base_bdev", 00:05:04.212 "bdev_raid_delete", 00:05:04.213 "bdev_raid_create", 00:05:04.213 "bdev_raid_get_bdevs", 00:05:04.213 "bdev_error_inject_error", 00:05:04.213 "bdev_error_delete", 00:05:04.213 "bdev_error_create", 00:05:04.213 "bdev_split_delete", 00:05:04.213 "bdev_split_create", 00:05:04.213 "bdev_delay_delete", 00:05:04.213 "bdev_delay_create", 00:05:04.213 "bdev_delay_update_latency", 00:05:04.213 "bdev_zone_block_delete", 00:05:04.213 "bdev_zone_block_create", 00:05:04.213 "blobfs_create", 00:05:04.213 "blobfs_detect", 00:05:04.213 "blobfs_set_cache_size", 00:05:04.213 "bdev_xnvme_delete", 00:05:04.213 "bdev_xnvme_create", 00:05:04.213 "bdev_aio_delete", 00:05:04.213 "bdev_aio_rescan", 00:05:04.213 "bdev_aio_create", 00:05:04.213 "bdev_ftl_set_property", 00:05:04.213 "bdev_ftl_get_properties", 00:05:04.213 "bdev_ftl_get_stats", 00:05:04.213 "bdev_ftl_unmap", 00:05:04.213 "bdev_ftl_unload", 00:05:04.213 "bdev_ftl_delete", 00:05:04.213 "bdev_ftl_load", 00:05:04.213 "bdev_ftl_create", 00:05:04.213 "bdev_virtio_attach_controller", 00:05:04.213 "bdev_virtio_scsi_get_devices", 00:05:04.213 "bdev_virtio_detach_controller", 00:05:04.213 "bdev_virtio_blk_set_hotplug", 00:05:04.213 "bdev_iscsi_delete", 00:05:04.213 "bdev_iscsi_create", 00:05:04.213 "bdev_iscsi_set_options", 00:05:04.213 "accel_error_inject_error", 00:05:04.213 "ioat_scan_accel_module", 00:05:04.213 "dsa_scan_accel_module", 00:05:04.213 "iaa_scan_accel_module", 00:05:04.213 "keyring_file_remove_key", 00:05:04.213 "keyring_file_add_key", 00:05:04.213 "keyring_linux_set_options", 00:05:04.213 "fsdev_aio_delete", 00:05:04.213 "fsdev_aio_create", 00:05:04.213 "iscsi_get_histogram", 00:05:04.213 "iscsi_enable_histogram", 00:05:04.213 "iscsi_set_options", 00:05:04.213 "iscsi_get_auth_groups", 00:05:04.213 "iscsi_auth_group_remove_secret", 00:05:04.213 "iscsi_auth_group_add_secret", 00:05:04.213 "iscsi_delete_auth_group", 00:05:04.213 "iscsi_create_auth_group", 00:05:04.213 "iscsi_set_discovery_auth", 00:05:04.213 "iscsi_get_options", 00:05:04.213 "iscsi_target_node_request_logout", 00:05:04.213 "iscsi_target_node_set_redirect", 00:05:04.213 "iscsi_target_node_set_auth", 00:05:04.213 "iscsi_target_node_add_lun", 00:05:04.213 "iscsi_get_stats", 00:05:04.213 "iscsi_get_connections", 00:05:04.213 "iscsi_portal_group_set_auth", 00:05:04.213 "iscsi_start_portal_group", 00:05:04.213 "iscsi_delete_portal_group", 00:05:04.213 "iscsi_create_portal_group", 00:05:04.213 "iscsi_get_portal_groups", 00:05:04.213 "iscsi_delete_target_node", 00:05:04.213 "iscsi_target_node_remove_pg_ig_maps", 00:05:04.213 "iscsi_target_node_add_pg_ig_maps", 00:05:04.213 "iscsi_create_target_node", 00:05:04.213 "iscsi_get_target_nodes", 00:05:04.213 "iscsi_delete_initiator_group", 00:05:04.213 "iscsi_initiator_group_remove_initiators", 00:05:04.213 "iscsi_initiator_group_add_initiators", 00:05:04.213 "iscsi_create_initiator_group", 00:05:04.213 "iscsi_get_initiator_groups", 00:05:04.213 "nvmf_set_crdt", 00:05:04.213 "nvmf_set_config", 00:05:04.213 "nvmf_set_max_subsystems", 00:05:04.213 "nvmf_stop_mdns_prr", 00:05:04.213 "nvmf_publish_mdns_prr", 00:05:04.213 "nvmf_subsystem_get_listeners", 00:05:04.213 "nvmf_subsystem_get_qpairs", 00:05:04.213 "nvmf_subsystem_get_controllers", 00:05:04.213 "nvmf_get_stats", 00:05:04.213 "nvmf_get_transports", 00:05:04.213 "nvmf_create_transport", 00:05:04.213 "nvmf_get_targets", 00:05:04.213 
"nvmf_delete_target", 00:05:04.213 "nvmf_create_target", 00:05:04.213 "nvmf_subsystem_allow_any_host", 00:05:04.213 "nvmf_subsystem_set_keys", 00:05:04.213 "nvmf_subsystem_remove_host", 00:05:04.213 "nvmf_subsystem_add_host", 00:05:04.213 "nvmf_ns_remove_host", 00:05:04.213 "nvmf_ns_add_host", 00:05:04.213 "nvmf_subsystem_remove_ns", 00:05:04.213 "nvmf_subsystem_set_ns_ana_group", 00:05:04.213 "nvmf_subsystem_add_ns", 00:05:04.213 "nvmf_subsystem_listener_set_ana_state", 00:05:04.213 "nvmf_discovery_get_referrals", 00:05:04.213 "nvmf_discovery_remove_referral", 00:05:04.213 "nvmf_discovery_add_referral", 00:05:04.213 "nvmf_subsystem_remove_listener", 00:05:04.213 "nvmf_subsystem_add_listener", 00:05:04.213 "nvmf_delete_subsystem", 00:05:04.213 "nvmf_create_subsystem", 00:05:04.213 "nvmf_get_subsystems", 00:05:04.213 "env_dpdk_get_mem_stats", 00:05:04.213 "nbd_get_disks", 00:05:04.213 "nbd_stop_disk", 00:05:04.213 "nbd_start_disk", 00:05:04.213 "ublk_recover_disk", 00:05:04.213 "ublk_get_disks", 00:05:04.213 "ublk_stop_disk", 00:05:04.213 "ublk_start_disk", 00:05:04.213 "ublk_destroy_target", 00:05:04.213 "ublk_create_target", 00:05:04.213 "virtio_blk_create_transport", 00:05:04.213 "virtio_blk_get_transports", 00:05:04.213 "vhost_controller_set_coalescing", 00:05:04.213 "vhost_get_controllers", 00:05:04.213 "vhost_delete_controller", 00:05:04.213 "vhost_create_blk_controller", 00:05:04.213 "vhost_scsi_controller_remove_target", 00:05:04.213 "vhost_scsi_controller_add_target", 00:05:04.213 "vhost_start_scsi_controller", 00:05:04.213 "vhost_create_scsi_controller", 00:05:04.213 "thread_set_cpumask", 00:05:04.213 "scheduler_set_options", 00:05:04.213 "framework_get_governor", 00:05:04.213 "framework_get_scheduler", 00:05:04.213 "framework_set_scheduler", 00:05:04.213 "framework_get_reactors", 00:05:04.213 "thread_get_io_channels", 00:05:04.213 "thread_get_pollers", 00:05:04.213 "thread_get_stats", 00:05:04.213 "framework_monitor_context_switch", 00:05:04.213 "spdk_kill_instance", 00:05:04.213 "log_enable_timestamps", 00:05:04.213 "log_get_flags", 00:05:04.213 "log_clear_flag", 00:05:04.213 "log_set_flag", 00:05:04.213 "log_get_level", 00:05:04.213 "log_set_level", 00:05:04.213 "log_get_print_level", 00:05:04.213 "log_set_print_level", 00:05:04.213 "framework_enable_cpumask_locks", 00:05:04.213 "framework_disable_cpumask_locks", 00:05:04.213 "framework_wait_init", 00:05:04.213 "framework_start_init", 00:05:04.213 "scsi_get_devices", 00:05:04.213 "bdev_get_histogram", 00:05:04.213 "bdev_enable_histogram", 00:05:04.213 "bdev_set_qos_limit", 00:05:04.213 "bdev_set_qd_sampling_period", 00:05:04.213 "bdev_get_bdevs", 00:05:04.213 "bdev_reset_iostat", 00:05:04.213 "bdev_get_iostat", 00:05:04.213 "bdev_examine", 00:05:04.213 "bdev_wait_for_examine", 00:05:04.213 "bdev_set_options", 00:05:04.213 "accel_get_stats", 00:05:04.213 "accel_set_options", 00:05:04.213 "accel_set_driver", 00:05:04.213 "accel_crypto_key_destroy", 00:05:04.213 "accel_crypto_keys_get", 00:05:04.213 "accel_crypto_key_create", 00:05:04.213 "accel_assign_opc", 00:05:04.213 "accel_get_module_info", 00:05:04.213 "accel_get_opc_assignments", 00:05:04.213 "vmd_rescan", 00:05:04.213 "vmd_remove_device", 00:05:04.213 "vmd_enable", 00:05:04.213 "sock_get_default_impl", 00:05:04.213 "sock_set_default_impl", 00:05:04.213 "sock_impl_set_options", 00:05:04.213 "sock_impl_get_options", 00:05:04.213 "iobuf_get_stats", 00:05:04.213 "iobuf_set_options", 00:05:04.213 "keyring_get_keys", 00:05:04.213 "framework_get_pci_devices", 00:05:04.213 
"framework_get_config", 00:05:04.213 "framework_get_subsystems", 00:05:04.213 "fsdev_set_opts", 00:05:04.213 "fsdev_get_opts", 00:05:04.213 "trace_get_info", 00:05:04.213 "trace_get_tpoint_group_mask", 00:05:04.213 "trace_disable_tpoint_group", 00:05:04.213 "trace_enable_tpoint_group", 00:05:04.213 "trace_clear_tpoint_mask", 00:05:04.213 "trace_set_tpoint_mask", 00:05:04.213 "notify_get_notifications", 00:05:04.213 "notify_get_types", 00:05:04.213 "spdk_get_version", 00:05:04.213 "rpc_get_methods" 00:05:04.213 ] 00:05:04.213 13:39:17 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:04.213 13:39:17 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:04.213 13:39:17 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:04.213 13:39:17 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:04.213 13:39:17 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 57996 00:05:04.213 13:39:17 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 57996 ']' 00:05:04.213 13:39:17 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 57996 00:05:04.213 13:39:17 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:05:04.213 13:39:17 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:04.213 13:39:17 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57996 00:05:04.213 13:39:17 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:04.213 13:39:17 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:04.213 13:39:17 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57996' 00:05:04.213 killing process with pid 57996 00:05:04.213 13:39:17 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 57996 00:05:04.213 13:39:17 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 57996 00:05:05.586 00:05:05.586 real 0m2.451s 00:05:05.586 user 0m4.411s 00:05:05.586 sys 0m0.401s 00:05:05.586 ************************************ 00:05:05.586 END TEST spdkcli_tcp 00:05:05.586 ************************************ 00:05:05.587 13:39:18 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:05.587 13:39:18 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:05.587 13:39:19 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:05.587 13:39:19 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:05.587 13:39:19 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:05.587 13:39:19 -- common/autotest_common.sh@10 -- # set +x 00:05:05.587 ************************************ 00:05:05.587 START TEST dpdk_mem_utility 00:05:05.587 ************************************ 00:05:05.587 13:39:19 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:05.587 * Looking for test storage... 
00:05:05.587 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:05.587 13:39:19 dpdk_mem_utility -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:05.587 13:39:19 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lcov --version 00:05:05.587 13:39:19 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:05.587 13:39:19 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:05.587 13:39:19 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:05.587 13:39:19 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:05.587 13:39:19 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:05.587 13:39:19 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:05.587 13:39:19 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:05.587 13:39:19 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:05.587 13:39:19 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:05.587 13:39:19 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:05.587 13:39:19 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:05.587 13:39:19 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:05.587 13:39:19 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:05.587 13:39:19 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:05.587 13:39:19 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:05.587 13:39:19 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:05.587 13:39:19 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:05.587 13:39:19 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:05.587 13:39:19 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:05.587 13:39:19 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:05.587 13:39:19 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:05.587 13:39:19 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:05.587 13:39:19 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:05.587 13:39:19 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:05.587 13:39:19 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:05.587 13:39:19 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:05.587 13:39:19 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:05.587 13:39:19 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:05.587 13:39:19 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:05.587 13:39:19 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:05.587 13:39:19 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:05.587 13:39:19 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:05.587 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.587 --rc genhtml_branch_coverage=1 00:05:05.587 --rc genhtml_function_coverage=1 00:05:05.587 --rc genhtml_legend=1 00:05:05.587 --rc geninfo_all_blocks=1 00:05:05.587 --rc geninfo_unexecuted_blocks=1 00:05:05.587 00:05:05.587 ' 00:05:05.587 13:39:19 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:05.587 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.587 --rc 
genhtml_branch_coverage=1 00:05:05.587 --rc genhtml_function_coverage=1 00:05:05.587 --rc genhtml_legend=1 00:05:05.587 --rc geninfo_all_blocks=1 00:05:05.587 --rc geninfo_unexecuted_blocks=1 00:05:05.587 00:05:05.587 ' 00:05:05.587 13:39:19 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:05.587 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.587 --rc genhtml_branch_coverage=1 00:05:05.587 --rc genhtml_function_coverage=1 00:05:05.587 --rc genhtml_legend=1 00:05:05.587 --rc geninfo_all_blocks=1 00:05:05.587 --rc geninfo_unexecuted_blocks=1 00:05:05.587 00:05:05.587 ' 00:05:05.587 13:39:19 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:05.587 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.587 --rc genhtml_branch_coverage=1 00:05:05.587 --rc genhtml_function_coverage=1 00:05:05.587 --rc genhtml_legend=1 00:05:05.587 --rc geninfo_all_blocks=1 00:05:05.587 --rc geninfo_unexecuted_blocks=1 00:05:05.587 00:05:05.587 ' 00:05:05.587 13:39:19 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:05.587 13:39:19 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=58102 00:05:05.587 13:39:19 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 58102 00:05:05.587 13:39:19 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 58102 ']' 00:05:05.587 13:39:19 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:05.587 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:05.587 13:39:19 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:05.587 13:39:19 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:05.587 13:39:19 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:05.587 13:39:19 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:05.587 13:39:19 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:05.587 [2024-10-15 13:39:19.229567] Starting SPDK v25.01-pre git sha1 5a8c76d99 / DPDK 24.03.0 initialization... 
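The dump that follows is produced in two steps, both visible in the trace: the env_dpdk_get_mem_stats RPC makes the target write a raw DPDK memory dump (its reply names the file, /tmp/spdk_mem_dump.txt), and scripts/dpdk_mem_info.py then summarizes that dump, with -m 0 expanding heap 0 into the per-element listing. Reproduced in isolation:

# Ask the running target to dump its DPDK memory state to a file.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats
# reply: { "filename": "/tmp/spdk_mem_dump.txt" }

# Summarize heaps, mempools and memzones from the dump (read from its default location, as in the trace).
/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py

# Expand heap 0 into the busy/free element listing shown below.
/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0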
00:05:05.587 [2024-10-15 13:39:19.229663] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58102 ] 00:05:05.587 [2024-10-15 13:39:19.371360] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:05.844 [2024-10-15 13:39:19.447678] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:06.414 13:39:20 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:06.414 13:39:20 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:05:06.414 13:39:20 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:06.414 13:39:20 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:06.414 13:39:20 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:06.414 13:39:20 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:06.414 { 00:05:06.414 "filename": "/tmp/spdk_mem_dump.txt" 00:05:06.414 } 00:05:06.414 13:39:20 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:06.414 13:39:20 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:06.414 DPDK memory size 816.000000 MiB in 1 heap(s) 00:05:06.414 1 heaps totaling size 816.000000 MiB 00:05:06.414 size: 816.000000 MiB heap id: 0 00:05:06.414 end heaps---------- 00:05:06.414 9 mempools totaling size 595.772034 MiB 00:05:06.414 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:06.414 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:06.414 size: 92.545471 MiB name: bdev_io_58102 00:05:06.414 size: 50.003479 MiB name: msgpool_58102 00:05:06.414 size: 36.509338 MiB name: fsdev_io_58102 00:05:06.414 size: 21.763794 MiB name: PDU_Pool 00:05:06.414 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:06.414 size: 4.133484 MiB name: evtpool_58102 00:05:06.414 size: 0.026123 MiB name: Session_Pool 00:05:06.414 end mempools------- 00:05:06.414 6 memzones totaling size 4.142822 MiB 00:05:06.414 size: 1.000366 MiB name: RG_ring_0_58102 00:05:06.414 size: 1.000366 MiB name: RG_ring_1_58102 00:05:06.414 size: 1.000366 MiB name: RG_ring_4_58102 00:05:06.414 size: 1.000366 MiB name: RG_ring_5_58102 00:05:06.414 size: 0.125366 MiB name: RG_ring_2_58102 00:05:06.414 size: 0.015991 MiB name: RG_ring_3_58102 00:05:06.414 end memzones------- 00:05:06.414 13:39:20 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:06.414 heap id: 0 total size: 816.000000 MiB number of busy elements: 320 number of free elements: 18 00:05:06.414 list of free elements. 
size: 16.790161 MiB 00:05:06.414 element at address: 0x200006400000 with size: 1.995972 MiB 00:05:06.414 element at address: 0x20000a600000 with size: 1.995972 MiB 00:05:06.414 element at address: 0x200003e00000 with size: 1.991028 MiB 00:05:06.414 element at address: 0x200018d00040 with size: 0.999939 MiB 00:05:06.414 element at address: 0x200019100040 with size: 0.999939 MiB 00:05:06.414 element at address: 0x200019200000 with size: 0.999084 MiB 00:05:06.414 element at address: 0x200031e00000 with size: 0.994324 MiB 00:05:06.414 element at address: 0x200000400000 with size: 0.992004 MiB 00:05:06.414 element at address: 0x200018a00000 with size: 0.959656 MiB 00:05:06.414 element at address: 0x200019500040 with size: 0.936401 MiB 00:05:06.414 element at address: 0x200000200000 with size: 0.716980 MiB 00:05:06.414 element at address: 0x20001ac00000 with size: 0.559509 MiB 00:05:06.414 element at address: 0x200000c00000 with size: 0.490173 MiB 00:05:06.414 element at address: 0x200018e00000 with size: 0.487976 MiB 00:05:06.414 element at address: 0x200019600000 with size: 0.485413 MiB 00:05:06.414 element at address: 0x200012c00000 with size: 0.443237 MiB 00:05:06.414 element at address: 0x200028000000 with size: 0.391663 MiB 00:05:06.414 element at address: 0x200000800000 with size: 0.350891 MiB 00:05:06.414 list of standard malloc elements. size: 199.288940 MiB 00:05:06.414 element at address: 0x20000a7fef80 with size: 132.000183 MiB 00:05:06.414 element at address: 0x2000065fef80 with size: 64.000183 MiB 00:05:06.414 element at address: 0x200018bfff80 with size: 1.000183 MiB 00:05:06.414 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:05:06.414 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:05:06.414 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:05:06.414 element at address: 0x2000195eff40 with size: 0.062683 MiB 00:05:06.414 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:05:06.414 element at address: 0x20000a5ff040 with size: 0.000427 MiB 00:05:06.414 element at address: 0x2000195efdc0 with size: 0.000366 MiB 00:05:06.414 element at address: 0x200012bff040 with size: 0.000305 MiB 00:05:06.414 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:05:06.414 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:05:06.414 element at address: 0x2000004fdf40 with size: 0.000244 MiB 00:05:06.414 element at address: 0x2000004fe040 with size: 0.000244 MiB 00:05:06.414 element at address: 0x2000004fe140 with size: 0.000244 MiB 00:05:06.414 element at address: 0x2000004fe240 with size: 0.000244 MiB 00:05:06.414 element at address: 0x2000004fe340 with size: 0.000244 MiB 00:05:06.414 element at address: 0x2000004fe440 with size: 0.000244 MiB 00:05:06.414 element at address: 0x2000004fe540 with size: 0.000244 MiB 00:05:06.414 element at address: 0x2000004fe640 with size: 0.000244 MiB 00:05:06.414 element at address: 0x2000004fe740 with size: 0.000244 MiB 00:05:06.414 element at address: 0x2000004fe840 with size: 0.000244 MiB 00:05:06.414 element at address: 0x2000004fe940 with size: 0.000244 MiB 00:05:06.414 element at address: 0x2000004fea40 with size: 0.000244 MiB 00:05:06.414 element at address: 0x2000004feb40 with size: 0.000244 MiB 00:05:06.414 element at address: 0x2000004fec40 with size: 0.000244 MiB 00:05:06.414 element at address: 0x2000004fed40 with size: 0.000244 MiB 00:05:06.414 element at address: 0x2000004fee40 with size: 0.000244 MiB 00:05:06.414 element at address: 0x2000004fef40 with size: 0.000244 MiB 
00:05:06.414 element at address: 0x2000004ff040 with size: 0.000244 MiB 00:05:06.414 [heap dump continues: several hundred further free elements of 0.000244 MiB each, elided here; they span the 0x2000004ff..., 0x20000087e-f..., 0x2000008ff..., 0x200000c7d-e..., 0x200000cf..., 0x20000a5ff..., 0x200012b-c..., 0x200018a-e..., 0x2000192-6..., 0x20001ac8f-0x20001ac953c0, and 0x200028064440-0x20002806fe80 address ranges] 00:05:06.416 list of memzone associated elements.
size: 599.920898 MiB 00:05:06.416 element at address: 0x20001ac954c0 with size: 211.416809 MiB 00:05:06.416 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:06.416 element at address: 0x20002806ff80 with size: 157.562622 MiB 00:05:06.416 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:06.416 element at address: 0x200012df4740 with size: 92.045105 MiB 00:05:06.416 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_58102_0 00:05:06.416 element at address: 0x200000dff340 with size: 48.003113 MiB 00:05:06.416 associated memzone info: size: 48.002930 MiB name: MP_msgpool_58102_0 00:05:06.416 element at address: 0x200003ffdb40 with size: 36.008972 MiB 00:05:06.416 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_58102_0 00:05:06.416 element at address: 0x2000197be900 with size: 20.255615 MiB 00:05:06.416 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:06.416 element at address: 0x200031ffeb00 with size: 18.005127 MiB 00:05:06.416 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:06.416 element at address: 0x2000004ffec0 with size: 3.000305 MiB 00:05:06.416 associated memzone info: size: 3.000122 MiB name: MP_evtpool_58102_0 00:05:06.416 element at address: 0x2000009ffdc0 with size: 2.000549 MiB 00:05:06.416 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_58102 00:05:06.416 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:05:06.416 associated memzone info: size: 1.007996 MiB name: MP_evtpool_58102 00:05:06.416 element at address: 0x200018efde00 with size: 1.008179 MiB 00:05:06.416 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:06.416 element at address: 0x2000196bc780 with size: 1.008179 MiB 00:05:06.416 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:06.416 element at address: 0x200018afde00 with size: 1.008179 MiB 00:05:06.416 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:06.416 element at address: 0x200012cf25c0 with size: 1.008179 MiB 00:05:06.416 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:06.416 element at address: 0x200000cff100 with size: 1.000549 MiB 00:05:06.416 associated memzone info: size: 1.000366 MiB name: RG_ring_0_58102 00:05:06.416 element at address: 0x2000008ffb80 with size: 1.000549 MiB 00:05:06.416 associated memzone info: size: 1.000366 MiB name: RG_ring_1_58102 00:05:06.416 element at address: 0x2000192ffd40 with size: 1.000549 MiB 00:05:06.416 associated memzone info: size: 1.000366 MiB name: RG_ring_4_58102 00:05:06.416 element at address: 0x200031efe8c0 with size: 1.000549 MiB 00:05:06.416 associated memzone info: size: 1.000366 MiB name: RG_ring_5_58102 00:05:06.416 element at address: 0x20000087f5c0 with size: 0.500549 MiB 00:05:06.416 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_58102 00:05:06.416 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 00:05:06.416 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_58102 00:05:06.416 element at address: 0x200018e7dac0 with size: 0.500549 MiB 00:05:06.416 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:06.416 element at address: 0x200012c72280 with size: 0.500549 MiB 00:05:06.416 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:06.416 element at address: 0x20001967c440 with size: 0.250549 MiB 00:05:06.416 associated memzone info: size: 0.250366 
MiB name: RG_MP_PDU_immediate_data_Pool 00:05:06.416 element at address: 0x2000002b78c0 with size: 0.125549 MiB 00:05:06.416 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_58102 00:05:06.416 element at address: 0x20000085df80 with size: 0.125549 MiB 00:05:06.416 associated memzone info: size: 0.125366 MiB name: RG_ring_2_58102 00:05:06.416 element at address: 0x200018af5ac0 with size: 0.031799 MiB 00:05:06.416 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:06.416 element at address: 0x200028064640 with size: 0.023804 MiB 00:05:06.416 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:06.416 element at address: 0x200000859d40 with size: 0.016174 MiB 00:05:06.416 associated memzone info: size: 0.015991 MiB name: RG_ring_3_58102 00:05:06.416 element at address: 0x20002806a7c0 with size: 0.002502 MiB 00:05:06.416 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:06.416 element at address: 0x2000004ffa40 with size: 0.000366 MiB 00:05:06.416 associated memzone info: size: 0.000183 MiB name: MP_msgpool_58102 00:05:06.416 element at address: 0x2000008ff900 with size: 0.000366 MiB 00:05:06.416 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_58102 00:05:06.416 element at address: 0x200012bffd80 with size: 0.000366 MiB 00:05:06.416 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_58102 00:05:06.416 element at address: 0x20002806b300 with size: 0.000366 MiB 00:05:06.416 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:06.416 13:39:20 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:06.416 13:39:20 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 58102 00:05:06.416 13:39:20 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 58102 ']' 00:05:06.416 13:39:20 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 58102 00:05:06.416 13:39:20 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:05:06.416 13:39:20 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:06.416 13:39:20 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58102 00:05:06.416 13:39:20 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:06.416 killing process with pid 58102 00:05:06.416 13:39:20 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:06.416 13:39:20 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58102' 00:05:06.416 13:39:20 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 58102 00:05:06.416 13:39:20 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 58102 00:05:07.792 00:05:07.792 real 0m2.338s 00:05:07.792 user 0m2.382s 00:05:07.792 sys 0m0.356s 00:05:07.792 13:39:21 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:07.792 ************************************ 00:05:07.792 13:39:21 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:07.792 END TEST dpdk_mem_utility 00:05:07.792 ************************************ 00:05:07.792 13:39:21 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:07.792 13:39:21 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:07.792 13:39:21 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:07.792 13:39:21 -- common/autotest_common.sh@10 -- # set +x 
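The dpdk_mem_utility teardown traced above follows a compact pattern: confirm a PID was recorded, probe liveness with signal 0, check the process name so a sudo wrapper is never killed, then kill and reap. A minimal standalone sketch of that pattern, assuming bash and a procps-style ps; the helper below is illustrative, not SPDK's exact autotest_common.sh implementation:

    #!/usr/bin/env bash
    # Kill-and-wait helper in the style of the killprocess trace above.
    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1                # no PID recorded, nothing to do
        kill -0 "$pid" 2>/dev/null || return 0   # signal 0 only probes liveness
        local name
        name=$(ps --no-headers -o comm= "$pid")  # same probe as in the trace
        [ "$name" = sudo ] && return 1           # refuse to kill a sudo wrapper
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null                  # reap it so the exit code is collected
    }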
00:05:07.792 ************************************ 00:05:07.792 START TEST event 00:05:07.792 ************************************ 00:05:07.792 13:39:21 event -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:07.792 * Looking for test storage... 00:05:07.792 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:07.792 13:39:21 event -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:07.792 13:39:21 event -- common/autotest_common.sh@1691 -- # lcov --version 00:05:07.792 13:39:21 event -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:07.792 13:39:21 event -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:07.792 13:39:21 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:07.792 13:39:21 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:07.792 13:39:21 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:07.792 13:39:21 event -- scripts/common.sh@336 -- # IFS=.-: 00:05:07.792 13:39:21 event -- scripts/common.sh@336 -- # read -ra ver1 00:05:07.792 13:39:21 event -- scripts/common.sh@337 -- # IFS=.-: 00:05:07.792 13:39:21 event -- scripts/common.sh@337 -- # read -ra ver2 00:05:07.792 13:39:21 event -- scripts/common.sh@338 -- # local 'op=<' 00:05:07.792 13:39:21 event -- scripts/common.sh@340 -- # ver1_l=2 00:05:07.792 13:39:21 event -- scripts/common.sh@341 -- # ver2_l=1 00:05:07.792 13:39:21 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:07.792 13:39:21 event -- scripts/common.sh@344 -- # case "$op" in 00:05:07.792 13:39:21 event -- scripts/common.sh@345 -- # : 1 00:05:07.792 13:39:21 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:07.792 13:39:21 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:07.792 13:39:21 event -- scripts/common.sh@365 -- # decimal 1 00:05:07.792 13:39:21 event -- scripts/common.sh@353 -- # local d=1 00:05:07.792 13:39:21 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:07.792 13:39:21 event -- scripts/common.sh@355 -- # echo 1 00:05:07.792 13:39:21 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:07.792 13:39:21 event -- scripts/common.sh@366 -- # decimal 2 00:05:07.792 13:39:21 event -- scripts/common.sh@353 -- # local d=2 00:05:07.792 13:39:21 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:07.792 13:39:21 event -- scripts/common.sh@355 -- # echo 2 00:05:07.792 13:39:21 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:07.792 13:39:21 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:07.792 13:39:21 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:07.792 13:39:21 event -- scripts/common.sh@368 -- # return 0 00:05:07.792 13:39:21 event -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:07.792 13:39:21 event -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:07.792 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.792 --rc genhtml_branch_coverage=1 00:05:07.792 --rc genhtml_function_coverage=1 00:05:07.792 --rc genhtml_legend=1 00:05:07.792 --rc geninfo_all_blocks=1 00:05:07.792 --rc geninfo_unexecuted_blocks=1 00:05:07.792 00:05:07.792 ' 00:05:07.792 13:39:21 event -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:07.792 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.792 --rc genhtml_branch_coverage=1 00:05:07.792 --rc genhtml_function_coverage=1 00:05:07.792 --rc genhtml_legend=1 00:05:07.792 --rc 
geninfo_all_blocks=1 00:05:07.792 --rc geninfo_unexecuted_blocks=1 00:05:07.792 00:05:07.792 ' 00:05:07.792 13:39:21 event -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:07.792 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.792 --rc genhtml_branch_coverage=1 00:05:07.792 --rc genhtml_function_coverage=1 00:05:07.792 --rc genhtml_legend=1 00:05:07.792 --rc geninfo_all_blocks=1 00:05:07.792 --rc geninfo_unexecuted_blocks=1 00:05:07.792 00:05:07.792 ' 00:05:07.792 13:39:21 event -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:07.792 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.792 --rc genhtml_branch_coverage=1 00:05:07.792 --rc genhtml_function_coverage=1 00:05:07.792 --rc genhtml_legend=1 00:05:07.792 --rc geninfo_all_blocks=1 00:05:07.792 --rc geninfo_unexecuted_blocks=1 00:05:07.792 00:05:07.792 ' 00:05:07.792 13:39:21 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:07.792 13:39:21 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:07.792 13:39:21 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:07.792 13:39:21 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:05:07.792 13:39:21 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:07.792 13:39:21 event -- common/autotest_common.sh@10 -- # set +x 00:05:07.793 ************************************ 00:05:07.793 START TEST event_perf 00:05:07.793 ************************************ 00:05:07.793 13:39:21 event.event_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:08.053 Running I/O for 1 seconds...[2024-10-15 13:39:21.598909] Starting SPDK v25.01-pre git sha1 5a8c76d99 / DPDK 24.03.0 initialization... 00:05:08.053 [2024-10-15 13:39:21.599018] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58188 ] 00:05:08.053 [2024-10-15 13:39:21.746107] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:08.313 [2024-10-15 13:39:21.847595] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:08.313 [2024-10-15 13:39:21.847865] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:08.313 [2024-10-15 13:39:21.848302] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:08.313 [2024-10-15 13:39:21.848531] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:09.248 Running I/O for 1 seconds... 00:05:09.248 lcore 0: 188846 00:05:09.248 lcore 1: 188846 00:05:09.248 lcore 2: 188846 00:05:09.248 lcore 3: 188847 00:05:09.248 done. 
00:05:09.248 00:05:09.248 real 0m1.447s 00:05:09.248 user 0m4.254s 00:05:09.248 sys 0m0.073s 00:05:09.248 13:39:23 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:09.248 13:39:23 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:09.248 ************************************ 00:05:09.248 END TEST event_perf 00:05:09.248 ************************************ 00:05:09.507 13:39:23 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:09.507 13:39:23 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:05:09.507 13:39:23 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:09.507 13:39:23 event -- common/autotest_common.sh@10 -- # set +x 00:05:09.507 ************************************ 00:05:09.507 START TEST event_reactor 00:05:09.507 ************************************ 00:05:09.507 13:39:23 event.event_reactor -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:09.507 [2024-10-15 13:39:23.099942] Starting SPDK v25.01-pre git sha1 5a8c76d99 / DPDK 24.03.0 initialization... 00:05:09.507 [2024-10-15 13:39:23.100052] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58233 ] 00:05:09.507 [2024-10-15 13:39:23.248149] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:09.765 [2024-10-15 13:39:23.348052] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:10.735 test_start 00:05:10.735 oneshot 00:05:10.735 tick 100 00:05:10.735 tick 100 00:05:10.735 tick 250 00:05:10.735 tick 100 00:05:10.735 tick 100 00:05:10.735 tick 100 00:05:10.735 tick 250 00:05:10.735 tick 500 00:05:10.735 tick 100 00:05:10.735 tick 100 00:05:10.735 tick 250 00:05:10.735 tick 100 00:05:10.735 tick 100 00:05:10.735 test_end 00:05:10.735 00:05:10.735 real 0m1.437s 00:05:10.735 user 0m1.259s 00:05:10.735 sys 0m0.069s 00:05:10.735 13:39:24 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:10.735 13:39:24 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:10.735 ************************************ 00:05:10.735 END TEST event_reactor 00:05:10.735 ************************************ 00:05:10.992 13:39:24 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:10.992 13:39:24 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:05:10.992 13:39:24 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:10.992 13:39:24 event -- common/autotest_common.sh@10 -- # set +x 00:05:10.992 ************************************ 00:05:10.992 START TEST event_reactor_perf 00:05:10.992 ************************************ 00:05:10.992 13:39:24 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:10.992 [2024-10-15 13:39:24.604344] Starting SPDK v25.01-pre git sha1 5a8c76d99 / DPDK 24.03.0 initialization... 
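Each suite in this log is bracketed by the same START TEST / END TEST banners and a real/user/sys timing block. A plausible minimal shape for the run_test wrapper producing them, assuming bash; the banner width and argument handling here are illustrative, not SPDK's actual common scripts:

    # run_test-style wrapper: banner, timed run, banner, preserved exit code.
    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"          # the bash keyword emits the real/user/sys lines
        local rc=$?        # exit status of the timed command
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return $rc
    }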
00:05:10.992 [2024-10-15 13:39:24.604451] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58264 ] 00:05:10.992 [2024-10-15 13:39:24.753565] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:11.251 [2024-10-15 13:39:24.850595] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:12.185 test_start 00:05:12.185 test_end 00:05:12.185 Performance: 312550 events per second 00:05:12.185 00:05:12.185 real 0m1.398s 00:05:12.185 user 0m1.223s 00:05:12.185 sys 0m0.067s 00:05:12.443 ************************************ 00:05:12.443 END TEST event_reactor_perf 00:05:12.443 ************************************ 00:05:12.443 13:39:25 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:12.443 13:39:25 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:12.443 13:39:26 event -- event/event.sh@49 -- # uname -s 00:05:12.443 13:39:26 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:12.443 13:39:26 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:12.443 13:39:26 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:12.443 13:39:26 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:12.443 13:39:26 event -- common/autotest_common.sh@10 -- # set +x 00:05:12.443 ************************************ 00:05:12.443 START TEST event_scheduler 00:05:12.443 ************************************ 00:05:12.443 13:39:26 event.event_scheduler -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:12.443 * Looking for test storage... 
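The lt/cmp_versions preamble that opened TEST event recurs below for the scheduler suite: split both versions on '.', '-' or ':' and compare numerically field by field, padding the shorter one with zeros, so "lt 1.15 2" holds because 1 < 2 in the first field. A simplified sketch of that comparison, assuming bash and purely numeric fields (the real scripts/common.sh also validates each field and supports other operators):

    # Return 0 (true) when version $1 is strictly older than version $2.
    version_lt() {
        local IFS='.-:'                       # split fields like cmp_versions does
        read -ra a <<< "$1"
        read -ra b <<< "$2"
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < n; i++ )); do
            local x=${a[i]:-0} y=${b[i]:-0}   # missing fields count as 0
            (( x < y )) && return 0
            (( x > y )) && return 1
        done
        return 1                              # equal versions are not less-than
    }
    version_lt 1.15 2 && echo "lcov 1.15 is older than 2"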
00:05:12.443 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:12.443 13:39:26 event.event_scheduler -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:12.443 13:39:26 event.event_scheduler -- common/autotest_common.sh@1691 -- # lcov --version 00:05:12.443 13:39:26 event.event_scheduler -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:12.443 13:39:26 event.event_scheduler -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:12.443 13:39:26 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:12.444 13:39:26 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:12.444 13:39:26 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:12.444 13:39:26 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:12.444 13:39:26 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:12.444 13:39:26 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:12.444 13:39:26 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:12.444 13:39:26 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:12.444 13:39:26 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:12.444 13:39:26 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:12.444 13:39:26 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:12.444 13:39:26 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:12.444 13:39:26 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:12.444 13:39:26 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:12.444 13:39:26 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:12.444 13:39:26 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:12.444 13:39:26 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:12.444 13:39:26 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:12.444 13:39:26 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:12.444 13:39:26 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:12.444 13:39:26 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:12.444 13:39:26 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:12.444 13:39:26 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:12.444 13:39:26 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:12.444 13:39:26 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:12.444 13:39:26 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:12.444 13:39:26 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:12.444 13:39:26 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:12.444 13:39:26 event.event_scheduler -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:12.444 13:39:26 event.event_scheduler -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:12.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.444 --rc genhtml_branch_coverage=1 00:05:12.444 --rc genhtml_function_coverage=1 00:05:12.444 --rc genhtml_legend=1 00:05:12.444 --rc geninfo_all_blocks=1 00:05:12.444 --rc geninfo_unexecuted_blocks=1 00:05:12.444 00:05:12.444 ' 00:05:12.444 13:39:26 event.event_scheduler -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:12.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.444 --rc genhtml_branch_coverage=1 00:05:12.444 --rc genhtml_function_coverage=1 00:05:12.444 --rc genhtml_legend=1 00:05:12.444 --rc geninfo_all_blocks=1 00:05:12.444 --rc geninfo_unexecuted_blocks=1 00:05:12.444 00:05:12.444 ' 00:05:12.444 13:39:26 event.event_scheduler -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:12.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.444 --rc genhtml_branch_coverage=1 00:05:12.444 --rc genhtml_function_coverage=1 00:05:12.444 --rc genhtml_legend=1 00:05:12.444 --rc geninfo_all_blocks=1 00:05:12.444 --rc geninfo_unexecuted_blocks=1 00:05:12.444 00:05:12.444 ' 00:05:12.444 13:39:26 event.event_scheduler -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:12.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.444 --rc genhtml_branch_coverage=1 00:05:12.444 --rc genhtml_function_coverage=1 00:05:12.444 --rc genhtml_legend=1 00:05:12.444 --rc geninfo_all_blocks=1 00:05:12.444 --rc geninfo_unexecuted_blocks=1 00:05:12.444 00:05:12.444 ' 00:05:12.444 13:39:26 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:12.444 13:39:26 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58340 00:05:12.444 13:39:26 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:12.444 13:39:26 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58340 00:05:12.444 13:39:26 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 58340 ']' 00:05:12.444 13:39:26 event.event_scheduler -- common/autotest_common.sh@835 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:05:12.444 13:39:26 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:12.444 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:12.444 13:39:26 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:12.444 13:39:26 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:12.444 13:39:26 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:12.444 13:39:26 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:12.702 [2024-10-15 13:39:26.233718] Starting SPDK v25.01-pre git sha1 5a8c76d99 / DPDK 24.03.0 initialization... 00:05:12.702 [2024-10-15 13:39:26.233838] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58340 ] 00:05:12.702 [2024-10-15 13:39:26.380326] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:12.702 [2024-10-15 13:39:26.481091] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:12.702 [2024-10-15 13:39:26.481411] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:12.702 [2024-10-15 13:39:26.481694] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:12.702 [2024-10-15 13:39:26.481712] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:13.636 13:39:27 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:13.636 13:39:27 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:05:13.636 13:39:27 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:13.636 13:39:27 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:13.636 13:39:27 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:13.636 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:13.636 POWER: Cannot set governor of lcore 0 to userspace 00:05:13.636 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:13.636 POWER: Cannot set governor of lcore 0 to performance 00:05:13.636 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:13.636 POWER: Cannot set governor of lcore 0 to userspace 00:05:13.636 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:13.636 POWER: Cannot set governor of lcore 0 to userspace 00:05:13.636 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:05:13.636 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:13.636 POWER: Unable to set Power Management Environment for lcore 0 00:05:13.636 [2024-10-15 13:39:27.083697] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:05:13.636 [2024-10-15 13:39:27.083730] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:05:13.636 [2024-10-15 13:39:27.083789] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:13.636 [2024-10-15 13:39:27.083820] 
scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:13.636 [2024-10-15 13:39:27.083840] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:13.636 [2024-10-15 13:39:27.083859] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:13.636 13:39:27 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:13.636 13:39:27 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:13.636 13:39:27 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:13.636 13:39:27 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:13.636 [2024-10-15 13:39:27.303018] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:05:13.636 13:39:27 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:13.636 13:39:27 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:13.636 13:39:27 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:13.636 13:39:27 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:13.636 13:39:27 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:13.636 ************************************ 00:05:13.636 START TEST scheduler_create_thread 00:05:13.636 ************************************ 00:05:13.636 13:39:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:05:13.636 13:39:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:13.636 13:39:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:13.636 13:39:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:13.636 2 00:05:13.636 13:39:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:13.636 13:39:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:13.636 13:39:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:13.636 13:39:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:13.636 3 00:05:13.636 13:39:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:13.636 13:39:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:13.636 13:39:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:13.636 13:39:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:13.637 4 00:05:13.637 13:39:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:13.637 13:39:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:13.637 13:39:27 event.event_scheduler.scheduler_create_thread 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:05:13.637 13:39:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:13.637 5 00:05:13.637 13:39:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:13.637 13:39:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:13.637 13:39:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:13.637 13:39:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:13.637 6 00:05:13.637 13:39:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:13.637 13:39:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:13.637 13:39:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:13.637 13:39:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:13.637 7 00:05:13.637 13:39:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:13.637 13:39:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:13.637 13:39:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:13.637 13:39:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:13.637 8 00:05:13.637 13:39:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:13.637 13:39:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:13.637 13:39:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:13.637 13:39:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:13.637 9 00:05:13.637 13:39:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:13.637 13:39:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:13.637 13:39:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:13.637 13:39:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:13.637 10 00:05:13.637 13:39:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:13.637 13:39:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:13.637 13:39:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:13.637 13:39:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:13.637 13:39:27 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:13.637 13:39:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:13.637 13:39:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:13.637 13:39:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:13.637 13:39:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:13.637 13:39:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:13.637 13:39:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:13.637 13:39:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:13.637 13:39:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:13.895 13:39:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:13.895 13:39:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:13.895 13:39:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:13.895 13:39:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:13.895 13:39:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:14.828 13:39:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:14.828 00:05:14.828 real 0m1.174s 00:05:14.828 user 0m0.014s 00:05:14.828 sys 0m0.006s 00:05:14.828 13:39:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:14.828 13:39:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:14.828 ************************************ 00:05:14.828 END TEST scheduler_create_thread 00:05:14.828 ************************************ 00:05:14.828 13:39:28 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:14.828 13:39:28 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58340 00:05:14.828 13:39:28 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 58340 ']' 00:05:14.828 13:39:28 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 58340 00:05:14.828 13:39:28 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:05:14.828 13:39:28 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:14.828 13:39:28 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58340 00:05:14.828 13:39:28 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:05:14.828 13:39:28 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:05:14.828 killing process with pid 58340 00:05:14.828 13:39:28 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58340' 00:05:14.828 13:39:28 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 58340 00:05:14.828 13:39:28 event.event_scheduler -- 
common/autotest_common.sh@974 -- # wait 58340 00:05:15.395 [2024-10-15 13:39:28.972137] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:05:15.962 00:05:15.962 real 0m3.496s 00:05:15.962 user 0m5.756s 00:05:15.962 sys 0m0.352s 00:05:15.962 ************************************ 00:05:15.962 END TEST event_scheduler 00:05:15.962 ************************************ 00:05:15.962 13:39:29 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:15.962 13:39:29 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:15.962 13:39:29 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:15.962 13:39:29 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:15.962 13:39:29 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:15.962 13:39:29 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:15.962 13:39:29 event -- common/autotest_common.sh@10 -- # set +x 00:05:15.962 ************************************ 00:05:15.962 START TEST app_repeat 00:05:15.962 ************************************ 00:05:15.962 13:39:29 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:05:15.962 13:39:29 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:15.962 13:39:29 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:15.962 13:39:29 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:15.962 13:39:29 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:15.962 13:39:29 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:15.962 13:39:29 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:15.962 13:39:29 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:15.962 13:39:29 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58424 00:05:15.962 13:39:29 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:15.962 13:39:29 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58424' 00:05:15.962 Process app_repeat pid: 58424 00:05:15.962 13:39:29 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:15.962 spdk_app_start Round 0 00:05:15.962 13:39:29 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:15.962 13:39:29 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:15.962 13:39:29 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58424 /var/tmp/spdk-nbd.sock 00:05:15.962 13:39:29 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 58424 ']' 00:05:15.962 13:39:29 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:15.962 13:39:29 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:15.962 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:15.962 13:39:29 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:15.962 13:39:29 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:15.962 13:39:29 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:15.962 [2024-10-15 13:39:29.628823] Starting SPDK v25.01-pre git sha1 5a8c76d99 / DPDK 24.03.0 initialization... 
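The scheduler suite that just finished is driven entirely over JSON-RPC: the app starts with --wait-for-rpc on cores 0-3 (-m 0xF is binary 1111) with core 2 as main (the EAL line above shows --main-lcore=2), then a scheduler is selected, init is completed, and simulated threads are created, re-weighted, and deleted through a test plugin. A condensed sketch of that RPC sequence using the paths and flags visible in the trace; thread IDs 11 and 12 are the ones this particular run returned, a fresh run may number them differently, and scheduler_plugin must be importable by rpc.py:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc framework_set_scheduler dynamic      # choose the dynamic scheduler
    $rpc framework_start_init                 # complete --wait-for-rpc startup
    # -m is an lcore bitmask (0x1 = core 0, 0x8 = core 3); -a is the
    # thread's simulated busy percentage (0 = idle, 100 = fully active).
    $rpc --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
    $rpc --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0
    $rpc --plugin scheduler_plugin scheduler_thread_set_active 11 50
    $rpc --plugin scheduler_plugin scheduler_thread_delete 12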
00:05:15.962 [2024-10-15 13:39:29.628907] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58424 ] 00:05:16.221 [2024-10-15 13:39:29.771282] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:16.221 [2024-10-15 13:39:29.850603] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:16.221 [2024-10-15 13:39:29.850677] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.790 13:39:30 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:16.790 13:39:30 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:16.790 13:39:30 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:17.049 Malloc0 00:05:17.049 13:39:30 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:17.307 Malloc1 00:05:17.307 13:39:30 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:17.307 13:39:30 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:17.307 13:39:30 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:17.307 13:39:30 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:17.307 13:39:30 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:17.307 13:39:30 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:17.307 13:39:30 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:17.307 13:39:30 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:17.307 13:39:30 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:17.307 13:39:30 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:17.307 13:39:30 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:17.307 13:39:30 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:17.307 13:39:30 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:17.307 13:39:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:17.307 13:39:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:17.307 13:39:30 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:17.565 /dev/nbd0 00:05:17.565 13:39:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:17.565 13:39:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:17.565 13:39:31 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:17.565 13:39:31 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:17.565 13:39:31 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:17.565 13:39:31 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:17.565 13:39:31 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:17.565 13:39:31 event.app_repeat -- 
common/autotest_common.sh@873 -- # break 00:05:17.565 13:39:31 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:17.565 13:39:31 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:17.565 13:39:31 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:17.565 1+0 records in 00:05:17.565 1+0 records out 00:05:17.565 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000304254 s, 13.5 MB/s 00:05:17.565 13:39:31 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:17.565 13:39:31 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:17.565 13:39:31 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:17.565 13:39:31 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:17.565 13:39:31 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:17.565 13:39:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:17.565 13:39:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:17.565 13:39:31 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:17.823 /dev/nbd1 00:05:17.823 13:39:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:17.823 13:39:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:17.823 13:39:31 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:17.823 13:39:31 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:17.823 13:39:31 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:17.823 13:39:31 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:17.823 13:39:31 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:17.823 13:39:31 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:17.823 13:39:31 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:17.823 13:39:31 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:17.823 13:39:31 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:17.823 1+0 records in 00:05:17.823 1+0 records out 00:05:17.824 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000196941 s, 20.8 MB/s 00:05:17.824 13:39:31 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:17.824 13:39:31 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:17.824 13:39:31 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:17.824 13:39:31 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:17.824 13:39:31 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:17.824 13:39:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:17.824 13:39:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:17.824 13:39:31 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:17.824 13:39:31 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 
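
The trace above shows SPDK's waitfornbd helper in action: after each nbd_start_disk RPC exports a Malloc bdev as /dev/nbdN, the test polls /proc/partitions until the kernel registers the device, then reads one 4 KiB block with O_DIRECT and checks that the copy is non-empty. A condensed sketch of that pattern, paraphrased from the trace (the retry delay is an assumption; xtrace does not show it):

    # Paraphrased from the common/autotest_common.sh helper traced above.
    waitfornbd_sketch() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do           # the trace caps retries at 20
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1                             # assumed delay between polls
        done
        # One O_DIRECT read proves real I/O reaches the backing bdev.
        dd if=/dev/"$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct || return 1
        local size
        size=$(stat -c %s /tmp/nbdtest)
        rm -f /tmp/nbdtest
        [ "$size" != 0 ]                          # an empty file means the read failed
    }
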
00:05:17.824 13:39:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:17.824 13:39:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:17.824 { 00:05:17.824 "nbd_device": "/dev/nbd0", 00:05:17.824 "bdev_name": "Malloc0" 00:05:17.824 }, 00:05:17.824 { 00:05:17.824 "nbd_device": "/dev/nbd1", 00:05:17.824 "bdev_name": "Malloc1" 00:05:17.824 } 00:05:17.824 ]' 00:05:17.824 13:39:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:17.824 13:39:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:17.824 { 00:05:17.824 "nbd_device": "/dev/nbd0", 00:05:17.824 "bdev_name": "Malloc0" 00:05:17.824 }, 00:05:17.824 { 00:05:17.824 "nbd_device": "/dev/nbd1", 00:05:17.824 "bdev_name": "Malloc1" 00:05:17.824 } 00:05:17.824 ]' 00:05:18.082 13:39:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:18.082 /dev/nbd1' 00:05:18.082 13:39:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:18.082 13:39:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:18.082 /dev/nbd1' 00:05:18.082 13:39:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:18.082 13:39:31 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:18.082 13:39:31 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:18.082 13:39:31 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:18.082 13:39:31 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:18.082 13:39:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:18.082 13:39:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:18.082 13:39:31 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:18.082 13:39:31 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:18.082 13:39:31 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:18.082 13:39:31 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:18.082 256+0 records in 00:05:18.082 256+0 records out 00:05:18.082 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00764334 s, 137 MB/s 00:05:18.082 13:39:31 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:18.082 13:39:31 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:18.082 256+0 records in 00:05:18.082 256+0 records out 00:05:18.082 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0164438 s, 63.8 MB/s 00:05:18.082 13:39:31 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:18.082 13:39:31 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:18.082 256+0 records in 00:05:18.082 256+0 records out 00:05:18.082 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0175873 s, 59.6 MB/s 00:05:18.082 13:39:31 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:18.082 13:39:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:18.082 13:39:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:18.082 13:39:31 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:18.082 13:39:31 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:18.082 13:39:31 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:18.082 13:39:31 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:18.082 13:39:31 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:18.082 13:39:31 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:18.082 13:39:31 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:18.082 13:39:31 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:18.082 13:39:31 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:18.082 13:39:31 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:18.082 13:39:31 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:18.082 13:39:31 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:18.082 13:39:31 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:18.082 13:39:31 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:18.082 13:39:31 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:18.082 13:39:31 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:18.341 13:39:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:18.341 13:39:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:18.341 13:39:31 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:18.341 13:39:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:18.341 13:39:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:18.341 13:39:31 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:18.341 13:39:31 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:18.341 13:39:31 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:18.341 13:39:31 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:18.341 13:39:31 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:18.341 13:39:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:18.341 13:39:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:18.341 13:39:32 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:18.341 13:39:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:18.341 13:39:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:18.341 13:39:32 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:18.341 13:39:32 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:18.341 13:39:32 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:18.341 13:39:32 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:18.341 13:39:32 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:18.341 13:39:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:18.628 13:39:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:18.628 13:39:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:18.628 13:39:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:18.628 13:39:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:18.628 13:39:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:18.628 13:39:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:18.628 13:39:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:18.628 13:39:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:18.628 13:39:32 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:18.628 13:39:32 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:18.628 13:39:32 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:18.628 13:39:32 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:18.628 13:39:32 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:18.886 13:39:32 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:19.453 [2024-10-15 13:39:33.181693] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:19.714 [2024-10-15 13:39:33.251305] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:19.714 [2024-10-15 13:39:33.251474] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.714 [2024-10-15 13:39:33.348058] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:19.714 [2024-10-15 13:39:33.348106] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:22.258 spdk_app_start Round 1 00:05:22.258 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:22.258 13:39:35 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:22.258 13:39:35 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:22.258 13:39:35 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58424 /var/tmp/spdk-nbd.sock 00:05:22.258 13:39:35 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 58424 ']' 00:05:22.258 13:39:35 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:22.258 13:39:35 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:22.258 13:39:35 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
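
Round 0 ends just above with spdk_kill_instance SIGTERM followed by a three-second pause, after which Round 1 restarts the app and waits on the same UNIX socket. Before that teardown, the round ran the nbd_dd_data_verify write pass traced earlier; its core, condensed from the dd and cmp invocations in the trace (temp-file path shortened for readability):

    tmp_file=/tmp/nbdrandtest
    dd if=/dev/urandom of=$tmp_file bs=4096 count=256   # 1 MiB of random data
    for dev in /dev/nbd0 /dev/nbd1; do
        # O_DIRECT bypasses the page cache, so the bytes must reach the bdev.
        dd if=$tmp_file of=$dev bs=4096 count=256 oflag=direct
    done
    for dev in /dev/nbd0 /dev/nbd1; do
        cmp -b -n 1M $tmp_file $dev                     # byte-for-byte readback check
    done
    rm $tmp_file
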
00:05:22.258 13:39:35 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:22.258 13:39:35 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:22.258 13:39:35 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:22.258 13:39:35 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:22.258 13:39:35 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:22.516 Malloc0 00:05:22.516 13:39:36 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:22.516 Malloc1 00:05:22.516 13:39:36 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:22.516 13:39:36 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:22.516 13:39:36 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:22.516 13:39:36 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:22.516 13:39:36 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:22.516 13:39:36 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:22.516 13:39:36 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:22.516 13:39:36 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:22.516 13:39:36 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:22.516 13:39:36 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:22.516 13:39:36 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:22.516 13:39:36 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:22.516 13:39:36 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:22.516 13:39:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:22.516 13:39:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:22.516 13:39:36 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:22.776 /dev/nbd0 00:05:22.776 13:39:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:22.776 13:39:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:22.776 13:39:36 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:22.776 13:39:36 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:22.776 13:39:36 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:22.776 13:39:36 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:22.776 13:39:36 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:22.776 13:39:36 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:22.776 13:39:36 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:22.776 13:39:36 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:22.776 13:39:36 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:22.776 1+0 records in 00:05:22.776 1+0 records out 
00:05:22.776 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000248771 s, 16.5 MB/s 00:05:22.776 13:39:36 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:22.776 13:39:36 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:22.776 13:39:36 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:22.776 13:39:36 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:22.776 13:39:36 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:22.776 13:39:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:22.776 13:39:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:22.776 13:39:36 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:23.037 /dev/nbd1 00:05:23.037 13:39:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:23.037 13:39:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:23.037 13:39:36 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:23.037 13:39:36 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:23.037 13:39:36 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:23.037 13:39:36 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:23.037 13:39:36 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:23.037 13:39:36 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:23.037 13:39:36 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:23.037 13:39:36 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:23.037 13:39:36 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:23.037 1+0 records in 00:05:23.037 1+0 records out 00:05:23.037 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000203928 s, 20.1 MB/s 00:05:23.037 13:39:36 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:23.037 13:39:36 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:23.037 13:39:36 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:23.037 13:39:36 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:23.037 13:39:36 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:23.037 13:39:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:23.037 13:39:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:23.037 13:39:36 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:23.037 13:39:36 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:23.037 13:39:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:23.298 13:39:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:23.298 { 00:05:23.298 "nbd_device": "/dev/nbd0", 00:05:23.298 "bdev_name": "Malloc0" 00:05:23.298 }, 00:05:23.298 { 00:05:23.298 "nbd_device": "/dev/nbd1", 00:05:23.298 "bdev_name": "Malloc1" 00:05:23.298 } 
00:05:23.298 ]' 00:05:23.298 13:39:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:23.298 { 00:05:23.298 "nbd_device": "/dev/nbd0", 00:05:23.298 "bdev_name": "Malloc0" 00:05:23.298 }, 00:05:23.298 { 00:05:23.298 "nbd_device": "/dev/nbd1", 00:05:23.298 "bdev_name": "Malloc1" 00:05:23.298 } 00:05:23.298 ]' 00:05:23.298 13:39:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:23.298 13:39:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:23.298 /dev/nbd1' 00:05:23.298 13:39:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:23.298 /dev/nbd1' 00:05:23.298 13:39:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:23.298 13:39:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:23.298 13:39:36 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:23.298 13:39:36 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:23.298 13:39:36 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:23.298 13:39:36 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:23.298 13:39:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:23.298 13:39:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:23.298 13:39:36 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:23.298 13:39:36 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:23.298 13:39:36 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:23.298 13:39:36 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:23.298 256+0 records in 00:05:23.298 256+0 records out 00:05:23.298 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00757937 s, 138 MB/s 00:05:23.298 13:39:37 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:23.298 13:39:37 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:23.298 256+0 records in 00:05:23.298 256+0 records out 00:05:23.298 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0178939 s, 58.6 MB/s 00:05:23.298 13:39:37 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:23.298 13:39:37 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:23.298 256+0 records in 00:05:23.298 256+0 records out 00:05:23.298 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0159578 s, 65.7 MB/s 00:05:23.298 13:39:37 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:23.298 13:39:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:23.298 13:39:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:23.298 13:39:37 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:23.298 13:39:37 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:23.298 13:39:37 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:23.298 13:39:37 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:23.298 13:39:37 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:23.298 13:39:37 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:23.298 13:39:37 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:23.298 13:39:37 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:23.298 13:39:37 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:23.298 13:39:37 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:23.298 13:39:37 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:23.298 13:39:37 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:23.298 13:39:37 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:23.298 13:39:37 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:23.298 13:39:37 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:23.298 13:39:37 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:23.556 13:39:37 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:23.556 13:39:37 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:23.556 13:39:37 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:23.556 13:39:37 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:23.556 13:39:37 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:23.556 13:39:37 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:23.556 13:39:37 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:23.556 13:39:37 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:23.556 13:39:37 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:23.557 13:39:37 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:23.814 13:39:37 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:23.814 13:39:37 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:23.814 13:39:37 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:23.814 13:39:37 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:23.814 13:39:37 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:23.814 13:39:37 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:23.814 13:39:37 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:23.814 13:39:37 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:23.814 13:39:37 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:23.814 13:39:37 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:23.814 13:39:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:24.073 13:39:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:24.073 13:39:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:24.073 13:39:37 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:05:24.073 13:39:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:24.073 13:39:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:24.073 13:39:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:24.073 13:39:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:24.073 13:39:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:24.073 13:39:37 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:24.073 13:39:37 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:24.073 13:39:37 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:24.073 13:39:37 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:24.073 13:39:37 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:24.330 13:39:37 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:24.896 [2024-10-15 13:39:38.539809] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:24.896 [2024-10-15 13:39:38.610100] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:24.896 [2024-10-15 13:39:38.610195] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.154 [2024-10-15 13:39:38.705408] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:25.154 [2024-10-15 13:39:38.705457] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:27.684 13:39:40 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:27.684 spdk_app_start Round 2 00:05:27.684 13:39:40 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:27.684 13:39:40 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58424 /var/tmp/spdk-nbd.sock 00:05:27.684 13:39:40 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 58424 ']' 00:05:27.684 13:39:40 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:27.684 13:39:40 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:27.684 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:27.694 13:39:40 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
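
The 'spdk_app_start Round 2' banner above marks the third pass of the loop in event/event.sh. Each pass waits for the restarted app's RPC socket, recreates the two Malloc bdevs (they do not survive the restart), reruns the nbd verification, and sends SIGTERM so the app_repeat binary starts the next round. A rough reconstruction of that loop from the trace, with the pid left symbolic (the trace pins it to 58424):

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    for i in {0..2}; do
        echo "spdk_app_start Round $i"
        waitforlisten "$APP_PID" /var/tmp/spdk-nbd.sock   # RPC socket is up
        $rpc bdev_malloc_create 64 4096                   # -> Malloc0
        $rpc bdev_malloc_create 64 4096                   # -> Malloc1
        nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
        $rpc spdk_kill_instance SIGTERM                   # triggers the next round
        sleep 3
    done
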
00:05:27.694 13:39:40 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:27.694 13:39:40 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:27.694 13:39:41 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:27.694 13:39:41 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:27.694 13:39:41 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:27.694 Malloc0 00:05:27.694 13:39:41 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:27.952 Malloc1 00:05:27.952 13:39:41 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:27.952 13:39:41 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:27.952 13:39:41 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:27.952 13:39:41 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:27.952 13:39:41 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:27.952 13:39:41 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:27.952 13:39:41 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:27.952 13:39:41 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:27.952 13:39:41 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:27.952 13:39:41 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:27.952 13:39:41 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:27.952 13:39:41 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:27.952 13:39:41 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:27.952 13:39:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:27.952 13:39:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:27.952 13:39:41 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:28.210 /dev/nbd0 00:05:28.210 13:39:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:28.210 13:39:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:28.210 13:39:41 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:28.210 13:39:41 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:28.210 13:39:41 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:28.210 13:39:41 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:28.210 13:39:41 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:28.210 13:39:41 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:28.210 13:39:41 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:28.210 13:39:41 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:28.210 13:39:41 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:28.210 1+0 records in 00:05:28.210 1+0 records out 
00:05:28.210 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000155983 s, 26.3 MB/s 00:05:28.210 13:39:41 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:28.210 13:39:41 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:28.210 13:39:41 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:28.210 13:39:41 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:28.210 13:39:41 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:28.210 13:39:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:28.210 13:39:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:28.210 13:39:41 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:28.471 /dev/nbd1 00:05:28.471 13:39:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:28.471 13:39:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:28.471 13:39:42 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:28.471 13:39:42 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:28.471 13:39:42 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:28.471 13:39:42 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:28.471 13:39:42 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:28.471 13:39:42 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:28.471 13:39:42 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:28.471 13:39:42 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:28.471 13:39:42 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:28.471 1+0 records in 00:05:28.471 1+0 records out 00:05:28.471 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000259256 s, 15.8 MB/s 00:05:28.471 13:39:42 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:28.471 13:39:42 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:28.471 13:39:42 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:28.471 13:39:42 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:28.471 13:39:42 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:28.471 13:39:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:28.471 13:39:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:28.471 13:39:42 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:28.471 13:39:42 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:28.471 13:39:42 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:28.733 13:39:42 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:28.733 { 00:05:28.733 "nbd_device": "/dev/nbd0", 00:05:28.733 "bdev_name": "Malloc0" 00:05:28.733 }, 00:05:28.733 { 00:05:28.733 "nbd_device": "/dev/nbd1", 00:05:28.733 "bdev_name": "Malloc1" 00:05:28.733 } 
00:05:28.733 ]' 00:05:28.733 13:39:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:28.733 { 00:05:28.733 "nbd_device": "/dev/nbd0", 00:05:28.733 "bdev_name": "Malloc0" 00:05:28.733 }, 00:05:28.733 { 00:05:28.733 "nbd_device": "/dev/nbd1", 00:05:28.733 "bdev_name": "Malloc1" 00:05:28.733 } 00:05:28.733 ]' 00:05:28.733 13:39:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:28.733 13:39:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:28.733 /dev/nbd1' 00:05:28.733 13:39:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:28.733 13:39:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:28.733 /dev/nbd1' 00:05:28.733 13:39:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:28.733 13:39:42 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:28.733 13:39:42 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:28.733 13:39:42 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:28.733 13:39:42 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:28.733 13:39:42 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:28.733 13:39:42 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:28.733 13:39:42 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:28.733 13:39:42 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:28.733 13:39:42 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:28.733 13:39:42 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:28.733 256+0 records in 00:05:28.733 256+0 records out 00:05:28.733 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00837135 s, 125 MB/s 00:05:28.733 13:39:42 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:28.733 13:39:42 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:28.733 256+0 records in 00:05:28.733 256+0 records out 00:05:28.733 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0151926 s, 69.0 MB/s 00:05:28.733 13:39:42 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:28.733 13:39:42 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:28.733 256+0 records in 00:05:28.733 256+0 records out 00:05:28.733 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0169027 s, 62.0 MB/s 00:05:28.733 13:39:42 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:28.733 13:39:42 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:28.733 13:39:42 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:28.733 13:39:42 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:28.733 13:39:42 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:28.733 13:39:42 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:28.733 13:39:42 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:28.733 13:39:42 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:28.733 13:39:42 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:28.733 13:39:42 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:28.733 13:39:42 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:28.733 13:39:42 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:28.733 13:39:42 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:28.733 13:39:42 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:28.733 13:39:42 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:28.733 13:39:42 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:28.733 13:39:42 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:28.733 13:39:42 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:28.733 13:39:42 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:28.992 13:39:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:28.992 13:39:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:28.992 13:39:42 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:28.992 13:39:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:28.992 13:39:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:28.992 13:39:42 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:28.992 13:39:42 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:28.992 13:39:42 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:28.992 13:39:42 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:28.992 13:39:42 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:29.251 13:39:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:29.251 13:39:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:29.251 13:39:42 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:29.251 13:39:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:29.251 13:39:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:29.251 13:39:42 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:29.251 13:39:42 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:29.251 13:39:42 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:29.251 13:39:42 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:29.251 13:39:42 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:29.251 13:39:42 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:29.251 13:39:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:29.251 13:39:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:29.251 13:39:43 event.app_repeat -- 
bdev/nbd_common.sh@64 -- # echo '[]' 00:05:29.511 13:39:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:29.511 13:39:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:29.511 13:39:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:29.511 13:39:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:29.511 13:39:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:29.511 13:39:43 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:29.511 13:39:43 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:29.511 13:39:43 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:29.511 13:39:43 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:29.511 13:39:43 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:29.772 13:39:43 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:30.340 [2024-10-15 13:39:43.893931] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:30.340 [2024-10-15 13:39:43.963353] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:30.340 [2024-10-15 13:39:43.963550] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.340 [2024-10-15 13:39:44.065405] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:30.340 [2024-10-15 13:39:44.065453] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:32.884 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:32.884 13:39:46 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58424 /var/tmp/spdk-nbd.sock 00:05:32.884 13:39:46 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 58424 ']' 00:05:32.884 13:39:46 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:32.884 13:39:46 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:32.884 13:39:46 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
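
Between rounds the trace calls nbd_get_count to confirm every export was torn down: nbd_get_disks returns '[]' once both nbd_stop_disk calls complete, so the jq/grep pipeline must yield zero. A condensed sketch of that check, lifted from the traced helper:

    disks_json=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks)
    # grep -c exits non-zero on an empty list, hence the trailing true in the trace.
    count=$(echo "$disks_json" | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
    [ "$count" -ne 0 ] && echo "nbd devices still attached" >&2
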
00:05:32.884 13:39:46 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:32.884 13:39:46 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:32.884 13:39:46 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:32.884 13:39:46 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:32.884 13:39:46 event.app_repeat -- event/event.sh@39 -- # killprocess 58424 00:05:32.884 13:39:46 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 58424 ']' 00:05:32.884 13:39:46 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 58424 00:05:32.884 13:39:46 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:05:32.884 13:39:46 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:32.884 13:39:46 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58424 00:05:32.884 killing process with pid 58424 00:05:32.884 13:39:46 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:32.884 13:39:46 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:32.884 13:39:46 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58424' 00:05:32.884 13:39:46 event.app_repeat -- common/autotest_common.sh@969 -- # kill 58424 00:05:32.884 13:39:46 event.app_repeat -- common/autotest_common.sh@974 -- # wait 58424 00:05:33.456 spdk_app_start is called in Round 0. 00:05:33.456 Shutdown signal received, stop current app iteration 00:05:33.457 Starting SPDK v25.01-pre git sha1 5a8c76d99 / DPDK 24.03.0 reinitialization... 00:05:33.457 spdk_app_start is called in Round 1. 00:05:33.457 Shutdown signal received, stop current app iteration 00:05:33.457 Starting SPDK v25.01-pre git sha1 5a8c76d99 / DPDK 24.03.0 reinitialization... 00:05:33.457 spdk_app_start is called in Round 2. 00:05:33.457 Shutdown signal received, stop current app iteration 00:05:33.457 Starting SPDK v25.01-pre git sha1 5a8c76d99 / DPDK 24.03.0 reinitialization... 00:05:33.457 spdk_app_start is called in Round 3. 00:05:33.457 Shutdown signal received, stop current app iteration 00:05:33.457 ************************************ 00:05:33.457 END TEST app_repeat 00:05:33.457 ************************************ 00:05:33.457 13:39:47 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:33.457 13:39:47 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:33.457 00:05:33.457 real 0m17.491s 00:05:33.457 user 0m38.393s 00:05:33.457 sys 0m2.029s 00:05:33.457 13:39:47 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:33.457 13:39:47 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:33.457 13:39:47 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:33.457 13:39:47 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:33.457 13:39:47 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:33.457 13:39:47 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:33.457 13:39:47 event -- common/autotest_common.sh@10 -- # set +x 00:05:33.457 ************************************ 00:05:33.457 START TEST cpu_locks 00:05:33.457 ************************************ 00:05:33.457 13:39:47 event.cpu_locks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:33.457 * Looking for test storage... 
00:05:33.457 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:33.457 13:39:47 event.cpu_locks -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:33.457 13:39:47 event.cpu_locks -- common/autotest_common.sh@1691 -- # lcov --version 00:05:33.457 13:39:47 event.cpu_locks -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:33.719 13:39:47 event.cpu_locks -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:33.719 13:39:47 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:33.719 13:39:47 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:33.719 13:39:47 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:33.719 13:39:47 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:05:33.719 13:39:47 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:05:33.719 13:39:47 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:05:33.719 13:39:47 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:05:33.719 13:39:47 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:05:33.719 13:39:47 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:05:33.719 13:39:47 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:05:33.719 13:39:47 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:33.719 13:39:47 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:05:33.719 13:39:47 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:05:33.719 13:39:47 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:33.719 13:39:47 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:33.719 13:39:47 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:05:33.720 13:39:47 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:05:33.720 13:39:47 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:33.720 13:39:47 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:05:33.720 13:39:47 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:05:33.720 13:39:47 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:05:33.720 13:39:47 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:05:33.720 13:39:47 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:33.720 13:39:47 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:05:33.720 13:39:47 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:05:33.720 13:39:47 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:33.720 13:39:47 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:33.720 13:39:47 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:05:33.720 13:39:47 event.cpu_locks -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:33.720 13:39:47 event.cpu_locks -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:33.720 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.720 --rc genhtml_branch_coverage=1 00:05:33.720 --rc genhtml_function_coverage=1 00:05:33.720 --rc genhtml_legend=1 00:05:33.720 --rc geninfo_all_blocks=1 00:05:33.720 --rc geninfo_unexecuted_blocks=1 00:05:33.720 00:05:33.720 ' 00:05:33.720 13:39:47 event.cpu_locks -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:33.720 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.720 --rc genhtml_branch_coverage=1 00:05:33.720 --rc genhtml_function_coverage=1 
00:05:33.720 --rc genhtml_legend=1 00:05:33.720 --rc geninfo_all_blocks=1 00:05:33.720 --rc geninfo_unexecuted_blocks=1 00:05:33.720 00:05:33.720 ' 00:05:33.720 13:39:47 event.cpu_locks -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:33.720 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.720 --rc genhtml_branch_coverage=1 00:05:33.720 --rc genhtml_function_coverage=1 00:05:33.720 --rc genhtml_legend=1 00:05:33.720 --rc geninfo_all_blocks=1 00:05:33.720 --rc geninfo_unexecuted_blocks=1 00:05:33.720 00:05:33.720 ' 00:05:33.720 13:39:47 event.cpu_locks -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:33.720 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.720 --rc genhtml_branch_coverage=1 00:05:33.720 --rc genhtml_function_coverage=1 00:05:33.720 --rc genhtml_legend=1 00:05:33.720 --rc geninfo_all_blocks=1 00:05:33.720 --rc geninfo_unexecuted_blocks=1 00:05:33.720 00:05:33.720 ' 00:05:33.720 13:39:47 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:33.720 13:39:47 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:33.720 13:39:47 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:33.720 13:39:47 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:33.720 13:39:47 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:33.720 13:39:47 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:33.720 13:39:47 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:33.720 ************************************ 00:05:33.720 START TEST default_locks 00:05:33.720 ************************************ 00:05:33.720 13:39:47 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:05:33.720 13:39:47 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=58849 00:05:33.720 13:39:47 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 58849 00:05:33.720 13:39:47 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 58849 ']' 00:05:33.720 13:39:47 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:33.720 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:33.720 13:39:47 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:33.720 13:39:47 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:33.720 13:39:47 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:33.720 13:39:47 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:33.720 13:39:47 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:33.720 [2024-10-15 13:39:47.357749] Starting SPDK v25.01-pre git sha1 5a8c76d99 / DPDK 24.03.0 initialization... 
00:05:33.720 [2024-10-15 13:39:47.357837] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58849 ] 00:05:33.720 [2024-10-15 13:39:47.502183] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:33.982 [2024-10-15 13:39:47.584649] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.555 13:39:48 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:34.555 13:39:48 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:05:34.555 13:39:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 58849 00:05:34.555 13:39:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 58849 00:05:34.555 13:39:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:34.816 13:39:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 58849 00:05:34.816 13:39:48 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 58849 ']' 00:05:34.816 13:39:48 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 58849 00:05:34.816 13:39:48 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:05:34.816 13:39:48 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:34.816 13:39:48 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58849 00:05:34.816 13:39:48 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:34.816 13:39:48 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:34.816 killing process with pid 58849 00:05:34.816 13:39:48 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58849' 00:05:34.816 13:39:48 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 58849 00:05:34.816 13:39:48 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 58849 00:05:35.770 13:39:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 58849 00:05:35.770 13:39:49 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:05:35.770 13:39:49 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 58849 00:05:35.770 13:39:49 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:35.770 13:39:49 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:35.770 13:39:49 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:35.770 13:39:49 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:35.770 13:39:49 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 58849 00:05:35.770 13:39:49 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 58849 ']' 00:05:35.770 13:39:49 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:35.770 13:39:49 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:35.770 Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:35.770 13:39:49 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:35.770 ERROR: process (pid: 58849) is no longer running 00:05:35.770 13:39:49 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:35.770 13:39:49 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:35.770 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (58849) - No such process 00:05:35.770 13:39:49 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:35.770 13:39:49 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:05:35.770 13:39:49 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:05:35.770 13:39:49 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:35.770 13:39:49 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:35.770 13:39:49 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:35.770 13:39:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:35.770 13:39:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:35.770 13:39:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:35.770 ************************************ 00:05:35.770 END TEST default_locks 00:05:35.770 ************************************ 00:05:35.770 13:39:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:35.770 00:05:35.770 real 0m2.249s 00:05:35.770 user 0m2.227s 00:05:35.770 sys 0m0.410s 00:05:35.770 13:39:49 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:35.770 13:39:49 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:36.028 13:39:49 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:36.028 13:39:49 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:36.028 13:39:49 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:36.028 13:39:49 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:36.028 ************************************ 00:05:36.028 START TEST default_locks_via_rpc 00:05:36.028 ************************************ 00:05:36.028 13:39:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:05:36.028 13:39:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=58912 00:05:36.028 13:39:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 58912 00:05:36.028 13:39:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 58912 ']' 00:05:36.028 13:39:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:36.028 13:39:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:36.028 13:39:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:36.028 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
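The default_locks run that wraps up above reduces to one assertion: while spdk_tgt is alive, lslocks must show a file lock named spdk_cpu_lock for its pid, and after killprocess it must not. A minimal sketch of that check, using only the lslocks/grep pipeline visible in the trace (the wrapper function and echo are illustrative, not the real cpu_locks.sh helper):

    # locks_exist, after event/cpu_locks.sh@22 as traced above
    locks_exist() {
        local pid=$1
        lslocks -p "$pid" | grep -q spdk_cpu_lock   # lock file present for this pid?
    }

    locks_exist 58849 && echo "pid 58849 holds its per-core lock file"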
00:05:36.028 13:39:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:36.028 13:39:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:36.028 13:39:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:36.028 [2024-10-15 13:39:49.666802] Starting SPDK v25.01-pre git sha1 5a8c76d99 / DPDK 24.03.0 initialization... 00:05:36.028 [2024-10-15 13:39:49.666917] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58912 ] 00:05:36.028 [2024-10-15 13:39:49.814186] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.288 [2024-10-15 13:39:49.911369] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.855 13:39:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:36.855 13:39:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:36.855 13:39:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:36.855 13:39:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:36.855 13:39:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:36.855 13:39:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:36.855 13:39:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:36.855 13:39:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:36.855 13:39:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:36.855 13:39:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:36.855 13:39:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:36.855 13:39:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:36.855 13:39:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:36.855 13:39:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:36.855 13:39:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 58912 00:05:36.855 13:39:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 58912 00:05:36.855 13:39:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:37.114 13:39:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 58912 00:05:37.114 13:39:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 58912 ']' 00:05:37.114 13:39:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 58912 00:05:37.114 13:39:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:05:37.114 13:39:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:37.114 13:39:50 event.cpu_locks.default_locks_via_rpc -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58912 00:05:37.114 13:39:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:37.114 13:39:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:37.114 killing process with pid 58912 00:05:37.114 13:39:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58912' 00:05:37.114 13:39:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 58912 00:05:37.114 13:39:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 58912 00:05:38.489 00:05:38.489 real 0m2.558s 00:05:38.489 user 0m2.563s 00:05:38.489 sys 0m0.435s 00:05:38.489 13:39:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:38.489 ************************************ 00:05:38.489 END TEST default_locks_via_rpc 00:05:38.489 ************************************ 00:05:38.489 13:39:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:38.489 13:39:52 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:38.489 13:39:52 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:38.489 13:39:52 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:38.489 13:39:52 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:38.489 ************************************ 00:05:38.489 START TEST non_locking_app_on_locked_coremask 00:05:38.489 ************************************ 00:05:38.489 13:39:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:05:38.489 13:39:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=58965 00:05:38.489 13:39:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 58965 /var/tmp/spdk.sock 00:05:38.489 13:39:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 58965 ']' 00:05:38.489 13:39:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:38.489 13:39:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:38.489 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:38.489 13:39:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:38.489 13:39:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:38.489 13:39:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:38.489 13:39:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:38.749 [2024-10-15 13:39:52.291275] Starting SPDK v25.01-pre git sha1 5a8c76d99 / DPDK 24.03.0 initialization... 
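The default_locks_via_rpc variant that ends above exercises the same lock files over JSON-RPC instead of process restarts. Assuming rpc_cmd forwards to scripts/rpc.py against the socket in $rpc_addr (the RPC method names are taken from the trace), the round-trip is roughly:

    sock=/var/tmp/spdk.sock
    scripts/rpc.py -s "$sock" framework_disable_cpumask_locks   # release the per-core lock files
    scripts/rpc.py -s "$sock" framework_enable_cpumask_locks    # re-claim them
    lslocks -p "$spdk_tgt_pid" | grep -q spdk_cpu_lock          # and verify they are back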
00:05:38.750 [2024-10-15 13:39:52.291392] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58965 ] 00:05:38.750 [2024-10-15 13:39:52.441269] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:38.750 [2024-10-15 13:39:52.535855] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.685 13:39:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:39.685 13:39:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:39.685 13:39:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=58981 00:05:39.685 13:39:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 58981 /var/tmp/spdk2.sock 00:05:39.685 13:39:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 58981 ']' 00:05:39.685 13:39:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:39.685 13:39:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:39.685 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:39.685 13:39:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:39.685 13:39:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:39.685 13:39:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:39.685 13:39:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:39.685 [2024-10-15 13:39:53.194675] Starting SPDK v25.01-pre git sha1 5a8c76d99 / DPDK 24.03.0 initialization... 00:05:39.685 [2024-10-15 13:39:53.194790] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58981 ] 00:05:39.685 [2024-10-15 13:39:53.347199] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
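At this point the trace has both targets up: the second runs on the same core mask but was started with --disable-cpumask-locks and its own RPC socket, so it never touches the first target's lock file. Boiled down (backgrounding and pid bookkeeping here are simplifications of what waitforlisten manages in the real script):

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &
    spdk_tgt_pid=$!      # claims /var/tmp/spdk_cpu_lock_000
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 \
        --disable-cpumask-locks -r /var/tmp/spdk2.sock &
    spdk_tgt_pid2=$!     # logs "CPU core locks deactivated." and starts cleanly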
00:05:39.685 [2024-10-15 13:39:53.347246] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:39.944 [2024-10-15 13:39:53.545887] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.878 13:39:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:40.878 13:39:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:40.878 13:39:54 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 58965 00:05:40.878 13:39:54 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58965 00:05:40.878 13:39:54 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:41.136 13:39:54 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 58965 00:05:41.136 13:39:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 58965 ']' 00:05:41.136 13:39:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 58965 00:05:41.136 13:39:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:41.136 13:39:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:41.136 13:39:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58965 00:05:41.394 13:39:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:41.394 killing process with pid 58965 00:05:41.394 13:39:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:41.394 13:39:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58965' 00:05:41.394 13:39:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 58965 00:05:41.394 13:39:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 58965 00:05:43.923 13:39:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 58981 00:05:43.923 13:39:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 58981 ']' 00:05:43.923 13:39:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 58981 00:05:43.924 13:39:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:43.924 13:39:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:43.924 13:39:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58981 00:05:43.924 13:39:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:43.924 killing process with pid 58981 00:05:43.924 13:39:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:43.924 13:39:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58981' 00:05:43.924 13:39:57 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 58981 00:05:43.924 13:39:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 58981 00:05:44.856 00:05:44.856 real 0m6.257s 00:05:44.856 user 0m6.504s 00:05:44.856 sys 0m0.815s 00:05:44.856 13:39:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:44.856 13:39:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:44.856 ************************************ 00:05:44.856 END TEST non_locking_app_on_locked_coremask 00:05:44.856 ************************************ 00:05:44.856 13:39:58 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:44.856 13:39:58 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:44.856 13:39:58 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:44.856 13:39:58 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:44.856 ************************************ 00:05:44.856 START TEST locking_app_on_unlocked_coremask 00:05:44.856 ************************************ 00:05:44.856 13:39:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:05:44.856 13:39:58 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=59072 00:05:44.856 13:39:58 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 59072 /var/tmp/spdk.sock 00:05:44.856 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:44.856 13:39:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 59072 ']' 00:05:44.856 13:39:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:44.856 13:39:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:44.856 13:39:58 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:44.856 13:39:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:44.856 13:39:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:44.856 13:39:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:44.856 [2024-10-15 13:39:58.605723] Starting SPDK v25.01-pre git sha1 5a8c76d99 / DPDK 24.03.0 initialization... 00:05:44.856 [2024-10-15 13:39:58.605836] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59072 ] 00:05:45.121 [2024-10-15 13:39:58.751344] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
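Both teardowns above go through killprocess, which refuses to signal a pid unless ps still reports it as the process the test started (an SPDK reactor shows up as reactor_0). A condensed sketch (the real autotest_common.sh helper also special-cases targets launched through sudo):

    killprocess() {
        local pid=$1
        kill -0 "$pid" || return 1                        # still running?
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")   # e.g. reactor_0
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true                               # reap; ignore exit status
    }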
00:05:45.121 [2024-10-15 13:39:58.751381] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.121 [2024-10-15 13:39:58.847663] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.727 13:39:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:45.727 13:39:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:45.727 13:39:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:45.727 13:39:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=59088 00:05:45.727 13:39:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 59088 /var/tmp/spdk2.sock 00:05:45.727 13:39:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 59088 ']' 00:05:45.727 13:39:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:45.727 13:39:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:45.727 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:45.727 13:39:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:45.727 13:39:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:45.727 13:39:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:45.727 [2024-10-15 13:39:59.497853] Starting SPDK v25.01-pre git sha1 5a8c76d99 / DPDK 24.03.0 initialization... 
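Every launch in this section funnels through waitforlisten, whose "Waiting for process..." echoes recur throughout the log. Its shape, simplified (max_retries=100, the kill -0 liveness probe, and the message text match the trace; the socket poll below is an assumption, as the real helper confirms readiness over RPC):

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for ((i = 0; i < max_retries; i++)); do
            [[ -S $rpc_addr ]] && return 0           # socket is up: target is listening
            kill -0 "$pid" 2>/dev/null || return 1   # target died: fail fast
            sleep 0.1
        done
        return 1
    }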
00:05:45.727 [2024-10-15 13:39:59.497967] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59088 ] 00:05:45.984 [2024-10-15 13:39:59.653473] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.242 [2024-10-15 13:39:59.855481] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.175 13:40:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:47.175 13:40:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:47.175 13:40:00 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 59088 00:05:47.175 13:40:00 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59088 00:05:47.175 13:40:00 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:47.434 13:40:01 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 59072 00:05:47.434 13:40:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 59072 ']' 00:05:47.434 13:40:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 59072 00:05:47.434 13:40:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:47.434 13:40:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:47.434 13:40:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59072 00:05:47.434 13:40:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:47.434 killing process with pid 59072 00:05:47.434 13:40:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:47.434 13:40:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59072' 00:05:47.434 13:40:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 59072 00:05:47.434 13:40:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 59072 00:05:49.963 13:40:03 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 59088 00:05:49.963 13:40:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 59088 ']' 00:05:49.963 13:40:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 59088 00:05:49.963 13:40:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:49.963 13:40:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:49.963 13:40:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59088 00:05:49.963 13:40:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:49.963 13:40:03 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:49.963 killing process with pid 59088 00:05:49.963 13:40:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59088' 00:05:49.963 13:40:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 59088 00:05:49.963 13:40:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 59088 00:05:51.336 00:05:51.336 real 0m6.210s 00:05:51.336 user 0m6.465s 00:05:51.336 sys 0m0.794s 00:05:51.336 13:40:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:51.336 ************************************ 00:05:51.336 END TEST locking_app_on_unlocked_coremask 00:05:51.336 13:40:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:51.336 ************************************ 00:05:51.336 13:40:04 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:51.336 13:40:04 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:51.336 13:40:04 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:51.336 13:40:04 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:51.336 ************************************ 00:05:51.336 START TEST locking_app_on_locked_coremask 00:05:51.336 ************************************ 00:05:51.336 13:40:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:05:51.336 13:40:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=59179 00:05:51.336 13:40:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 59179 /var/tmp/spdk.sock 00:05:51.336 13:40:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 59179 ']' 00:05:51.336 13:40:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:51.336 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:51.336 13:40:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:51.336 13:40:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:51.336 13:40:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:51.336 13:40:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:51.336 13:40:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:51.336 [2024-10-15 13:40:04.911709] Starting SPDK v25.01-pre git sha1 5a8c76d99 / DPDK 24.03.0 initialization... 
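The locking_app_on_locked_coremask test starting above drives the failure path: a second target on the same mask, without --disable-cpumask-locks, must refuse to come up. In outline (NOT is the trace's negation helper, which succeeds only if its command fails):

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &
    spdk_tgt_pid=$!
    waitforlisten "$spdk_tgt_pid"
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &
    spdk_tgt_pid2=$!
    NOT waitforlisten "$spdk_tgt_pid2" /var/tmp/spdk2.sock   # must fail: core 0 is taken
    # the second target exits with:
    #   Cannot create lock on core 0, probably process <pid> has claimed it.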
00:05:51.336 [2024-10-15 13:40:04.911823] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59179 ] 00:05:51.336 [2024-10-15 13:40:05.057952] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.594 [2024-10-15 13:40:05.154313] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.161 13:40:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:52.161 13:40:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:52.161 13:40:05 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=59195 00:05:52.161 13:40:05 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 59195 /var/tmp/spdk2.sock 00:05:52.161 13:40:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:05:52.161 13:40:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 59195 /var/tmp/spdk2.sock 00:05:52.161 13:40:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:52.161 13:40:05 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:52.161 13:40:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:52.161 13:40:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:52.161 13:40:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:52.161 13:40:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 59195 /var/tmp/spdk2.sock 00:05:52.161 13:40:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 59195 ']' 00:05:52.161 13:40:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:52.161 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:52.161 13:40:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:52.161 13:40:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:52.161 13:40:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:52.161 13:40:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:52.161 [2024-10-15 13:40:05.799060] Starting SPDK v25.01-pre git sha1 5a8c76d99 / DPDK 24.03.0 initialization... 
00:05:52.161 [2024-10-15 13:40:05.799176] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59195 ] 00:05:52.419 [2024-10-15 13:40:05.951972] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 59179 has claimed it. 00:05:52.419 [2024-10-15 13:40:05.952030] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:52.678 ERROR: process (pid: 59195) is no longer running 00:05:52.678 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (59195) - No such process 00:05:52.678 13:40:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:52.678 13:40:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:05:52.678 13:40:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:05:52.678 13:40:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:52.678 13:40:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:52.678 13:40:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:52.678 13:40:06 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 59179 00:05:52.678 13:40:06 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59179 00:05:52.678 13:40:06 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:52.937 13:40:06 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 59179 00:05:52.937 13:40:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 59179 ']' 00:05:52.937 13:40:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 59179 00:05:52.937 13:40:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:52.937 13:40:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:52.937 13:40:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59179 00:05:52.937 13:40:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:52.937 13:40:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:52.937 killing process with pid 59179 00:05:52.937 13:40:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59179' 00:05:52.937 13:40:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 59179 00:05:52.937 13:40:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 59179 00:05:54.311 00:05:54.311 real 0m3.031s 00:05:54.311 user 0m3.255s 00:05:54.311 sys 0m0.535s 00:05:54.311 13:40:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:54.311 ************************************ 00:05:54.311 END 
TEST locking_app_on_locked_coremask 00:05:54.311 ************************************ 00:05:54.311 13:40:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:54.311 13:40:07 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:54.311 13:40:07 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:54.311 13:40:07 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:54.311 13:40:07 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:54.311 ************************************ 00:05:54.311 START TEST locking_overlapped_coremask 00:05:54.311 ************************************ 00:05:54.311 13:40:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:05:54.311 13:40:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=59248 00:05:54.311 13:40:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 59248 /var/tmp/spdk.sock 00:05:54.311 13:40:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 59248 ']' 00:05:54.311 13:40:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:54.311 13:40:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:54.311 13:40:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:05:54.311 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:54.311 13:40:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:54.311 13:40:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:54.311 13:40:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:54.311 [2024-10-15 13:40:07.966260] Starting SPDK v25.01-pre git sha1 5a8c76d99 / DPDK 24.03.0 initialization... 
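The overlapped test starting above gives the first target mask 0x7 and, shortly after, a second target mask 0x1c. Writing the masks out shows why the second claim must collide on exactly one core:

    # 0x7  = 0b00111 -> cores 0,1,2   (first target,  -m 0x7)
    # 0x1c = 0b11100 -> cores 2,3,4   (second target, -m 0x1c)
    printf '0x%x\n' $(( 0x7 & 0x1c ))   # 0x4: bit 2 set, i.e. both want core 2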
00:05:54.311 [2024-10-15 13:40:07.966367] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59248 ] 00:05:54.568 [2024-10-15 13:40:08.113839] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:54.568 [2024-10-15 13:40:08.211213] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:54.568 [2024-10-15 13:40:08.211276] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.568 [2024-10-15 13:40:08.211304] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:55.135 13:40:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:55.135 13:40:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:55.135 13:40:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=59266 00:05:55.135 13:40:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 59266 /var/tmp/spdk2.sock 00:05:55.135 13:40:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:05:55.135 13:40:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 59266 /var/tmp/spdk2.sock 00:05:55.135 13:40:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:55.135 13:40:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:55.135 13:40:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:55.135 13:40:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:55.135 13:40:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:55.135 13:40:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 59266 /var/tmp/spdk2.sock 00:05:55.135 13:40:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 59266 ']' 00:05:55.135 13:40:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:55.135 13:40:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:55.135 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:55.135 13:40:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:55.135 13:40:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:55.135 13:40:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:55.135 [2024-10-15 13:40:08.875659] Starting SPDK v25.01-pre git sha1 5a8c76d99 / DPDK 24.03.0 initialization... 
00:05:55.135 [2024-10-15 13:40:08.875783] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59266 ] 00:05:55.394 [2024-10-15 13:40:09.032268] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59248 has claimed it. 00:05:55.394 [2024-10-15 13:40:09.032325] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:55.960 ERROR: process (pid: 59266) is no longer running 00:05:55.960 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (59266) - No such process 00:05:55.960 13:40:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:55.960 13:40:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:05:55.960 13:40:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:05:55.960 13:40:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:55.960 13:40:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:55.960 13:40:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:55.960 13:40:09 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:55.960 13:40:09 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:55.960 13:40:09 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:55.960 13:40:09 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:55.960 13:40:09 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 59248 00:05:55.960 13:40:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 59248 ']' 00:05:55.960 13:40:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 59248 00:05:55.960 13:40:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:05:55.960 13:40:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:55.960 13:40:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59248 00:05:55.960 13:40:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:55.960 killing process with pid 59248 00:05:55.960 13:40:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:55.960 13:40:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59248' 00:05:55.960 13:40:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 59248 00:05:55.960 13:40:09 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 59248 00:05:57.362 00:05:57.362 real 0m2.874s 00:05:57.362 user 0m7.850s 00:05:57.362 sys 0m0.418s 00:05:57.362 13:40:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:57.362 ************************************ 00:05:57.362 END TEST locking_overlapped_coremask 00:05:57.362 ************************************ 00:05:57.362 13:40:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:57.362 13:40:10 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:57.362 13:40:10 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:57.362 13:40:10 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:57.362 13:40:10 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:57.362 ************************************ 00:05:57.362 START TEST locking_overlapped_coremask_via_rpc 00:05:57.362 ************************************ 00:05:57.362 13:40:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:05:57.362 13:40:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=59319 00:05:57.362 13:40:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 59319 /var/tmp/spdk.sock 00:05:57.362 13:40:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 59319 ']' 00:05:57.362 13:40:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:57.362 13:40:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:57.362 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:57.362 13:40:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:57.362 13:40:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:57.362 13:40:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:57.362 13:40:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:57.362 [2024-10-15 13:40:10.908624] Starting SPDK v25.01-pre git sha1 5a8c76d99 / DPDK 24.03.0 initialization... 00:05:57.362 [2024-10-15 13:40:10.908746] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59319 ] 00:05:57.362 [2024-10-15 13:40:11.055350] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
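Before the overlapped test above exits, check_remaining_locks asserts that exactly the lock files for cores 000-002 survive, i.e. only the winning 0x7 target's claims. Restated plainly, with the array names from the trace (cpu_locks.sh@36-38):

    locks=(/var/tmp/spdk_cpu_lock_*)                     # what is actually on disk
    locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})   # what -m 0x7 should leave
    [[ ${locks[*]} == "${locks_expected[*]}" ]] \
        || echo "unexpected lock files: ${locks[*]}"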
00:05:57.362 [2024-10-15 13:40:11.055386] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:57.362 [2024-10-15 13:40:11.133697] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:57.362 [2024-10-15 13:40:11.134036] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:57.362 [2024-10-15 13:40:11.134125] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.937 13:40:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:57.937 13:40:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:57.937 13:40:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=59337 00:05:57.937 13:40:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:57.937 13:40:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 59337 /var/tmp/spdk2.sock 00:05:57.937 13:40:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 59337 ']' 00:05:57.937 13:40:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:57.937 13:40:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:57.937 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:57.937 13:40:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:57.937 13:40:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:57.937 13:40:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:58.198 [2024-10-15 13:40:11.752892] Starting SPDK v25.01-pre git sha1 5a8c76d99 / DPDK 24.03.0 initialization... 00:05:58.198 [2024-10-15 13:40:11.753009] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59337 ] 00:05:58.198 [2024-10-15 13:40:11.906171] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:58.198 [2024-10-15 13:40:11.906218] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:58.457 [2024-10-15 13:40:12.106790] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:58.457 [2024-10-15 13:40:12.110314] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:58.457 [2024-10-15 13:40:12.110348] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:05:59.394 13:40:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:59.394 13:40:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:59.394 13:40:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:59.394 13:40:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:59.394 13:40:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:59.394 13:40:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:59.394 13:40:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:59.394 13:40:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:05:59.394 13:40:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:59.394 13:40:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:05:59.394 13:40:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:59.394 13:40:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:05:59.394 13:40:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:59.394 13:40:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:59.395 13:40:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:59.395 13:40:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:59.395 [2024-10-15 13:40:13.114344] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59319 has claimed it. 00:05:59.395 request: 00:05:59.395 { 00:05:59.395 "method": "framework_enable_cpumask_locks", 00:05:59.395 "req_id": 1 00:05:59.395 } 00:05:59.395 Got JSON-RPC error response 00:05:59.395 response: 00:05:59.395 { 00:05:59.395 "code": -32603, 00:05:59.395 "message": "Failed to claim CPU core: 2" 00:05:59.395 } 00:05:59.395 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
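The JSON bodies printed above are the request and error response from the losing side of the via_rpc race: both targets started with locks disabled, the 0x7 target enabled its locks first, so the 0x1c target's enable fails on the shared core. Reissued by hand (socket paths are from the log; rpc.py exiting nonzero on a JSON-RPC error is assumed here):

    scripts/rpc.py -s /var/tmp/spdk.sock  framework_enable_cpumask_locks   # first target claims 0,1,2
    scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks \
        || echo 'lost the race: "Failed to claim CPU core: 2" (code -32603)'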
00:05:59.395 13:40:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:59.395 13:40:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:05:59.395 13:40:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:59.395 13:40:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:59.395 13:40:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:59.395 13:40:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 59319 /var/tmp/spdk.sock 00:05:59.395 13:40:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 59319 ']' 00:05:59.395 13:40:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:59.395 13:40:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:59.395 13:40:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:59.395 13:40:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:59.395 13:40:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:59.655 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:59.656 13:40:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:59.656 13:40:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:59.656 13:40:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 59337 /var/tmp/spdk2.sock 00:05:59.656 13:40:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 59337 ']' 00:05:59.656 13:40:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:59.656 13:40:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:59.656 13:40:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:05:59.656 13:40:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:59.656 13:40:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:59.917 ************************************ 00:05:59.917 END TEST locking_overlapped_coremask_via_rpc 00:05:59.917 ************************************ 00:05:59.917 13:40:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:59.917 13:40:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:59.917 13:40:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:59.917 13:40:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:59.917 13:40:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:59.917 13:40:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:59.917 00:05:59.917 real 0m2.654s 00:05:59.917 user 0m0.877s 00:05:59.917 sys 0m0.129s 00:05:59.917 13:40:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:59.917 13:40:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:59.917 13:40:13 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:59.917 13:40:13 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59319 ]] 00:05:59.917 13:40:13 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59319 00:05:59.917 13:40:13 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 59319 ']' 00:05:59.917 13:40:13 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 59319 00:05:59.917 13:40:13 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:05:59.917 13:40:13 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:59.917 13:40:13 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59319 00:05:59.917 killing process with pid 59319 00:05:59.917 13:40:13 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:59.917 13:40:13 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:59.917 13:40:13 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59319' 00:05:59.917 13:40:13 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 59319 00:05:59.917 13:40:13 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 59319 00:06:01.303 13:40:14 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59337 ]] 00:06:01.303 13:40:14 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59337 00:06:01.303 13:40:14 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 59337 ']' 00:06:01.303 13:40:14 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 59337 00:06:01.303 13:40:14 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:06:01.303 13:40:14 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:01.303 
13:40:14 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59337 00:06:01.303 killing process with pid 59337 00:06:01.303 13:40:14 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:06:01.303 13:40:14 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:06:01.303 13:40:14 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59337' 00:06:01.303 13:40:14 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 59337 00:06:01.303 13:40:14 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 59337 00:06:02.240 13:40:15 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:02.240 13:40:15 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:02.240 13:40:15 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59319 ]] 00:06:02.240 13:40:15 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59319 00:06:02.240 13:40:15 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 59319 ']' 00:06:02.240 Process with pid 59319 is not found 00:06:02.240 Process with pid 59337 is not found 00:06:02.240 13:40:15 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 59319 00:06:02.240 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (59319) - No such process 00:06:02.240 13:40:15 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 59319 is not found' 00:06:02.240 13:40:15 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59337 ]] 00:06:02.240 13:40:15 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59337 00:06:02.240 13:40:15 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 59337 ']' 00:06:02.240 13:40:15 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 59337 00:06:02.240 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (59337) - No such process 00:06:02.240 13:40:15 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 59337 is not found' 00:06:02.240 13:40:15 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:02.240 ************************************ 00:06:02.240 END TEST cpu_locks 00:06:02.240 ************************************ 00:06:02.240 00:06:02.240 real 0m28.793s 00:06:02.240 user 0m48.755s 00:06:02.240 sys 0m4.308s 00:06:02.240 13:40:15 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:02.240 13:40:15 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:02.241 ************************************ 00:06:02.241 END TEST event 00:06:02.241 ************************************ 00:06:02.241 00:06:02.241 real 0m54.553s 00:06:02.241 user 1m39.811s 00:06:02.241 sys 0m7.132s 00:06:02.241 13:40:15 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:02.241 13:40:15 event -- common/autotest_common.sh@10 -- # set +x 00:06:02.241 13:40:16 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:02.241 13:40:16 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:02.241 13:40:16 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:02.241 13:40:16 -- common/autotest_common.sh@10 -- # set +x 00:06:02.241 ************************************ 00:06:02.241 START TEST thread 00:06:02.241 ************************************ 00:06:02.241 13:40:16 thread -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:02.499 * Looking for test storage... 
00:06:02.499 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:02.499 13:40:16 thread -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:02.499 13:40:16 thread -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:02.499 13:40:16 thread -- common/autotest_common.sh@1691 -- # lcov --version 00:06:02.499 13:40:16 thread -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:02.499 13:40:16 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:02.499 13:40:16 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:02.499 13:40:16 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:02.499 13:40:16 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:02.499 13:40:16 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:02.499 13:40:16 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:02.499 13:40:16 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:02.499 13:40:16 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:02.499 13:40:16 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:02.499 13:40:16 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:02.499 13:40:16 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:02.499 13:40:16 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:02.499 13:40:16 thread -- scripts/common.sh@345 -- # : 1 00:06:02.499 13:40:16 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:02.499 13:40:16 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:02.499 13:40:16 thread -- scripts/common.sh@365 -- # decimal 1 00:06:02.499 13:40:16 thread -- scripts/common.sh@353 -- # local d=1 00:06:02.499 13:40:16 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:02.499 13:40:16 thread -- scripts/common.sh@355 -- # echo 1 00:06:02.499 13:40:16 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:02.499 13:40:16 thread -- scripts/common.sh@366 -- # decimal 2 00:06:02.499 13:40:16 thread -- scripts/common.sh@353 -- # local d=2 00:06:02.499 13:40:16 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:02.499 13:40:16 thread -- scripts/common.sh@355 -- # echo 2 00:06:02.499 13:40:16 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:02.500 13:40:16 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:02.500 13:40:16 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:02.500 13:40:16 thread -- scripts/common.sh@368 -- # return 0 00:06:02.500 13:40:16 thread -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:02.500 13:40:16 thread -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:02.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.500 --rc genhtml_branch_coverage=1 00:06:02.500 --rc genhtml_function_coverage=1 00:06:02.500 --rc genhtml_legend=1 00:06:02.500 --rc geninfo_all_blocks=1 00:06:02.500 --rc geninfo_unexecuted_blocks=1 00:06:02.500 00:06:02.500 ' 00:06:02.500 13:40:16 thread -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:02.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.500 --rc genhtml_branch_coverage=1 00:06:02.500 --rc genhtml_function_coverage=1 00:06:02.500 --rc genhtml_legend=1 00:06:02.500 --rc geninfo_all_blocks=1 00:06:02.500 --rc geninfo_unexecuted_blocks=1 00:06:02.500 00:06:02.500 ' 00:06:02.500 13:40:16 thread -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:02.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:06:02.500 --rc genhtml_branch_coverage=1 00:06:02.500 --rc genhtml_function_coverage=1 00:06:02.500 --rc genhtml_legend=1 00:06:02.500 --rc geninfo_all_blocks=1 00:06:02.500 --rc geninfo_unexecuted_blocks=1 00:06:02.500 00:06:02.500 ' 00:06:02.500 13:40:16 thread -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:02.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.500 --rc genhtml_branch_coverage=1 00:06:02.500 --rc genhtml_function_coverage=1 00:06:02.500 --rc genhtml_legend=1 00:06:02.500 --rc geninfo_all_blocks=1 00:06:02.500 --rc geninfo_unexecuted_blocks=1 00:06:02.500 00:06:02.500 ' 00:06:02.500 13:40:16 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:02.500 13:40:16 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:06:02.500 13:40:16 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:02.500 13:40:16 thread -- common/autotest_common.sh@10 -- # set +x 00:06:02.500 ************************************ 00:06:02.500 START TEST thread_poller_perf 00:06:02.500 ************************************ 00:06:02.500 13:40:16 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:02.500 [2024-10-15 13:40:16.204667] Starting SPDK v25.01-pre git sha1 5a8c76d99 / DPDK 24.03.0 initialization... 00:06:02.500 [2024-10-15 13:40:16.204776] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59492 ] 00:06:02.758 [2024-10-15 13:40:16.356341] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.758 [2024-10-15 13:40:16.458607] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.758 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:06:04.141 [2024-10-15T13:40:17.929Z] ====================================== 00:06:04.141 [2024-10-15T13:40:17.929Z] busy:2608462828 (cyc) 00:06:04.141 [2024-10-15T13:40:17.929Z] total_run_count: 305000 00:06:04.141 [2024-10-15T13:40:17.929Z] tsc_hz: 2600000000 (cyc) 00:06:04.141 [2024-10-15T13:40:17.929Z] ====================================== 00:06:04.141 [2024-10-15T13:40:17.929Z] poller_cost: 8552 (cyc), 3289 (nsec) 00:06:04.141 00:06:04.141 real 0m1.451s 00:06:04.141 user 0m1.276s 00:06:04.141 sys 0m0.066s 00:06:04.141 13:40:17 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:04.141 13:40:17 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:04.141 ************************************ 00:06:04.141 END TEST thread_poller_perf 00:06:04.141 ************************************ 00:06:04.141 13:40:17 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:04.141 13:40:17 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:06:04.141 13:40:17 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:04.141 13:40:17 thread -- common/autotest_common.sh@10 -- # set +x 00:06:04.141 ************************************ 00:06:04.141 START TEST thread_poller_perf 00:06:04.141 ************************************ 00:06:04.141 13:40:17 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:04.141 [2024-10-15 13:40:17.698541] Starting SPDK v25.01-pre git sha1 5a8c76d99 / DPDK 24.03.0 initialization... 00:06:04.141 [2024-10-15 13:40:17.698662] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59528 ] 00:06:04.141 [2024-10-15 13:40:17.849304] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.417 Running 1000 pollers for 1 seconds with 0 microseconds period. 
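poller_cost in the tables above is simply busy cycles divided by total_run_count, converted to nanoseconds via tsc_hz; for the first run (1 microsecond period) the integer arithmetic reproduces the printed figures:

  busy=2608462828 runs=305000 tsc_hz=2600000000
  echo "$(( busy / runs )) cyc"                            # -> 8552
  echo "$(( busy / runs * 1000000000 / tsc_hz )) nsec"     # -> 3289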
00:06:04.417 [2024-10-15 13:40:17.949879] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.352 [2024-10-15T13:40:19.140Z] ====================================== 00:06:05.352 [2024-10-15T13:40:19.140Z] busy:2603269010 (cyc) 00:06:05.352 [2024-10-15T13:40:19.140Z] total_run_count: 4381000 00:06:05.352 [2024-10-15T13:40:19.140Z] tsc_hz: 2600000000 (cyc) 00:06:05.352 [2024-10-15T13:40:19.140Z] ====================================== 00:06:05.352 [2024-10-15T13:40:19.140Z] poller_cost: 594 (cyc), 228 (nsec) 00:06:05.352 00:06:05.352 real 0m1.408s 00:06:05.352 user 0m1.230s 00:06:05.352 sys 0m0.070s 00:06:05.352 ************************************ 00:06:05.352 END TEST thread_poller_perf 00:06:05.352 ************************************ 00:06:05.352 13:40:19 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:05.352 13:40:19 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:05.352 13:40:19 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:05.352 00:06:05.352 real 0m3.096s 00:06:05.352 user 0m2.605s 00:06:05.352 sys 0m0.267s 00:06:05.352 ************************************ 00:06:05.352 END TEST thread 00:06:05.352 ************************************ 00:06:05.352 13:40:19 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:05.352 13:40:19 thread -- common/autotest_common.sh@10 -- # set +x 00:06:05.611 13:40:19 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:05.611 13:40:19 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:05.611 13:40:19 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:05.611 13:40:19 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:05.611 13:40:19 -- common/autotest_common.sh@10 -- # set +x 00:06:05.611 ************************************ 00:06:05.611 START TEST app_cmdline 00:06:05.611 ************************************ 00:06:05.611 13:40:19 app_cmdline -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:05.611 * Looking for test storage... 
00:06:05.611 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:05.611 13:40:19 app_cmdline -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:05.611 13:40:19 app_cmdline -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:05.611 13:40:19 app_cmdline -- common/autotest_common.sh@1691 -- # lcov --version 00:06:05.611 13:40:19 app_cmdline -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:05.611 13:40:19 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:05.611 13:40:19 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:05.611 13:40:19 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:05.611 13:40:19 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:05.611 13:40:19 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:05.611 13:40:19 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:05.611 13:40:19 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:05.611 13:40:19 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:06:05.611 13:40:19 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:05.611 13:40:19 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:05.611 13:40:19 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:05.611 13:40:19 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:05.611 13:40:19 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:05.611 13:40:19 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:05.611 13:40:19 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:05.611 13:40:19 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:05.611 13:40:19 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:05.611 13:40:19 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:05.611 13:40:19 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:05.611 13:40:19 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:05.611 13:40:19 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:05.611 13:40:19 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:05.611 13:40:19 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:05.611 13:40:19 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:05.611 13:40:19 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:05.611 13:40:19 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:05.611 13:40:19 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:05.611 13:40:19 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:05.611 13:40:19 app_cmdline -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:05.611 13:40:19 app_cmdline -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:05.611 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.611 --rc genhtml_branch_coverage=1 00:06:05.611 --rc genhtml_function_coverage=1 00:06:05.611 --rc genhtml_legend=1 00:06:05.611 --rc geninfo_all_blocks=1 00:06:05.611 --rc geninfo_unexecuted_blocks=1 00:06:05.611 00:06:05.611 ' 00:06:05.611 13:40:19 app_cmdline -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:05.611 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.611 --rc genhtml_branch_coverage=1 00:06:05.611 --rc genhtml_function_coverage=1 00:06:05.611 --rc genhtml_legend=1 00:06:05.611 --rc geninfo_all_blocks=1 00:06:05.611 --rc geninfo_unexecuted_blocks=1 00:06:05.611 
00:06:05.611 ' 00:06:05.611 13:40:19 app_cmdline -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:05.611 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.611 --rc genhtml_branch_coverage=1 00:06:05.611 --rc genhtml_function_coverage=1 00:06:05.611 --rc genhtml_legend=1 00:06:05.611 --rc geninfo_all_blocks=1 00:06:05.611 --rc geninfo_unexecuted_blocks=1 00:06:05.611 00:06:05.611 ' 00:06:05.611 13:40:19 app_cmdline -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:05.611 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.611 --rc genhtml_branch_coverage=1 00:06:05.611 --rc genhtml_function_coverage=1 00:06:05.611 --rc genhtml_legend=1 00:06:05.611 --rc geninfo_all_blocks=1 00:06:05.611 --rc geninfo_unexecuted_blocks=1 00:06:05.611 00:06:05.611 ' 00:06:05.611 13:40:19 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:05.611 13:40:19 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=59612 00:06:05.611 13:40:19 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 59612 00:06:05.611 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:05.611 13:40:19 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 59612 ']' 00:06:05.611 13:40:19 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:05.611 13:40:19 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:05.611 13:40:19 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:05.611 13:40:19 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:05.611 13:40:19 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:05.611 13:40:19 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:05.611 [2024-10-15 13:40:19.362424] Starting SPDK v25.01-pre git sha1 5a8c76d99 / DPDK 24.03.0 initialization... 
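The cmdline suite starts spdk_tgt with --rpcs-allowed spdk_get_version,rpc_get_methods, so only those two methods may be invoked; the entries below confirm that an allowed call returns the version object and, further on, that env_dpdk_get_mem_stats is rejected with -32601. A condensed sketch of the same allowlist check, using the same paths and flags as the log:

  SPDK=/home/vagrant/spdk_repo/spdk
  $SPDK/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
  sleep 1
  $SPDK/scripts/rpc.py spdk_get_version          # allowed: prints the version JSON
  $SPDK/scripts/rpc.py env_dpdk_get_mem_stats    # blocked: error -32601 "Method not found"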
00:06:05.611 [2024-10-15 13:40:19.362525] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59612 ] 00:06:05.870 [2024-10-15 13:40:19.508558] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.870 [2024-10-15 13:40:19.606879] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.436 13:40:20 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:06.436 13:40:20 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:06:06.436 13:40:20 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:06:06.695 { 00:06:06.695 "version": "SPDK v25.01-pre git sha1 5a8c76d99", 00:06:06.695 "fields": { 00:06:06.695 "major": 25, 00:06:06.695 "minor": 1, 00:06:06.695 "patch": 0, 00:06:06.695 "suffix": "-pre", 00:06:06.695 "commit": "5a8c76d99" 00:06:06.695 } 00:06:06.695 } 00:06:06.695 13:40:20 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:06.695 13:40:20 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:06.695 13:40:20 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:06.695 13:40:20 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:06.695 13:40:20 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:06.695 13:40:20 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:06.695 13:40:20 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:06.695 13:40:20 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:06.695 13:40:20 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:06.695 13:40:20 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:06.695 13:40:20 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:06.695 13:40:20 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:06.695 13:40:20 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:06.695 13:40:20 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:06:06.695 13:40:20 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:06.695 13:40:20 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:06.695 13:40:20 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:06.695 13:40:20 app_cmdline -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:06.695 13:40:20 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:06.695 13:40:20 app_cmdline -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:06.695 13:40:20 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:06.695 13:40:20 app_cmdline -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:06.695 13:40:20 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:06.695 13:40:20 app_cmdline -- common/autotest_common.sh@653 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:06.954 request: 00:06:06.954 { 00:06:06.954 "method": "env_dpdk_get_mem_stats", 00:06:06.954 "req_id": 1 00:06:06.954 } 00:06:06.954 Got JSON-RPC error response 00:06:06.954 response: 00:06:06.954 { 00:06:06.954 "code": -32601, 00:06:06.954 "message": "Method not found" 00:06:06.954 } 00:06:06.954 13:40:20 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:06:06.954 13:40:20 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:06.954 13:40:20 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:06.954 13:40:20 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:06.954 13:40:20 app_cmdline -- app/cmdline.sh@1 -- # killprocess 59612 00:06:06.954 13:40:20 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 59612 ']' 00:06:06.954 13:40:20 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 59612 00:06:06.954 13:40:20 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:06:06.954 13:40:20 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:06.954 13:40:20 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59612 00:06:06.954 killing process with pid 59612 00:06:06.954 13:40:20 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:06.954 13:40:20 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:06.954 13:40:20 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59612' 00:06:06.954 13:40:20 app_cmdline -- common/autotest_common.sh@969 -- # kill 59612 00:06:06.954 13:40:20 app_cmdline -- common/autotest_common.sh@974 -- # wait 59612 00:06:08.332 00:06:08.332 real 0m2.855s 00:06:08.332 user 0m3.053s 00:06:08.332 sys 0m0.423s 00:06:08.332 ************************************ 00:06:08.332 END TEST app_cmdline 00:06:08.332 ************************************ 00:06:08.332 13:40:22 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:08.332 13:40:22 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:08.332 13:40:22 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:08.332 13:40:22 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:08.332 13:40:22 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:08.332 13:40:22 -- common/autotest_common.sh@10 -- # set +x 00:06:08.332 ************************************ 00:06:08.332 START TEST version 00:06:08.332 ************************************ 00:06:08.332 13:40:22 version -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:08.332 * Looking for test storage... 
00:06:08.332 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:08.332 13:40:22 version -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:08.332 13:40:22 version -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:08.332 13:40:22 version -- common/autotest_common.sh@1691 -- # lcov --version 00:06:08.591 13:40:22 version -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:08.591 13:40:22 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:08.591 13:40:22 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:08.591 13:40:22 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:08.591 13:40:22 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:08.591 13:40:22 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:08.591 13:40:22 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:08.591 13:40:22 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:08.591 13:40:22 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:08.591 13:40:22 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:08.591 13:40:22 version -- scripts/common.sh@341 -- # ver2_l=1 00:06:08.591 13:40:22 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:08.591 13:40:22 version -- scripts/common.sh@344 -- # case "$op" in 00:06:08.591 13:40:22 version -- scripts/common.sh@345 -- # : 1 00:06:08.591 13:40:22 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:08.591 13:40:22 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:08.591 13:40:22 version -- scripts/common.sh@365 -- # decimal 1 00:06:08.591 13:40:22 version -- scripts/common.sh@353 -- # local d=1 00:06:08.591 13:40:22 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:08.591 13:40:22 version -- scripts/common.sh@355 -- # echo 1 00:06:08.591 13:40:22 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:08.591 13:40:22 version -- scripts/common.sh@366 -- # decimal 2 00:06:08.591 13:40:22 version -- scripts/common.sh@353 -- # local d=2 00:06:08.591 13:40:22 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:08.591 13:40:22 version -- scripts/common.sh@355 -- # echo 2 00:06:08.591 13:40:22 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:08.591 13:40:22 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:08.591 13:40:22 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:08.591 13:40:22 version -- scripts/common.sh@368 -- # return 0 00:06:08.591 13:40:22 version -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:08.591 13:40:22 version -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:08.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.591 --rc genhtml_branch_coverage=1 00:06:08.591 --rc genhtml_function_coverage=1 00:06:08.591 --rc genhtml_legend=1 00:06:08.591 --rc geninfo_all_blocks=1 00:06:08.591 --rc geninfo_unexecuted_blocks=1 00:06:08.591 00:06:08.591 ' 00:06:08.591 13:40:22 version -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:08.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.591 --rc genhtml_branch_coverage=1 00:06:08.591 --rc genhtml_function_coverage=1 00:06:08.591 --rc genhtml_legend=1 00:06:08.591 --rc geninfo_all_blocks=1 00:06:08.591 --rc geninfo_unexecuted_blocks=1 00:06:08.591 00:06:08.591 ' 00:06:08.591 13:40:22 version -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:08.591 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:06:08.591 --rc genhtml_branch_coverage=1 00:06:08.591 --rc genhtml_function_coverage=1 00:06:08.591 --rc genhtml_legend=1 00:06:08.591 --rc geninfo_all_blocks=1 00:06:08.591 --rc geninfo_unexecuted_blocks=1 00:06:08.591 00:06:08.591 ' 00:06:08.591 13:40:22 version -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:08.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.591 --rc genhtml_branch_coverage=1 00:06:08.591 --rc genhtml_function_coverage=1 00:06:08.591 --rc genhtml_legend=1 00:06:08.591 --rc geninfo_all_blocks=1 00:06:08.591 --rc geninfo_unexecuted_blocks=1 00:06:08.591 00:06:08.591 ' 00:06:08.591 13:40:22 version -- app/version.sh@17 -- # get_header_version major 00:06:08.591 13:40:22 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:08.591 13:40:22 version -- app/version.sh@14 -- # cut -f2 00:06:08.591 13:40:22 version -- app/version.sh@14 -- # tr -d '"' 00:06:08.591 13:40:22 version -- app/version.sh@17 -- # major=25 00:06:08.591 13:40:22 version -- app/version.sh@18 -- # get_header_version minor 00:06:08.591 13:40:22 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:08.591 13:40:22 version -- app/version.sh@14 -- # tr -d '"' 00:06:08.591 13:40:22 version -- app/version.sh@14 -- # cut -f2 00:06:08.591 13:40:22 version -- app/version.sh@18 -- # minor=1 00:06:08.591 13:40:22 version -- app/version.sh@19 -- # get_header_version patch 00:06:08.591 13:40:22 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:08.591 13:40:22 version -- app/version.sh@14 -- # tr -d '"' 00:06:08.591 13:40:22 version -- app/version.sh@14 -- # cut -f2 00:06:08.591 13:40:22 version -- app/version.sh@19 -- # patch=0 00:06:08.591 13:40:22 version -- app/version.sh@20 -- # get_header_version suffix 00:06:08.591 13:40:22 version -- app/version.sh@14 -- # cut -f2 00:06:08.591 13:40:22 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:08.591 13:40:22 version -- app/version.sh@14 -- # tr -d '"' 00:06:08.591 13:40:22 version -- app/version.sh@20 -- # suffix=-pre 00:06:08.591 13:40:22 version -- app/version.sh@22 -- # version=25.1 00:06:08.591 13:40:22 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:08.591 13:40:22 version -- app/version.sh@28 -- # version=25.1rc0 00:06:08.591 13:40:22 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:08.591 13:40:22 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:08.591 13:40:22 version -- app/version.sh@30 -- # py_version=25.1rc0 00:06:08.591 13:40:22 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:06:08.591 00:06:08.591 real 0m0.198s 00:06:08.591 user 0m0.123s 00:06:08.591 sys 0m0.102s 00:06:08.591 13:40:22 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:08.591 ************************************ 00:06:08.591 END TEST version 00:06:08.591 ************************************ 00:06:08.591 13:40:22 version -- common/autotest_common.sh@10 -- # set +x 00:06:08.591 13:40:22 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:08.591 13:40:22 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:06:08.591 13:40:22 -- spdk/autotest.sh@194 -- # uname -s 00:06:08.591 13:40:22 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:06:08.591 13:40:22 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:08.591 13:40:22 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:08.591 13:40:22 -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']' 00:06:08.591 13:40:22 -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:06:08.591 13:40:22 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:08.591 13:40:22 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:08.591 13:40:22 -- common/autotest_common.sh@10 -- # set +x 00:06:08.591 ************************************ 00:06:08.591 START TEST blockdev_nvme 00:06:08.591 ************************************ 00:06:08.591 13:40:22 blockdev_nvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:06:08.591 * Looking for test storage... 00:06:08.591 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:06:08.591 13:40:22 blockdev_nvme -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:08.591 13:40:22 blockdev_nvme -- common/autotest_common.sh@1691 -- # lcov --version 00:06:08.591 13:40:22 blockdev_nvme -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:08.850 13:40:22 blockdev_nvme -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:08.850 13:40:22 blockdev_nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:08.850 13:40:22 blockdev_nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:08.850 13:40:22 blockdev_nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:08.850 13:40:22 blockdev_nvme -- scripts/common.sh@336 -- # IFS=.-: 00:06:08.850 13:40:22 blockdev_nvme -- scripts/common.sh@336 -- # read -ra ver1 00:06:08.850 13:40:22 blockdev_nvme -- scripts/common.sh@337 -- # IFS=.-: 00:06:08.850 13:40:22 blockdev_nvme -- scripts/common.sh@337 -- # read -ra ver2 00:06:08.850 13:40:22 blockdev_nvme -- scripts/common.sh@338 -- # local 'op=<' 00:06:08.850 13:40:22 blockdev_nvme -- scripts/common.sh@340 -- # ver1_l=2 00:06:08.850 13:40:22 blockdev_nvme -- scripts/common.sh@341 -- # ver2_l=1 00:06:08.850 13:40:22 blockdev_nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:08.850 13:40:22 blockdev_nvme -- scripts/common.sh@344 -- # case "$op" in 00:06:08.850 13:40:22 blockdev_nvme -- scripts/common.sh@345 -- # : 1 00:06:08.850 13:40:22 blockdev_nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:08.850 13:40:22 blockdev_nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:08.850 13:40:22 blockdev_nvme -- scripts/common.sh@365 -- # decimal 1 00:06:08.850 13:40:22 blockdev_nvme -- scripts/common.sh@353 -- # local d=1 00:06:08.850 13:40:22 blockdev_nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:08.850 13:40:22 blockdev_nvme -- scripts/common.sh@355 -- # echo 1 00:06:08.850 13:40:22 blockdev_nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:06:08.850 13:40:22 blockdev_nvme -- scripts/common.sh@366 -- # decimal 2 00:06:08.850 13:40:22 blockdev_nvme -- scripts/common.sh@353 -- # local d=2 00:06:08.850 13:40:22 blockdev_nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:08.850 13:40:22 blockdev_nvme -- scripts/common.sh@355 -- # echo 2 00:06:08.850 13:40:22 blockdev_nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:06:08.850 13:40:22 blockdev_nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:08.850 13:40:22 blockdev_nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:08.850 13:40:22 blockdev_nvme -- scripts/common.sh@368 -- # return 0 00:06:08.850 13:40:22 blockdev_nvme -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:08.850 13:40:22 blockdev_nvme -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:08.850 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.850 --rc genhtml_branch_coverage=1 00:06:08.850 --rc genhtml_function_coverage=1 00:06:08.850 --rc genhtml_legend=1 00:06:08.850 --rc geninfo_all_blocks=1 00:06:08.850 --rc geninfo_unexecuted_blocks=1 00:06:08.850 00:06:08.850 ' 00:06:08.850 13:40:22 blockdev_nvme -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:08.850 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.850 --rc genhtml_branch_coverage=1 00:06:08.850 --rc genhtml_function_coverage=1 00:06:08.850 --rc genhtml_legend=1 00:06:08.850 --rc geninfo_all_blocks=1 00:06:08.850 --rc geninfo_unexecuted_blocks=1 00:06:08.850 00:06:08.850 ' 00:06:08.850 13:40:22 blockdev_nvme -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:08.850 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.850 --rc genhtml_branch_coverage=1 00:06:08.850 --rc genhtml_function_coverage=1 00:06:08.850 --rc genhtml_legend=1 00:06:08.850 --rc geninfo_all_blocks=1 00:06:08.850 --rc geninfo_unexecuted_blocks=1 00:06:08.850 00:06:08.850 ' 00:06:08.850 13:40:22 blockdev_nvme -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:08.850 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.850 --rc genhtml_branch_coverage=1 00:06:08.850 --rc genhtml_function_coverage=1 00:06:08.850 --rc genhtml_legend=1 00:06:08.850 --rc geninfo_all_blocks=1 00:06:08.850 --rc geninfo_unexecuted_blocks=1 00:06:08.850 00:06:08.850 ' 00:06:08.850 13:40:22 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:08.850 13:40:22 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:06:08.850 13:40:22 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:06:08.850 13:40:22 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:08.850 13:40:22 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:06:08.850 13:40:22 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:06:08.850 13:40:22 blockdev_nvme -- bdev/blockdev.sh@17 -- # export 
RPC_PIPE_TIMEOUT=30 00:06:08.850 13:40:22 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:06:08.850 13:40:22 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:06:08.850 13:40:22 blockdev_nvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:06:08.850 13:40:22 blockdev_nvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:06:08.850 13:40:22 blockdev_nvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:06:08.850 13:40:22 blockdev_nvme -- bdev/blockdev.sh@673 -- # uname -s 00:06:08.850 13:40:22 blockdev_nvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:06:08.850 13:40:22 blockdev_nvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:06:08.850 13:40:22 blockdev_nvme -- bdev/blockdev.sh@681 -- # test_type=nvme 00:06:08.850 13:40:22 blockdev_nvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:06:08.851 13:40:22 blockdev_nvme -- bdev/blockdev.sh@683 -- # dek= 00:06:08.851 13:40:22 blockdev_nvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:06:08.851 13:40:22 blockdev_nvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:06:08.851 13:40:22 blockdev_nvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:06:08.851 13:40:22 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == bdev ]] 00:06:08.851 13:40:22 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == crypto_* ]] 00:06:08.851 13:40:22 blockdev_nvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:06:08.851 13:40:22 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=59784 00:06:08.851 13:40:22 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:06:08.851 13:40:22 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 59784 00:06:08.851 13:40:22 blockdev_nvme -- common/autotest_common.sh@831 -- # '[' -z 59784 ']' 00:06:08.851 13:40:22 blockdev_nvme -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:08.851 13:40:22 blockdev_nvme -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:08.851 13:40:22 blockdev_nvme -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:08.851 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:08.851 13:40:22 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:06:08.851 13:40:22 blockdev_nvme -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:08.851 13:40:22 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:08.851 [2024-10-15 13:40:22.514692] Starting SPDK v25.01-pre git sha1 5a8c76d99 / DPDK 24.03.0 initialization... 
00:06:08.851 [2024-10-15 13:40:22.514813] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59784 ] 00:06:09.109 [2024-10-15 13:40:22.665584] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.109 [2024-10-15 13:40:22.764132] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.675 13:40:23 blockdev_nvme -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:09.675 13:40:23 blockdev_nvme -- common/autotest_common.sh@864 -- # return 0 00:06:09.675 13:40:23 blockdev_nvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:06:09.675 13:40:23 blockdev_nvme -- bdev/blockdev.sh@698 -- # setup_nvme_conf 00:06:09.675 13:40:23 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:06:09.675 13:40:23 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:06:09.675 13:40:23 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:09.675 13:40:23 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:06:09.675 13:40:23 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:09.675 13:40:23 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:09.934 13:40:23 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:09.934 13:40:23 blockdev_nvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:06:09.934 13:40:23 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:09.934 13:40:23 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:09.934 13:40:23 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:09.934 13:40:23 blockdev_nvme -- bdev/blockdev.sh@739 -- # cat 00:06:09.934 13:40:23 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:06:09.934 13:40:23 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:09.934 13:40:23 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:09.934 13:40:23 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:09.934 13:40:23 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:06:09.934 13:40:23 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:09.934 13:40:23 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:10.193 13:40:23 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:10.193 13:40:23 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:06:10.193 13:40:23 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:10.193 13:40:23 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:10.193 13:40:23 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:10.193 13:40:23 blockdev_nvme -- 
bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:06:10.193 13:40:23 blockdev_nvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:06:10.193 13:40:23 blockdev_nvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:06:10.193 13:40:23 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:10.193 13:40:23 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:10.193 13:40:23 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:10.193 13:40:23 blockdev_nvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:06:10.193 13:40:23 blockdev_nvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:06:10.194 13:40:23 blockdev_nvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "8b6c9785-57d3-4881-8709-3115e66dcd13"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "8b6c9785-57d3-4881-8709-3115e66dcd13",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "efe93d18-8e5a-478e-80e8-5bfcd4890722"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "efe93d18-8e5a-478e-80e8-5bfcd4890722",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": 
"nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "7b8ff576-3aff-46ff-a392-7fff1f3ed665"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "7b8ff576-3aff-46ff-a392-7fff1f3ed665",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "a68e66de-8dc0-467c-b5ec-d74248130779"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "a68e66de-8dc0-467c-b5ec-d74248130779",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "f95f51c1-ba71-41fd-95fd-6732c9ee5c38"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 
1048576,' ' "uuid": "f95f51c1-ba71-41fd-95fd-6732c9ee5c38",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "633bbc49-1bac-4c5a-a709-a118c5ef6f52"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "633bbc49-1bac-4c5a-a709-a118c5ef6f52",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:06:10.194 13:40:23 blockdev_nvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:06:10.194 13:40:23 blockdev_nvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:06:10.194 13:40:23 blockdev_nvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:06:10.194 13:40:23 blockdev_nvme -- bdev/blockdev.sh@753 -- # killprocess 59784 00:06:10.194 13:40:23 blockdev_nvme -- common/autotest_common.sh@950 -- # '[' -z 59784 ']' 00:06:10.194 13:40:23 blockdev_nvme -- common/autotest_common.sh@954 -- # kill -0 59784 00:06:10.194 13:40:23 blockdev_nvme -- common/autotest_common.sh@955 -- # uname 00:06:10.194 13:40:23 
blockdev_nvme -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:10.194 13:40:23 blockdev_nvme -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59784 00:06:10.194 killing process with pid 59784 00:06:10.194 13:40:23 blockdev_nvme -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:10.194 13:40:23 blockdev_nvme -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:10.194 13:40:23 blockdev_nvme -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59784' 00:06:10.194 13:40:23 blockdev_nvme -- common/autotest_common.sh@969 -- # kill 59784 00:06:10.194 13:40:23 blockdev_nvme -- common/autotest_common.sh@974 -- # wait 59784 00:06:11.569 13:40:25 blockdev_nvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:11.569 13:40:25 blockdev_nvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:11.569 13:40:25 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:06:11.569 13:40:25 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:11.569 13:40:25 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:11.828 ************************************ 00:06:11.828 START TEST bdev_hello_world 00:06:11.828 ************************************ 00:06:11.828 13:40:25 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:11.828 [2024-10-15 13:40:25.428149] Starting SPDK v25.01-pre git sha1 5a8c76d99 / DPDK 24.03.0 initialization... 00:06:11.828 [2024-10-15 13:40:25.428414] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59868 ] 00:06:11.828 [2024-10-15 13:40:25.580049] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.088 [2024-10-15 13:40:25.677498] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.654 [2024-10-15 13:40:26.214677] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:06:12.654 [2024-10-15 13:40:26.214866] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:06:12.654 [2024-10-15 13:40:26.214909] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:06:12.654 [2024-10-15 13:40:26.217401] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:06:12.655 [2024-10-15 13:40:26.217979] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:06:12.655 [2024-10-15 13:40:26.218005] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:06:12.655 [2024-10-15 13:40:26.218570] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
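The bdev_list assembled above is one RPC round-trip: blockdev.sh pulls every bdev out of the running app with bdev_get_bdevs, drops the claimed ones with jq, and keeps only the names. A minimal re-creation of that pipeline, assuming an SPDK app is already serving RPC on the default /var/tmp/spdk.sock and the working directory is the spdk repo root:

    # same steps as bdev/blockdev.sh@747-749 traced above
    mapfile -t bdevs < <(scripts/rpc.py bdev_get_bdevs | jq -r '.[] | select(.claimed == false)')
    # the traced printf '%s\n' of quoted JSON fragments is exactly this pair:
    mapfile -t bdevs_name < <(printf '%s\n' "${bdevs[@]}" | jq -r .name)
    bdev_list=("${bdevs_name[@]}")    # Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1
    hello_world_bdev=${bdev_list[0]}  # blockdev.sh@751 takes the first name, Nvme0n1

The dump also explains the device layout for everything that follows: Nvme2n1/n2/n3 are three namespaces of one controller at 0000:00:12.0, and Nvme3n1 sits behind the shared FDP subsystem nqn.2019-08.org.qemu:fdp-subsys3 with can_share true.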
00:06:12.655 00:06:12.655 [2024-10-15 13:40:26.218594] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:06:13.221 00:06:13.221 real 0m1.526s 00:06:13.221 user 0m1.270s 00:06:13.221 sys 0m0.150s 00:06:13.221 13:40:26 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:13.221 13:40:26 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:06:13.221 ************************************ 00:06:13.221 END TEST bdev_hello_world 00:06:13.221 ************************************ 00:06:13.221 13:40:26 blockdev_nvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:06:13.221 13:40:26 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:13.221 13:40:26 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:13.221 13:40:26 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:13.221 ************************************ 00:06:13.221 START TEST bdev_bounds 00:06:13.221 ************************************ 00:06:13.221 13:40:26 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:06:13.221 13:40:26 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=59904 00:06:13.221 13:40:26 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:06:13.221 Process bdevio pid: 59904 00:06:13.221 13:40:26 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 59904' 00:06:13.221 13:40:26 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 59904 00:06:13.221 13:40:26 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:13.221 13:40:26 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 59904 ']' 00:06:13.221 13:40:26 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:13.221 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:13.221 13:40:26 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:13.221 13:40:26 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:13.221 13:40:26 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:13.221 13:40:26 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:13.480 [2024-10-15 13:40:27.010248] Starting SPDK v25.01-pre git sha1 5a8c76d99 / DPDK 24.03.0 initialization... 
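The hello_world pass that just finished is the whole bdev API round-trip in miniature: open Nvme0n1, open an I/O channel, write "Hello World!", read it back, compare, stop the app; about 1.5 s wall clock per the timing above. It can be replayed outside the harness with the same arguments run_test handed it (repo root assumed):

    # replay of the traced invocation
    ./build/examples/hello_bdev --json test/bdev/bdev.json -b Nvme0n1
    # success ends the way the log above does:
    #   read_complete: *NOTICE*: Read string from bdev : Hello World!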
00:06:13.480 [2024-10-15 13:40:27.010367] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59904 ] 00:06:13.480 [2024-10-15 13:40:27.159482] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:13.480 [2024-10-15 13:40:27.237555] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:13.480 [2024-10-15 13:40:27.237867] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.480 [2024-10-15 13:40:27.237889] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:14.417 13:40:27 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:14.417 13:40:27 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:06:14.417 13:40:27 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:06:14.417 I/O targets: 00:06:14.417 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:06:14.417 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:06:14.417 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:14.417 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:14.417 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:14.417 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:06:14.417 00:06:14.417 00:06:14.417 CUnit - A unit testing framework for C - Version 2.1-3 00:06:14.417 http://cunit.sourceforge.net/ 00:06:14.417 00:06:14.417 00:06:14.417 Suite: bdevio tests on: Nvme3n1 00:06:14.417 Test: blockdev write read block ...passed 00:06:14.417 Test: blockdev write zeroes read block ...passed 00:06:14.417 Test: blockdev write zeroes read no split ...passed 00:06:14.417 Test: blockdev write zeroes read split ...passed 00:06:14.417 Test: blockdev write zeroes read split partial ...passed 00:06:14.417 Test: blockdev reset ...[2024-10-15 13:40:27.984831] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0] resetting controller 00:06:14.417 [2024-10-15 13:40:27.987361] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
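bdev_bounds splits app and driver: bdevio is started with -w so it initializes against the same bdev.json and then parks, waitforlisten blocks until the RPC socket answers, and tests.py perform_tests fires the CUnit suites remotely, one suite per bdev, which is why the six I/O targets above match the six names enumerated earlier. A sketch of the same choreography, with a hand-rolled poll standing in for the harness's waitforlisten helper:

    test/bdev/bdevio/bdevio -w -s 0 --json test/bdev/bdev.json &
    bdevio_pid=$!
    until scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 0.1; done
    test/bdev/bdevio/tests.py perform_tests
    kill "$bdevio_pid"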
00:06:14.418 passed 00:06:14.418 Test: blockdev write read 8 blocks ...passed 00:06:14.418 Test: blockdev write read size > 128k ...passed 00:06:14.418 Test: blockdev write read invalid size ...passed 00:06:14.418 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:14.418 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:14.418 Test: blockdev write read max offset ...passed 00:06:14.418 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:14.418 Test: blockdev writev readv 8 blocks ...passed 00:06:14.418 Test: blockdev writev readv 30 x 1block ...passed 00:06:14.418 Test: blockdev writev readv block ...passed 00:06:14.418 Test: blockdev writev readv size > 128k ...passed 00:06:14.418 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:14.418 Test: blockdev comparev and writev ...[2024-10-15 13:40:27.995195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2bb80a000 len:0x1000 00:06:14.418 [2024-10-15 13:40:27.995255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:14.418 passed 00:06:14.418 Test: blockdev nvme passthru rw ...passed 00:06:14.418 Test: blockdev nvme passthru vendor specific ...passed 00:06:14.418 Test: blockdev nvme admin passthru ...[2024-10-15 13:40:27.996045] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:14.418 [2024-10-15 13:40:27.996074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:14.418 passed 00:06:14.418 Test: blockdev copy ...passed 00:06:14.418 Suite: bdevio tests on: Nvme2n3 00:06:14.418 Test: blockdev write read block ...passed 00:06:14.418 Test: blockdev write zeroes read block ...passed 00:06:14.418 Test: blockdev write zeroes read no split ...passed 00:06:14.418 Test: blockdev write zeroes read split ...passed 00:06:14.418 Test: blockdev write zeroes read split partial ...passed 00:06:14.418 Test: blockdev reset ...[2024-10-15 13:40:28.053020] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:06:14.418 [2024-10-15 13:40:28.055978] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
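The *NOTICE* dumps inside each "comparev and writev" test are the pass condition, not an error: the test issues a compare against data it knows will not match, so the completion it wants back is NVMe status (02/85), Status Code Type 2h, Media and Data Integrity Errors, Status Code 85h, Compare Failure. The passthru tests likewise expect (00/01) for the admin command the qpair printer decodes as a fabrics CONNECT, which a PCIe controller must reject:

    # reading the (SCT/SC) pairs in the completion lines, per the NVMe base spec:
    #   COMPARE FAILURE (02/85) -> SCT 0x2 media/integrity, SC 0x85 compare failure
    #   INVALID OPCODE  (00/01) -> SCT 0x0 generic,         SC 0x01 invalid opcode
    # dnr:1 is the Do Not Retry bit - correct for both, since retrying cannot succeed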
00:06:14.418 passed 00:06:14.418 Test: blockdev write read 8 blocks ...passed 00:06:14.418 Test: blockdev write read size > 128k ...passed 00:06:14.418 Test: blockdev write read invalid size ...passed 00:06:14.418 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:14.418 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:14.418 Test: blockdev write read max offset ...passed 00:06:14.418 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:14.418 Test: blockdev writev readv 8 blocks ...passed 00:06:14.418 Test: blockdev writev readv 30 x 1block ...passed 00:06:14.418 Test: blockdev writev readv block ...passed 00:06:14.418 Test: blockdev writev readv size > 128k ...passed 00:06:14.418 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:14.418 Test: blockdev comparev and writev ...[2024-10-15 13:40:28.063469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x29f206000 len:0x1000 00:06:14.418 [2024-10-15 13:40:28.063604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:14.418 passed 00:06:14.418 Test: blockdev nvme passthru rw ...passed 00:06:14.418 Test: blockdev nvme passthru vendor specific ...[2024-10-15 13:40:28.064243] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:14.418 [2024-10-15 13:40:28.064272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:14.418 passed 00:06:14.418 Test: blockdev nvme admin passthru ...passed 00:06:14.418 Test: blockdev copy ...passed 00:06:14.418 Suite: bdevio tests on: Nvme2n2 00:06:14.418 Test: blockdev write read block ...passed 00:06:14.418 Test: blockdev write zeroes read block ...passed 00:06:14.418 Test: blockdev write zeroes read no split ...passed 00:06:14.418 Test: blockdev write zeroes read split ...passed 00:06:14.418 Test: blockdev write zeroes read split partial ...passed 00:06:14.418 Test: blockdev reset ...[2024-10-15 13:40:28.120764] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:06:14.418 [2024-10-15 13:40:28.123709] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:06:14.418 passed 00:06:14.418 Test: blockdev write read 8 blocks ...passed 00:06:14.418 Test: blockdev write read size > 128k ...passed 00:06:14.418 Test: blockdev write read invalid size ...passed 00:06:14.418 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:14.418 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:14.418 Test: blockdev write read max offset ...passed 00:06:14.418 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:14.418 Test: blockdev writev readv 8 blocks ...passed 00:06:14.418 Test: blockdev writev readv 30 x 1block ...passed 00:06:14.418 Test: blockdev writev readv block ...passed 00:06:14.418 Test: blockdev writev readv size > 128k ...passed 00:06:14.418 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:14.418 Test: blockdev comparev and writev ...[2024-10-15 13:40:28.131438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d703c000 len:0x1000 00:06:14.418 [2024-10-15 13:40:28.131475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:14.418 passed 00:06:14.418 Test: blockdev nvme passthru rw ...passed 00:06:14.418 Test: blockdev nvme passthru vendor specific ...passed 00:06:14.418 Test: blockdev nvme admin passthru ...[2024-10-15 13:40:28.132064] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:14.418 [2024-10-15 13:40:28.132090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:14.418 passed 00:06:14.418 Test: blockdev copy ...passed 00:06:14.418 Suite: bdevio tests on: Nvme2n1 00:06:14.418 Test: blockdev write read block ...passed 00:06:14.418 Test: blockdev write zeroes read block ...passed 00:06:14.418 Test: blockdev write zeroes read no split ...passed 00:06:14.418 Test: blockdev write zeroes read split ...passed 00:06:14.418 Test: blockdev write zeroes read split partial ...passed 00:06:14.418 Test: blockdev reset ...[2024-10-15 13:40:28.186976] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:06:14.418 [2024-10-15 13:40:28.189669] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
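That makes three resets of the same controller in a row: Nvme2n1, Nvme2n2 and Nvme2n3 are namespaces of the single QEMU controller at 0000:00:12.0, so each suite's "blockdev reset" disconnects and reconnects the whole controller, exactly the nvme_ctrlr_disconnect / reset-complete pair logged above, once per namespace. The same operation can be requested by hand; the controller name "Nvme2" here is an assumption about what it was attached as:

    # hedged sketch - the name depends on the bdev_nvme_attach_controller call
    scripts/rpc.py bdev_nvme_reset_controller Nvme2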
00:06:14.418 passed 00:06:14.418 Test: blockdev write read 8 blocks ...passed 00:06:14.418 Test: blockdev write read size > 128k ...passed 00:06:14.418 Test: blockdev write read invalid size ...passed 00:06:14.418 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:14.418 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:14.418 Test: blockdev write read max offset ...passed 00:06:14.418 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:14.418 Test: blockdev writev readv 8 blocks ...passed 00:06:14.418 Test: blockdev writev readv 30 x 1block ...passed 00:06:14.418 Test: blockdev writev readv block ...passed 00:06:14.418 Test: blockdev writev readv size > 128k ...passed 00:06:14.418 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:14.418 Test: blockdev comparev and writev ...[2024-10-15 13:40:28.198930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d7038000 len:0x1000 00:06:14.418 [2024-10-15 13:40:28.199051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:14.418 passed 00:06:14.418 Test: blockdev nvme passthru rw ...passed 00:06:14.418 Test: blockdev nvme passthru vendor specific ...passed 00:06:14.418 Test: blockdev nvme admin passthru ...[2024-10-15 13:40:28.200243] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:14.418 [2024-10-15 13:40:28.200270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:14.676 passed 00:06:14.676 Test: blockdev copy ...passed 00:06:14.676 Suite: bdevio tests on: Nvme1n1 00:06:14.676 Test: blockdev write read block ...passed 00:06:14.676 Test: blockdev write zeroes read block ...passed 00:06:14.676 Test: blockdev write zeroes read no split ...passed 00:06:14.676 Test: blockdev write zeroes read split ...passed 00:06:14.676 Test: blockdev write zeroes read split partial ...passed 00:06:14.676 Test: blockdev reset ...[2024-10-15 13:40:28.253040] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller 00:06:14.676 [2024-10-15 13:40:28.255459] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:06:14.676 passed 00:06:14.676 Test: blockdev write read 8 blocks ...passed 00:06:14.676 Test: blockdev write read size > 128k ...passed 00:06:14.676 Test: blockdev write read invalid size ...passed 00:06:14.676 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:14.676 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:14.676 Test: blockdev write read max offset ...passed 00:06:14.676 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:14.676 Test: blockdev writev readv 8 blocks ...passed 00:06:14.676 Test: blockdev writev readv 30 x 1block ...passed 00:06:14.676 Test: blockdev writev readv block ...passed 00:06:14.676 Test: blockdev writev readv size > 128k ...passed 00:06:14.676 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:14.676 Test: blockdev comparev and writev ...[2024-10-15 13:40:28.263179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d7034000 len:0x1000 00:06:14.676 [2024-10-15 13:40:28.263318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:14.676 passed 00:06:14.676 Test: blockdev nvme passthru rw ...passed 00:06:14.676 Test: blockdev nvme passthru vendor specific ...[2024-10-15 13:40:28.264097] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 Ppassed 00:06:14.676 Test: blockdev nvme admin passthru ...RP2 0x0 00:06:14.676 [2024-10-15 13:40:28.264177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:14.676 passed 00:06:14.676 Test: blockdev copy ...passed 00:06:14.676 Suite: bdevio tests on: Nvme0n1 00:06:14.676 Test: blockdev write read block ...passed 00:06:14.676 Test: blockdev write zeroes read block ...passed 00:06:14.676 Test: blockdev write zeroes read no split ...passed 00:06:14.676 Test: blockdev write zeroes read split ...passed 00:06:14.676 Test: blockdev write zeroes read split partial ...passed 00:06:14.676 Test: blockdev reset ...[2024-10-15 13:40:28.320937] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:06:14.676 [2024-10-15 13:40:28.323322] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
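The suite that just started is the odd one out: the bdev_get_bdevs dump earlier shows Nvme0n1 alone formatted with "md_size": 64 and "md_interleave": false, i.e. separate metadata, which is why its comparev_and_writev step is skipped just below with the "separate metadata which is not supported yet" message while everything else passes. The format is easy to confirm against a live app:

    # values as printed in the dump above
    scripts/rpc.py bdev_get_bdevs -b Nvme0n1 | jq '.[0] | {md_size, md_interleave, dif_type}'
    # -> { "md_size": 64, "md_interleave": false, "dif_type": 0 }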
00:06:14.676 passed 00:06:14.676 Test: blockdev write read 8 blocks ...passed 00:06:14.676 Test: blockdev write read size > 128k ...passed 00:06:14.676 Test: blockdev write read invalid size ...passed 00:06:14.676 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:14.676 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:14.676 Test: blockdev write read max offset ...passed 00:06:14.677 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:14.677 Test: blockdev writev readv 8 blocks ...passed 00:06:14.677 Test: blockdev writev readv 30 x 1block ...passed 00:06:14.677 Test: blockdev writev readv block ...passed 00:06:14.677 Test: blockdev writev readv size > 128k ...passed 00:06:14.677 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:14.677 Test: blockdev comparev and writev ...passed 00:06:14.677 Test: blockdev nvme passthru rw ...[2024-10-15 13:40:28.329780] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:06:14.677 separate metadata which is not supported yet. 00:06:14.677 passed 00:06:14.677 Test: blockdev nvme passthru vendor specific ...passed 00:06:14.677 Test: blockdev nvme admin passthru ...[2024-10-15 13:40:28.330350] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:06:14.677 [2024-10-15 13:40:28.330388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:06:14.677 passed 00:06:14.677 Test: blockdev copy ...passed 00:06:14.677 00:06:14.677 Run Summary: Type Total Ran Passed Failed Inactive 00:06:14.677 suites 6 6 n/a 0 0 00:06:14.677 tests 138 138 138 0 0 00:06:14.677 asserts 893 893 893 0 n/a 00:06:14.677 00:06:14.677 Elapsed time = 1.024 seconds 00:06:14.677 0 00:06:14.677 13:40:28 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 59904 00:06:14.677 13:40:28 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 59904 ']' 00:06:14.677 13:40:28 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 59904 00:06:14.677 13:40:28 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:06:14.677 13:40:28 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:14.677 13:40:28 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59904 00:06:14.677 13:40:28 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:14.677 13:40:28 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:14.677 13:40:28 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59904' 00:06:14.677 killing process with pid 59904 00:06:14.677 13:40:28 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@969 -- # kill 59904 00:06:14.677 13:40:28 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@974 -- # wait 59904 00:06:15.243 13:40:28 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:06:15.243 00:06:15.243 real 0m1.950s 00:06:15.243 user 0m5.042s 00:06:15.243 sys 0m0.251s 00:06:15.243 13:40:28 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:15.243 13:40:28 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:15.243 ************************************ 00:06:15.243 END 
TEST bdev_bounds 00:06:15.243 ************************************ 00:06:15.243 13:40:28 blockdev_nvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:15.243 13:40:28 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:06:15.243 13:40:28 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:15.243 13:40:28 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:15.243 ************************************ 00:06:15.243 START TEST bdev_nbd 00:06:15.243 ************************************ 00:06:15.243 13:40:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:15.243 13:40:28 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:06:15.243 13:40:28 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:06:15.243 13:40:28 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:15.243 13:40:28 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:15.243 13:40:28 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:15.243 13:40:28 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:06:15.243 13:40:28 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:06:15.243 13:40:28 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:06:15.243 13:40:28 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:06:15.243 13:40:28 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:06:15.243 13:40:28 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:06:15.243 13:40:28 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:15.243 13:40:28 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:06:15.243 13:40:28 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:15.243 13:40:28 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:06:15.243 13:40:28 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=59957 00:06:15.243 13:40:28 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:06:15.243 13:40:28 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 59957 /var/tmp/spdk-nbd.sock 00:06:15.243 13:40:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 59957 ']' 00:06:15.243 13:40:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:15.243 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
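bdev_nbd runs against a bare bdev_svc app on its own RPC socket (the rpc_addr=/var/tmp/spdk-nbd.sock just set) and exercises the kernel nbd bridge in two passes: a start/stop pass (nbd_rpc_start_stop_verify) where the app picks device nodes itself, then a data pass (nbd_rpc_data_verify) with the explicit /dev/nbd0 ... /dev/nbd13 list. The per-bdev round-trip repeated below, assuming the nbd kernel module is loaded:

    rpc="scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    $rpc nbd_start_disk Nvme0n1 /dev/nbd0   # omit the node to let SPDK pick a free one
    $rpc nbd_get_disks                      # JSON list of nbd_device/bdev_name pairs
    $rpc nbd_stop_disk /dev/nbd0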
00:06:15.243 13:40:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:15.243 13:40:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:15.243 13:40:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:15.243 13:40:28 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:15.243 13:40:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:06:15.243 [2024-10-15 13:40:29.021424] Starting SPDK v25.01-pre git sha1 5a8c76d99 / DPDK 24.03.0 initialization... 00:06:15.243 [2024-10-15 13:40:29.021517] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:15.502 [2024-10-15 13:40:29.163096] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.502 [2024-10-15 13:40:29.242237] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.438 13:40:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:16.438 13:40:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:06:16.438 13:40:29 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:16.438 13:40:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:16.438 13:40:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:16.438 13:40:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:06:16.438 13:40:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:16.438 13:40:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:16.438 13:40:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:16.438 13:40:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:06:16.438 13:40:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:06:16.438 13:40:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:06:16.438 13:40:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:06:16.438 13:40:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:16.438 13:40:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:06:16.438 13:40:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:06:16.438 13:40:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:06:16.438 13:40:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:06:16.438 13:40:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:16.438 13:40:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # 
local i 00:06:16.438 13:40:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:16.438 13:40:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:16.438 13:40:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:16.438 13:40:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:16.438 13:40:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:16.438 13:40:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:16.438 13:40:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:16.438 1+0 records in 00:06:16.438 1+0 records out 00:06:16.438 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000435416 s, 9.4 MB/s 00:06:16.438 13:40:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:16.438 13:40:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:16.438 13:40:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:16.438 13:40:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:16.438 13:40:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:16.438 13:40:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:16.438 13:40:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:16.438 13:40:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:06:16.696 13:40:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:06:16.696 13:40:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:06:16.696 13:40:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:06:16.696 13:40:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:16.696 13:40:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:16.696 13:40:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:16.696 13:40:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:16.696 13:40:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:16.696 13:40:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:16.696 13:40:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:16.696 13:40:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:16.696 13:40:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:16.696 1+0 records in 00:06:16.696 1+0 records out 00:06:16.696 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00109818 s, 3.7 MB/s 00:06:16.696 13:40:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:16.696 13:40:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:16.696 13:40:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
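Every waitfornbd block in this stretch is the same liveness probe: poll /proc/partitions until the node appears, then read one 4 KiB block with O_DIRECT and require a non-empty copy, proving the kernel device is actually wired through to the bdev rather than merely registered (hence the per-device dd throughput lines). Simplified, with the retry delay an assumption since the trace always hits on the first poll:

    nbd=nbd0
    for _ in $(seq 1 20); do
        grep -q -w "$nbd" /proc/partitions && break
        sleep 0.1                            # hypothetical delay, not shown in the trace
    done
    dd if="/dev/$nbd" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
    test "$(stat -c %s /tmp/nbdtest)" != 0   # 4096 bytes on success, as logged
    rm -f /tmp/nbdtest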
00:06:16.696 13:40:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:16.696 13:40:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:16.696 13:40:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:16.696 13:40:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:16.696 13:40:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:06:16.955 13:40:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:06:16.956 13:40:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:06:16.956 13:40:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:06:16.956 13:40:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd2 00:06:16.956 13:40:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:16.956 13:40:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:16.956 13:40:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:16.956 13:40:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd2 /proc/partitions 00:06:16.956 13:40:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:16.956 13:40:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:16.956 13:40:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:16.956 13:40:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:16.956 1+0 records in 00:06:16.956 1+0 records out 00:06:16.956 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00116313 s, 3.5 MB/s 00:06:16.956 13:40:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:16.956 13:40:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:16.956 13:40:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:16.956 13:40:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:16.956 13:40:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:16.956 13:40:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:16.956 13:40:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:16.956 13:40:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:06:17.214 13:40:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:06:17.214 13:40:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:06:17.214 13:40:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:06:17.214 13:40:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd3 00:06:17.214 13:40:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:17.214 13:40:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:17.214 13:40:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:17.214 13:40:30 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@872 -- # grep -q -w nbd3 /proc/partitions 00:06:17.214 13:40:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:17.214 13:40:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:17.214 13:40:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:17.214 13:40:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:17.214 1+0 records in 00:06:17.214 1+0 records out 00:06:17.214 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000758432 s, 5.4 MB/s 00:06:17.214 13:40:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:17.214 13:40:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:17.214 13:40:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:17.214 13:40:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:17.214 13:40:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:17.214 13:40:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:17.214 13:40:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:17.214 13:40:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:06:17.471 13:40:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:06:17.472 13:40:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:06:17.472 13:40:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:06:17.472 13:40:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd4 00:06:17.472 13:40:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:17.472 13:40:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:17.472 13:40:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:17.472 13:40:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd4 /proc/partitions 00:06:17.472 13:40:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:17.472 13:40:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:17.472 13:40:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:17.472 13:40:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:17.472 1+0 records in 00:06:17.472 1+0 records out 00:06:17.472 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000816305 s, 5.0 MB/s 00:06:17.472 13:40:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:17.472 13:40:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:17.472 13:40:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:17.472 13:40:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:17.472 13:40:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:17.472 13:40:31 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:17.472 13:40:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:17.472 13:40:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:06:17.730 13:40:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:06:17.730 13:40:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:06:17.730 13:40:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:06:17.730 13:40:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd5 00:06:17.730 13:40:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:17.730 13:40:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:17.730 13:40:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:17.730 13:40:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd5 /proc/partitions 00:06:17.730 13:40:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:17.730 13:40:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:17.730 13:40:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:17.730 13:40:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:17.730 1+0 records in 00:06:17.730 1+0 records out 00:06:17.730 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00107316 s, 3.8 MB/s 00:06:17.730 13:40:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:17.730 13:40:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:17.730 13:40:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:17.730 13:40:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:17.730 13:40:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:17.730 13:40:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:17.730 13:40:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:17.730 13:40:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:17.730 13:40:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:06:17.730 { 00:06:17.730 "nbd_device": "/dev/nbd0", 00:06:17.730 "bdev_name": "Nvme0n1" 00:06:17.730 }, 00:06:17.730 { 00:06:17.730 "nbd_device": "/dev/nbd1", 00:06:17.730 "bdev_name": "Nvme1n1" 00:06:17.730 }, 00:06:17.730 { 00:06:17.730 "nbd_device": "/dev/nbd2", 00:06:17.730 "bdev_name": "Nvme2n1" 00:06:17.730 }, 00:06:17.730 { 00:06:17.730 "nbd_device": "/dev/nbd3", 00:06:17.730 "bdev_name": "Nvme2n2" 00:06:17.730 }, 00:06:17.730 { 00:06:17.730 "nbd_device": "/dev/nbd4", 00:06:17.730 "bdev_name": "Nvme2n3" 00:06:17.730 }, 00:06:17.730 { 00:06:17.730 "nbd_device": "/dev/nbd5", 00:06:17.730 "bdev_name": "Nvme3n1" 00:06:17.730 } 00:06:17.730 ]' 00:06:17.730 13:40:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:06:17.730 13:40:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | 
.nbd_device' 00:06:17.730 13:40:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:06:17.730 { 00:06:17.730 "nbd_device": "/dev/nbd0", 00:06:17.730 "bdev_name": "Nvme0n1" 00:06:17.730 }, 00:06:17.730 { 00:06:17.730 "nbd_device": "/dev/nbd1", 00:06:17.730 "bdev_name": "Nvme1n1" 00:06:17.730 }, 00:06:17.730 { 00:06:17.730 "nbd_device": "/dev/nbd2", 00:06:17.730 "bdev_name": "Nvme2n1" 00:06:17.730 }, 00:06:17.730 { 00:06:17.730 "nbd_device": "/dev/nbd3", 00:06:17.730 "bdev_name": "Nvme2n2" 00:06:17.730 }, 00:06:17.730 { 00:06:17.730 "nbd_device": "/dev/nbd4", 00:06:17.730 "bdev_name": "Nvme2n3" 00:06:17.730 }, 00:06:17.730 { 00:06:17.730 "nbd_device": "/dev/nbd5", 00:06:17.730 "bdev_name": "Nvme3n1" 00:06:17.730 } 00:06:17.730 ]' 00:06:17.988 13:40:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:06:17.988 13:40:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:17.988 13:40:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:06:17.988 13:40:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:17.988 13:40:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:17.988 13:40:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:17.988 13:40:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:17.988 13:40:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:17.988 13:40:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:17.988 13:40:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:17.988 13:40:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:17.988 13:40:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:17.988 13:40:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:17.988 13:40:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:17.988 13:40:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:17.988 13:40:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:17.988 13:40:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:18.246 13:40:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:18.246 13:40:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:18.246 13:40:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:18.246 13:40:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:18.246 13:40:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:18.246 13:40:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:18.246 13:40:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:18.246 13:40:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:18.246 13:40:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:18.246 13:40:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:06:18.504 13:40:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:06:18.504 13:40:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:06:18.504 13:40:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:06:18.504 13:40:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:18.504 13:40:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:18.504 13:40:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:06:18.504 13:40:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:18.504 13:40:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:18.504 13:40:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:18.504 13:40:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:06:18.762 13:40:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:06:18.762 13:40:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:06:18.762 13:40:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:06:18.762 13:40:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:18.762 13:40:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:18.762 13:40:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:06:18.762 13:40:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:18.762 13:40:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:18.762 13:40:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:18.762 13:40:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:06:19.021 13:40:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:06:19.021 13:40:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:06:19.021 13:40:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:06:19.021 13:40:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:19.021 13:40:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:19.021 13:40:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:06:19.021 13:40:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:19.021 13:40:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:19.021 13:40:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:19.021 13:40:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:06:19.021 13:40:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:06:19.021 13:40:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:06:19.021 13:40:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:06:19.021 13:40:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:19.021 13:40:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 
-- # (( i <= 20 )) 00:06:19.021 13:40:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:06:19.021 13:40:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:19.021 13:40:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:19.021 13:40:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:19.021 13:40:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:19.021 13:40:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:19.279 13:40:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:19.279 13:40:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:19.279 13:40:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:19.279 13:40:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:19.279 13:40:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:19.279 13:40:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:06:19.279 13:40:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:06:19.279 13:40:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:06:19.279 13:40:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:06:19.279 13:40:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:06:19.279 13:40:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:06:19.279 13:40:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:06:19.279 13:40:33 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:19.279 13:40:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:19.279 13:40:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:19.279 13:40:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:19.279 13:40:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:19.279 13:40:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:19.279 13:40:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:19.279 13:40:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:19.279 13:40:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:19.279 13:40:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:19.279 13:40:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:19.279 13:40:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:19.279 13:40:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:06:19.279 13:40:33 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:19.279 13:40:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:19.279 13:40:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:06:19.536 /dev/nbd0 00:06:19.536 13:40:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:19.536 13:40:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:19.536 13:40:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:19.536 13:40:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:19.536 13:40:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:19.536 13:40:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:19.536 13:40:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:19.536 13:40:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:19.536 13:40:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:19.536 13:40:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:19.536 13:40:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:19.536 1+0 records in 00:06:19.536 1+0 records out 00:06:19.536 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000321406 s, 12.7 MB/s 00:06:19.536 13:40:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:19.536 13:40:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:19.537 13:40:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:19.537 13:40:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:19.537 13:40:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:19.537 13:40:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:19.537 13:40:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:19.537 13:40:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:06:19.797 /dev/nbd1 00:06:19.797 13:40:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:19.797 13:40:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:19.797 13:40:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:19.797 13:40:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:19.797 13:40:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:19.797 13:40:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:19.797 13:40:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:19.797 13:40:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:19.797 13:40:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:19.797 13:40:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:19.797 13:40:33 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:19.797 1+0 records in 00:06:19.797 1+0 records out 00:06:19.797 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000832027 s, 4.9 MB/s 00:06:19.797 13:40:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:19.797 13:40:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:19.797 13:40:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:19.797 13:40:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:19.797 13:40:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:19.797 13:40:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:19.797 13:40:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:19.797 13:40:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:06:20.056 /dev/nbd10 00:06:20.056 13:40:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:06:20.056 13:40:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:06:20.056 13:40:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd10 00:06:20.056 13:40:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:20.056 13:40:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:20.056 13:40:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:20.056 13:40:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd10 /proc/partitions 00:06:20.056 13:40:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:20.056 13:40:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:20.056 13:40:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:20.056 13:40:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:20.056 1+0 records in 00:06:20.056 1+0 records out 00:06:20.056 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000784088 s, 5.2 MB/s 00:06:20.056 13:40:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:20.056 13:40:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:20.056 13:40:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:20.056 13:40:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:20.056 13:40:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:20.056 13:40:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:20.056 13:40:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:20.056 13:40:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:06:20.314 /dev/nbd11 00:06:20.314 13:40:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 
00:06:20.314 13:40:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:06:20.314 13:40:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd11 00:06:20.314 13:40:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:20.314 13:40:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:20.314 13:40:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:20.314 13:40:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd11 /proc/partitions 00:06:20.314 13:40:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:20.314 13:40:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:20.314 13:40:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:20.314 13:40:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:20.314 1+0 records in 00:06:20.314 1+0 records out 00:06:20.314 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000833168 s, 4.9 MB/s 00:06:20.314 13:40:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:20.314 13:40:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:20.314 13:40:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:20.314 13:40:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:20.314 13:40:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:20.314 13:40:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:20.314 13:40:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:20.314 13:40:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:06:20.572 /dev/nbd12 00:06:20.572 13:40:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:06:20.573 13:40:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:06:20.573 13:40:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd12 00:06:20.573 13:40:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:20.573 13:40:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:20.573 13:40:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:20.573 13:40:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd12 /proc/partitions 00:06:20.573 13:40:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:20.573 13:40:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:20.573 13:40:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:20.573 13:40:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:20.573 1+0 records in 00:06:20.573 1+0 records out 00:06:20.573 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00116751 s, 3.5 MB/s 00:06:20.573 13:40:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
00:06:20.573 13:40:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096
00:06:20.573 13:40:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:06:20.573 13:40:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:06:20.573 13:40:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0
00:06:20.573 13:40:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:06:20.573 13:40:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 ))
00:06:20.573 13:40:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13
00:06:20.831 /dev/nbd13
00:06:20.831 13:40:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13
00:06:20.831 13:40:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13
00:06:20.831 13:40:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd13
00:06:20.831 13:40:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i
00:06:20.831 13:40:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:06:20.831 13:40:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:06:20.831 13:40:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd13 /proc/partitions
00:06:20.831 13:40:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break
00:06:20.831 13:40:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:06:20.831 13:40:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:06:20.831 13:40:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:06:20.831 1+0 records in
00:06:20.831 1+0 records out
00:06:20.831 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00112319 s, 3.6 MB/s
00:06:20.831 13:40:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:06:20.831 13:40:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096
00:06:20.831 13:40:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:06:20.831 13:40:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:06:20.831 13:40:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0
00:06:20.831 13:40:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:06:20.831 13:40:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 ))
00:06:20.831 13:40:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:06:20.831 13:40:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:20.831 13:40:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:06:21.090 13:40:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:06:21.090 {
00:06:21.090 "nbd_device": "/dev/nbd0",
00:06:21.090 "bdev_name": "Nvme0n1"
00:06:21.090 },
00:06:21.090 {
00:06:21.090 "nbd_device": "/dev/nbd1",
00:06:21.090 "bdev_name": "Nvme1n1"
00:06:21.090 },
00:06:21.090 {
00:06:21.090 "nbd_device": "/dev/nbd10",
00:06:21.090 "bdev_name": "Nvme2n1"
00:06:21.090 },
00:06:21.090 {
00:06:21.090 "nbd_device": "/dev/nbd11",
00:06:21.090 "bdev_name": "Nvme2n2"
00:06:21.090 },
00:06:21.090 {
00:06:21.090 "nbd_device": "/dev/nbd12",
00:06:21.090 "bdev_name": "Nvme2n3"
00:06:21.090 },
00:06:21.090 {
00:06:21.090 "nbd_device": "/dev/nbd13",
00:06:21.090 "bdev_name": "Nvme3n1"
00:06:21.090 }
00:06:21.090 ]'
00:06:21.090 13:40:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[
00:06:21.090 {
00:06:21.090 "nbd_device": "/dev/nbd0",
00:06:21.090 "bdev_name": "Nvme0n1"
00:06:21.090 },
00:06:21.090 {
00:06:21.090 "nbd_device": "/dev/nbd1",
00:06:21.090 "bdev_name": "Nvme1n1"
00:06:21.090 },
00:06:21.090 {
00:06:21.090 "nbd_device": "/dev/nbd10",
00:06:21.090 "bdev_name": "Nvme2n1"
00:06:21.090 },
00:06:21.090 {
00:06:21.090 "nbd_device": "/dev/nbd11",
00:06:21.090 "bdev_name": "Nvme2n2"
00:06:21.090 },
00:06:21.090 {
00:06:21.090 "nbd_device": "/dev/nbd12",
00:06:21.090 "bdev_name": "Nvme2n3"
00:06:21.090 },
00:06:21.090 {
00:06:21.090 "nbd_device": "/dev/nbd13",
00:06:21.090 "bdev_name": "Nvme3n1"
00:06:21.090 }
00:06:21.090 ]'
00:06:21.090 13:40:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:06:21.090 13:40:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:06:21.090 /dev/nbd1
00:06:21.090 /dev/nbd10
00:06:21.090 /dev/nbd11
00:06:21.090 /dev/nbd12
00:06:21.090 /dev/nbd13'
00:06:21.090 13:40:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:06:21.090 /dev/nbd1
00:06:21.090 /dev/nbd10
00:06:21.090 /dev/nbd11
00:06:21.090 /dev/nbd12
00:06:21.090 /dev/nbd13'
00:06:21.090 13:40:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:06:21.090 13:40:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6
00:06:21.090 13:40:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6
00:06:21.090 13:40:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6
00:06:21.090 13:40:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']'
00:06:21.090 13:40:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write
00:06:21.090 13:40:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13')
00:06:21.090 13:40:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list
00:06:21.090 13:40:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write
00:06:21.090 13:40:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
00:06:21.090 13:40:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:06:21.090 13:40:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256
00:06:21.090 256+0 records in
00:06:21.090 256+0 records out
00:06:21.090 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00701817 s, 149 MB/s
00:06:21.090 13:40:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:06:21.090 13:40:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:06:21.090 256+0 records in
00:06:21.090 256+0 records out
00:06:21.090 1048576 bytes (1.0 MB, 1.0 MiB)
copied, 0.132713 s, 7.9 MB/s 00:06:21.090 13:40:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:21.090 13:40:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:21.349 256+0 records in 00:06:21.349 256+0 records out 00:06:21.349 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.16374 s, 6.4 MB/s 00:06:21.349 13:40:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:21.349 13:40:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:06:21.607 256+0 records in 00:06:21.607 256+0 records out 00:06:21.607 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.158762 s, 6.6 MB/s 00:06:21.607 13:40:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:21.607 13:40:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:06:21.607 256+0 records in 00:06:21.607 256+0 records out 00:06:21.607 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.10922 s, 9.6 MB/s 00:06:21.607 13:40:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:21.607 13:40:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:06:21.607 256+0 records in 00:06:21.607 256+0 records out 00:06:21.607 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0933528 s, 11.2 MB/s 00:06:21.607 13:40:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:21.607 13:40:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:06:21.866 256+0 records in 00:06:21.866 256+0 records out 00:06:21.866 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.117668 s, 8.9 MB/s 00:06:21.866 13:40:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:06:21.866 13:40:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:21.866 13:40:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:21.866 13:40:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:21.866 13:40:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:21.866 13:40:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:21.866 13:40:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:21.866 13:40:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:21.866 13:40:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:06:21.866 13:40:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:21.866 13:40:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:06:21.866 13:40:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 
00:06:21.866 13:40:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:06:21.866 13:40:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:21.866 13:40:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:06:21.866 13:40:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:21.866 13:40:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:06:21.866 13:40:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:21.866 13:40:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:06:21.866 13:40:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:21.866 13:40:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:21.866 13:40:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:21.866 13:40:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:21.866 13:40:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:21.866 13:40:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:21.866 13:40:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:21.866 13:40:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:22.125 13:40:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:22.125 13:40:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:22.125 13:40:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:22.125 13:40:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:22.125 13:40:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:22.125 13:40:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:22.125 13:40:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:22.125 13:40:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:22.125 13:40:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:22.125 13:40:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:22.383 13:40:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:22.383 13:40:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:22.383 13:40:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:22.383 13:40:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:22.383 13:40:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:22.383 13:40:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:22.383 13:40:35 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@41 -- # break 00:06:22.383 13:40:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:22.383 13:40:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:22.383 13:40:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:06:22.383 13:40:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:06:22.383 13:40:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:06:22.383 13:40:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:06:22.383 13:40:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:22.383 13:40:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:22.383 13:40:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:06:22.383 13:40:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:22.383 13:40:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:22.383 13:40:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:22.383 13:40:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:06:22.642 13:40:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:06:22.642 13:40:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:06:22.642 13:40:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:06:22.642 13:40:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:22.642 13:40:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:22.642 13:40:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:06:22.642 13:40:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:22.642 13:40:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:22.642 13:40:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:22.642 13:40:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:06:22.899 13:40:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:06:22.899 13:40:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:06:22.899 13:40:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:06:22.899 13:40:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:22.899 13:40:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:22.899 13:40:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:06:22.899 13:40:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:22.899 13:40:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:22.899 13:40:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:22.899 13:40:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:06:23.157 13:40:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:06:23.157 13:40:36 blockdev_nvme.bdev_nbd 
-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:06:23.157 13:40:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:06:23.157 13:40:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:23.157 13:40:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:23.157 13:40:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:06:23.157 13:40:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:23.157 13:40:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:23.157 13:40:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:23.157 13:40:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:23.157 13:40:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:23.416 13:40:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:23.416 13:40:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:23.416 13:40:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:23.416 13:40:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:23.416 13:40:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:06:23.416 13:40:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:23.416 13:40:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:06:23.416 13:40:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:06:23.416 13:40:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:06:23.416 13:40:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:06:23.416 13:40:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:23.416 13:40:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:06:23.416 13:40:37 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:06:23.416 13:40:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:23.416 13:40:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:06:23.416 13:40:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:06:23.706 malloc_lvol_verify 00:06:23.706 13:40:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:06:23.706 cb223133-dc52-418c-bec0-76f9077bab53 00:06:23.706 13:40:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:06:23.965 3c18eda1-13ee-4b8f-8158-8bcf13751b4d 00:06:23.965 13:40:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:06:24.223 /dev/nbd0 00:06:24.223 13:40:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:06:24.223 13:40:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:06:24.223 13:40:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@148 -- 
# [[ -e /sys/block/nbd0/size ]] 00:06:24.223 13:40:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:06:24.223 13:40:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:06:24.223 mke2fs 1.47.0 (5-Feb-2023) 00:06:24.223 Discarding device blocks: 0/4096 done 00:06:24.223 Creating filesystem with 4096 1k blocks and 1024 inodes 00:06:24.223 00:06:24.223 Allocating group tables: 0/1 done 00:06:24.223 Writing inode tables: 0/1 done 00:06:24.223 Creating journal (1024 blocks): done 00:06:24.223 Writing superblocks and filesystem accounting information: 0/1 done 00:06:24.223 00:06:24.223 13:40:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:06:24.223 13:40:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:24.223 13:40:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:06:24.223 13:40:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:24.223 13:40:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:24.223 13:40:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:24.223 13:40:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:24.481 13:40:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:24.481 13:40:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:24.481 13:40:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:24.481 13:40:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:24.481 13:40:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:24.481 13:40:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:24.481 13:40:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:24.481 13:40:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:24.481 13:40:38 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 59957 00:06:24.481 13:40:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 59957 ']' 00:06:24.481 13:40:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 59957 00:06:24.481 13:40:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:06:24.481 13:40:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:24.481 13:40:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59957 00:06:24.481 13:40:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:24.481 13:40:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:24.481 killing process with pid 59957 00:06:24.481 13:40:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59957' 00:06:24.481 13:40:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@969 -- # kill 59957 00:06:24.481 13:40:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@974 -- # wait 59957 00:06:25.420 13:40:38 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:06:25.420 00:06:25.420 real 0m9.950s 00:06:25.421 user 0m14.029s 00:06:25.421 sys 0m3.087s 00:06:25.421 13:40:38 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:06:25.421 13:40:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:06:25.421 ************************************ 00:06:25.421 END TEST bdev_nbd 00:06:25.421 ************************************ 00:06:25.421 skipping fio tests on NVMe due to multi-ns failures. 00:06:25.421 13:40:38 blockdev_nvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:06:25.421 13:40:38 blockdev_nvme -- bdev/blockdev.sh@763 -- # '[' nvme = nvme ']' 00:06:25.421 13:40:38 blockdev_nvme -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:06:25.421 13:40:38 blockdev_nvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:25.421 13:40:38 blockdev_nvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:06:25.421 13:40:38 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:06:25.421 13:40:38 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:25.421 13:40:38 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:25.421 ************************************ 00:06:25.421 START TEST bdev_verify 00:06:25.421 ************************************ 00:06:25.421 13:40:38 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:06:25.421 [2024-10-15 13:40:39.049420] Starting SPDK v25.01-pre git sha1 5a8c76d99 / DPDK 24.03.0 initialization... 00:06:25.421 [2024-10-15 13:40:39.049536] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60332 ] 00:06:25.421 [2024-10-15 13:40:39.198023] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:25.679 [2024-10-15 13:40:39.295448] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:25.679 [2024-10-15 13:40:39.295531] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.245 Running I/O for 5 seconds... 
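For reference, the bdev_nbd pass that finished above spends most of its trace inside the waitfornbd/waitfornbd_exit helpers: poll /proc/partitions until the nbd device appears (or disappears), then prove a started device is readable with a single direct-I/O block. A minimal bash sketch of that polling pattern follows; the retry cap of 20 and the dd/stat read-back check mirror the traced commands, while the sleep between attempts and the /tmp scratch path are assumptions, since the passing trace never shows a failed iteration.

# Sketch of the waitfornbd pattern traced above (assumptions noted inline).
waitfornbd_sketch() {
  local nbd_name=$1 i size
  for ((i = 1; i <= 20; i++)); do
    grep -q -w "$nbd_name" /proc/partitions && break
    sleep 0.1   # assumed back-off; every attempt in the trace above succeeded on the first try
  done
  # Read one 4 KiB block back with direct I/O to prove the device is usable.
  dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct || return 1
  size=$(stat -c %s /tmp/nbdtest)
  rm -f /tmp/nbdtest
  [ "$size" != 0 ]
}

waitfornbd_exit, as traced for the nbd_stop_disk calls, runs the same /proc/partitions loop with the condition inverted, returning once the name is gone.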
00:06:28.558 19328.00 IOPS, 75.50 MiB/s [2024-10-15T13:40:43.278Z] 20320.00 IOPS, 79.38 MiB/s [2024-10-15T13:40:44.210Z] 19968.00 IOPS, 78.00 MiB/s [2024-10-15T13:40:45.152Z] 20960.00 IOPS, 81.88 MiB/s [2024-10-15T13:40:45.152Z] 21427.20 IOPS, 83.70 MiB/s
00:06:31.364 Latency(us)
00:06:31.364 [2024-10-15T13:40:45.152Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:06:31.364 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:06:31.364 Verification LBA range: start 0x0 length 0xbd0bd
00:06:31.364 Nvme0n1 : 5.05 1851.76 7.23 0.00 0.00 68819.41 13409.67 83079.48
00:06:31.364 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:06:31.364 Verification LBA range: start 0xbd0bd length 0xbd0bd
00:06:31.364 Nvme0n1 : 5.06 1656.88 6.47 0.00 0.00 76871.16 8721.33 80659.69
00:06:31.364 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:06:31.364 Verification LBA range: start 0x0 length 0xa0000
00:06:31.364 Nvme1n1 : 5.05 1851.18 7.23 0.00 0.00 68695.61 15224.52 75416.81
00:06:31.364 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:06:31.364 Verification LBA range: start 0xa0000 length 0xa0000
00:06:31.364 Nvme1n1 : 5.08 1664.47 6.50 0.00 0.00 76577.74 12351.02 72190.42
00:06:31.364 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:06:31.364 Verification LBA range: start 0x0 length 0x80000
00:06:31.364 Nvme2n1 : 5.09 1861.05 7.27 0.00 0.00 68305.60 11846.89 68157.44
00:06:31.364 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:06:31.364 Verification LBA range: start 0x80000 length 0x80000
00:06:31.364 Nvme2n1 : 5.08 1663.97 6.50 0.00 0.00 76389.72 12653.49 67350.84
00:06:31.364 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:06:31.364 Verification LBA range: start 0x0 length 0x80000
00:06:31.364 Nvme2n2 : 5.09 1860.56 7.27 0.00 0.00 68131.14 11947.72 62511.26
00:06:31.364 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:06:31.364 Verification LBA range: start 0x80000 length 0x80000
00:06:31.364 Nvme2n2 : 5.08 1663.48 6.50 0.00 0.00 76280.73 13308.85 64124.46
00:06:31.364 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:06:31.364 Verification LBA range: start 0x0 length 0x80000
00:06:31.365 Nvme2n3 : 5.09 1860.07 7.27 0.00 0.00 68023.57 12098.95 64527.75
00:06:31.365 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:06:31.365 Verification LBA range: start 0x80000 length 0x80000
00:06:31.365 Nvme2n3 : 5.08 1662.98 6.50 0.00 0.00 76170.74 12351.02 64124.46
00:06:31.365 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:06:31.365 Verification LBA range: start 0x0 length 0x20000
00:06:31.365 Nvme3n1 : 5.09 1859.59 7.26 0.00 0.00 67917.72 12351.02 66544.25
00:06:31.365 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:06:31.365 Verification LBA range: start 0x20000 length 0x20000
00:06:31.365 Nvme3n1 : 5.08 1662.55 6.49 0.00 0.00 76017.51 12804.73 68560.74
00:06:31.365 [2024-10-15T13:40:45.153Z] ===================================================================================================================
00:06:31.365 [2024-10-15T13:40:45.153Z] Total : 21118.54 82.49 0.00 0.00 72124.70 8721.33 83079.48
00:06:32.752
00:06:32.752 real 0m7.299s
00:06:32.752 user 0m13.621s
00:06:32.752 sys 0m0.215s
00:06:32.752 13:40:46 blockdev_nvme.bdev_verify --
common/autotest_common.sh@1126 -- # xtrace_disable 00:06:32.752 ************************************ 00:06:32.752 END TEST bdev_verify 00:06:32.752 ************************************ 00:06:32.752 13:40:46 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:06:32.752 13:40:46 blockdev_nvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:06:32.752 13:40:46 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:06:32.752 13:40:46 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:32.752 13:40:46 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:32.752 ************************************ 00:06:32.752 START TEST bdev_verify_big_io 00:06:32.752 ************************************ 00:06:32.752 13:40:46 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:06:32.752 [2024-10-15 13:40:46.429607] Starting SPDK v25.01-pre git sha1 5a8c76d99 / DPDK 24.03.0 initialization... 00:06:32.752 [2024-10-15 13:40:46.429757] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60430 ] 00:06:33.014 [2024-10-15 13:40:46.588459] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:33.014 [2024-10-15 13:40:46.752910] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:33.014 [2024-10-15 13:40:46.753010] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.959 Running I/O for 5 seconds... 
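While the big-I/O pass above runs, note that bdev_verify and bdev_verify_big_io drive the exact same bdevperf binary and differ only in the -o I/O size (4096 versus 65536 bytes). Condensed from the two invocations traced above, with the repository path shortened to $SPDK purely for readability:

SPDK=/home/vagrant/spdk_repo/spdk
# bdev_verify: 4 KiB I/O, queue depth 128, 5 s run, cores 0-1
$SPDK/build/examples/bdevperf --json $SPDK/test/bdev/bdev.json \
    -q 128 -o 4096 -w verify -t 5 -C -m 0x3
# bdev_verify_big_io: identical except for the 64 KiB I/O size
$SPDK/build/examples/bdevperf --json $SPDK/test/bdev/bdev.json \
    -q 128 -o 65536 -w verify -t 5 -C -m 0x3

The per-device IOPS difference between the two result tables reflects that block-size change, not a configuration difference.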
00:06:39.779 923.00 IOPS, 57.69 MiB/s [2024-10-15T13:40:53.825Z] 3047.00 IOPS, 190.44 MiB/s
00:06:40.037 Latency(us)
00:06:40.037 [2024-10-15T13:40:53.825Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:06:40.037 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:06:40.037 Verification LBA range: start 0x0 length 0xbd0b
00:06:40.037 Nvme0n1 : 5.43 176.87 11.05 0.00 0.00 701796.35 24903.68 780785.82
00:06:40.037 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:06:40.037 Verification LBA range: start 0xbd0b length 0xbd0b
00:06:40.037 Nvme0n1 : 5.63 90.92 5.68 0.00 0.00 1331476.28 19156.68 1509949.44
00:06:40.037 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:06:40.037 Verification LBA range: start 0x0 length 0xa000
00:06:40.037 Nvme1n1 : 5.44 174.58 10.91 0.00 0.00 698571.79 64527.75 967916.31
00:06:40.037 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:06:40.037 Verification LBA range: start 0xa000 length 0xa000
00:06:40.037 Nvme1n1 : 5.73 100.47 6.28 0.00 0.00 1163919.89 19559.98 1200216.22
00:06:40.037 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:06:40.037 Verification LBA range: start 0x0 length 0x8000
00:06:40.037 Nvme2n1 : 5.56 172.90 10.81 0.00 0.00 680975.95 76626.71 1142141.24
00:06:40.037 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:06:40.037 Verification LBA range: start 0x8000 length 0x8000
00:06:40.037 Nvme2n1 : 5.79 107.32 6.71 0.00 0.00 1035387.54 18551.73 1155046.79
00:06:40.037 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:06:40.037 Verification LBA range: start 0x0 length 0x8000
00:06:40.037 Nvme2n2 : 5.69 184.16 11.51 0.00 0.00 634910.13 36095.21 1155046.79
00:06:40.037 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:06:40.037 Verification LBA range: start 0x8000 length 0x8000
00:06:40.037 Nvme2n2 : 5.87 127.91 7.99 0.00 0.00 847110.50 13712.15 1180857.90
00:06:40.037 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:06:40.037 Verification LBA range: start 0x0 length 0x8000
00:06:40.037 Nvme2n3 : 5.69 192.71 12.04 0.00 0.00 594248.67 4335.46 903388.55
00:06:40.037 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:06:40.037 Verification LBA range: start 0x8000 length 0x8000
00:06:40.037 Nvme2n3 : 6.05 186.71 11.67 0.00 0.00 556385.45 6906.49 1258291.20
00:06:40.037 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:06:40.037 Verification LBA range: start 0x0 length 0x2000
00:06:40.037 Nvme3n1 : 5.69 203.26 12.70 0.00 0.00 550805.82 1247.70 761427.50
00:06:40.037 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:06:40.037 Verification LBA range: start 0x2000 length 0x2000
00:06:40.037 Nvme3n1 : 6.26 317.18 19.82 0.00 0.00 314824.25 100.43 1232480.10
00:06:40.037 [2024-10-15T13:40:53.825Z] ===================================================================================================================
00:06:40.037 [2024-10-15T13:40:53.825Z] Total : 2034.99 127.19 0.00 0.00 666645.31 100.43 1509949.44
00:06:41.936
00:06:41.936 real 0m9.255s
00:06:41.936 user 0m17.297s
00:06:41.936 sys 0m0.382s
00:06:41.936 13:40:55 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable
00:06:41.936 ************************************ END TEST bdev_verify_big_io
00:06:41.936 ************************************
00:06:41.936 13:40:55 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x
00:06:41.936 13:40:55 blockdev_nvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:06:41.936 13:40:55 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']'
00:06:41.936 13:40:55 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable
00:06:41.936 13:40:55 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:06:41.936 ************************************
00:06:41.936 START TEST bdev_write_zeroes
00:06:41.936 ************************************
00:06:41.936 13:40:55 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:06:42.194 [2024-10-15 13:40:55.746329] Starting SPDK v25.01-pre git sha1 5a8c76d99 / DPDK 24.03.0 initialization... [2024-10-15 13:40:55.746456] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60552 ]
00:06:42.194 [2024-10-15 13:40:55.897548] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:42.454 [2024-10-15 13:40:56.016272] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:43.023 Running I/O for 1 seconds...
00:06:43.959 59075.00 IOPS, 230.76 MiB/s
00:06:43.959 Latency(us)
00:06:43.959 [2024-10-15T13:40:57.747Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:06:43.959 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:06:43.959 Nvme0n1 : 1.02 9807.29 38.31 0.00 0.00 13011.12 5091.64 24197.91
00:06:43.959 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:06:43.959 Nvme1n1 : 1.03 9788.59 38.24 0.00 0.00 13012.38 8318.03 19963.27
00:06:43.959 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:06:43.959 Nvme2n1 : 1.03 9766.55 38.15 0.00 0.00 12995.64 6276.33 19963.27
00:06:43.959 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:06:43.959 Nvme2n2 : 1.03 9745.53 38.07 0.00 0.00 13001.67 6175.51 19963.27
00:06:43.959 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:06:43.959 Nvme2n3 : 1.03 9734.61 38.03 0.00 0.00 12994.36 5847.83 19358.33
00:06:43.959 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:06:43.959 Nvme3n1 : 1.03 9668.50 37.77 0.00 0.00 13065.49 5570.56 19862.45
00:06:43.959 [2024-10-15T13:40:57.747Z] ===================================================================================================================
00:06:43.959 [2024-10-15T13:40:57.747Z] Total : 58511.07 228.56 0.00 0.00 13013.39 5091.64 24197.91
00:06:44.891
00:06:44.891 real 0m2.700s
00:06:44.891 user 0m2.360s
00:06:44.891 sys 0m0.223s
00:06:44.891 13:40:58 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable
00:06:44.891 ************************************
00:06:44.891 END TEST bdev_write_zeroes
00:06:44.891 ************************************
00:06:44.891 13:40:58
blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:06:44.891 13:40:58 blockdev_nvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:44.891 13:40:58 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:06:44.891 13:40:58 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:44.891 13:40:58 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:44.891 ************************************ 00:06:44.891 START TEST bdev_json_nonenclosed 00:06:44.891 ************************************ 00:06:44.891 13:40:58 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:44.891 [2024-10-15 13:40:58.503929] Starting SPDK v25.01-pre git sha1 5a8c76d99 / DPDK 24.03.0 initialization... 00:06:44.891 [2024-10-15 13:40:58.504044] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60605 ] 00:06:44.891 [2024-10-15 13:40:58.650514] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.148 [2024-10-15 13:40:58.746463] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.148 [2024-10-15 13:40:58.746539] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:06:45.148 [2024-10-15 13:40:58.746555] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:06:45.148 [2024-10-15 13:40:58.746565] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:45.148 00:06:45.148 real 0m0.481s 00:06:45.148 user 0m0.288s 00:06:45.148 sys 0m0.089s 00:06:45.148 13:40:58 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:45.148 ************************************ 00:06:45.148 13:40:58 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:06:45.148 END TEST bdev_json_nonenclosed 00:06:45.148 ************************************ 00:06:45.406 13:40:58 blockdev_nvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:45.406 13:40:58 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:06:45.406 13:40:58 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:45.406 13:40:58 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:45.406 ************************************ 00:06:45.406 START TEST bdev_json_nonarray 00:06:45.406 ************************************ 00:06:45.406 13:40:58 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:45.406 [2024-10-15 13:40:59.050871] Starting SPDK v25.01-pre git sha1 5a8c76d99 / DPDK 24.03.0 initialization... 
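The two JSON negative tests here (bdev_json_nonenclosed above, and bdev_json_nonarray initializing at this point) hand bdevperf deliberately malformed --json configs and pass only if json_config rejects them with the errors seen in the surrounding trace: 'Invalid JSON configuration: not enclosed in {}.' and 'Invalid JSON configuration: 'subsystems' should be an array.'. The actual contents of nonenclosed.json and nonarray.json are not reproduced in this log; the fragments below are illustrative guesses at the two malformed shapes being rejected:

# nonenclosed.json (illustrative guess): top level is not enclosed in {}
"subsystems": []

# nonarray.json (illustrative guess): "subsystems" is an object, not an array
{ "subsystems": {} }

In both cases the expected outcome is the *ERROR* line from json_config_prepare_ctx followed by a non-zero app exit, which the test treats as success.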
00:06:45.406 [2024-10-15 13:40:59.051019] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60625 ] 00:06:45.664 [2024-10-15 13:40:59.198721] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.664 [2024-10-15 13:40:59.297180] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.664 [2024-10-15 13:40:59.297273] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:06:45.664 [2024-10-15 13:40:59.297290] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:06:45.664 [2024-10-15 13:40:59.297300] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:45.922 00:06:45.922 real 0m0.484s 00:06:45.922 user 0m0.285s 00:06:45.922 sys 0m0.095s 00:06:45.922 13:40:59 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:45.922 ************************************ 00:06:45.922 END TEST bdev_json_nonarray 00:06:45.922 ************************************ 00:06:45.922 13:40:59 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:06:45.922 13:40:59 blockdev_nvme -- bdev/blockdev.sh@786 -- # [[ nvme == bdev ]] 00:06:45.922 13:40:59 blockdev_nvme -- bdev/blockdev.sh@793 -- # [[ nvme == gpt ]] 00:06:45.922 13:40:59 blockdev_nvme -- bdev/blockdev.sh@797 -- # [[ nvme == crypto_sw ]] 00:06:45.922 13:40:59 blockdev_nvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:06:45.922 13:40:59 blockdev_nvme -- bdev/blockdev.sh@810 -- # cleanup 00:06:45.922 13:40:59 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:06:45.922 13:40:59 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:45.922 13:40:59 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:06:45.922 13:40:59 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:06:45.922 13:40:59 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:06:45.922 13:40:59 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:06:45.922 00:06:45.922 real 0m37.244s 00:06:45.922 user 0m57.376s 00:06:45.922 sys 0m5.237s 00:06:45.922 13:40:59 blockdev_nvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:45.922 13:40:59 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:45.922 ************************************ 00:06:45.922 END TEST blockdev_nvme 00:06:45.922 ************************************ 00:06:45.922 13:40:59 -- spdk/autotest.sh@209 -- # uname -s 00:06:45.922 13:40:59 -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]] 00:06:45.922 13:40:59 -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:06:45.922 13:40:59 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:45.922 13:40:59 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:45.922 13:40:59 -- common/autotest_common.sh@10 -- # set +x 00:06:45.922 ************************************ 00:06:45.922 START TEST blockdev_nvme_gpt 00:06:45.922 ************************************ 00:06:45.922 13:40:59 blockdev_nvme_gpt -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:06:45.922 * Looking for test storage... 
00:06:45.922 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:06:45.922 13:40:59 blockdev_nvme_gpt -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:45.922 13:40:59 blockdev_nvme_gpt -- common/autotest_common.sh@1691 -- # lcov --version 00:06:45.922 13:40:59 blockdev_nvme_gpt -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:46.182 13:40:59 blockdev_nvme_gpt -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:46.182 13:40:59 blockdev_nvme_gpt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:46.182 13:40:59 blockdev_nvme_gpt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:46.182 13:40:59 blockdev_nvme_gpt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:46.182 13:40:59 blockdev_nvme_gpt -- scripts/common.sh@336 -- # IFS=.-: 00:06:46.182 13:40:59 blockdev_nvme_gpt -- scripts/common.sh@336 -- # read -ra ver1 00:06:46.182 13:40:59 blockdev_nvme_gpt -- scripts/common.sh@337 -- # IFS=.-: 00:06:46.182 13:40:59 blockdev_nvme_gpt -- scripts/common.sh@337 -- # read -ra ver2 00:06:46.182 13:40:59 blockdev_nvme_gpt -- scripts/common.sh@338 -- # local 'op=<' 00:06:46.182 13:40:59 blockdev_nvme_gpt -- scripts/common.sh@340 -- # ver1_l=2 00:06:46.182 13:40:59 blockdev_nvme_gpt -- scripts/common.sh@341 -- # ver2_l=1 00:06:46.182 13:40:59 blockdev_nvme_gpt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:46.182 13:40:59 blockdev_nvme_gpt -- scripts/common.sh@344 -- # case "$op" in 00:06:46.182 13:40:59 blockdev_nvme_gpt -- scripts/common.sh@345 -- # : 1 00:06:46.182 13:40:59 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:46.182 13:40:59 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:46.182 13:40:59 blockdev_nvme_gpt -- scripts/common.sh@365 -- # decimal 1 00:06:46.182 13:40:59 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=1 00:06:46.182 13:40:59 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:46.182 13:40:59 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 1 00:06:46.182 13:40:59 blockdev_nvme_gpt -- scripts/common.sh@365 -- # ver1[v]=1 00:06:46.182 13:40:59 blockdev_nvme_gpt -- scripts/common.sh@366 -- # decimal 2 00:06:46.182 13:40:59 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=2 00:06:46.182 13:40:59 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:46.182 13:40:59 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 2 00:06:46.182 13:40:59 blockdev_nvme_gpt -- scripts/common.sh@366 -- # ver2[v]=2 00:06:46.182 13:40:59 blockdev_nvme_gpt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:46.182 13:40:59 blockdev_nvme_gpt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:46.182 13:40:59 blockdev_nvme_gpt -- scripts/common.sh@368 -- # return 0 00:06:46.182 13:40:59 blockdev_nvme_gpt -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:46.182 13:40:59 blockdev_nvme_gpt -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:46.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.182 --rc genhtml_branch_coverage=1 00:06:46.182 --rc genhtml_function_coverage=1 00:06:46.182 --rc genhtml_legend=1 00:06:46.182 --rc geninfo_all_blocks=1 00:06:46.182 --rc geninfo_unexecuted_blocks=1 00:06:46.182 00:06:46.182 ' 00:06:46.182 13:40:59 blockdev_nvme_gpt -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:46.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.182 --rc 
genhtml_branch_coverage=1 00:06:46.182 --rc genhtml_function_coverage=1 00:06:46.182 --rc genhtml_legend=1 00:06:46.182 --rc geninfo_all_blocks=1 00:06:46.182 --rc geninfo_unexecuted_blocks=1 00:06:46.182 00:06:46.182 ' 00:06:46.182 13:40:59 blockdev_nvme_gpt -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:46.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.182 --rc genhtml_branch_coverage=1 00:06:46.182 --rc genhtml_function_coverage=1 00:06:46.182 --rc genhtml_legend=1 00:06:46.182 --rc geninfo_all_blocks=1 00:06:46.182 --rc geninfo_unexecuted_blocks=1 00:06:46.182 00:06:46.182 ' 00:06:46.182 13:40:59 blockdev_nvme_gpt -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:46.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.182 --rc genhtml_branch_coverage=1 00:06:46.182 --rc genhtml_function_coverage=1 00:06:46.182 --rc genhtml_legend=1 00:06:46.182 --rc geninfo_all_blocks=1 00:06:46.182 --rc geninfo_unexecuted_blocks=1 00:06:46.182 00:06:46.182 ' 00:06:46.182 13:40:59 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:46.182 13:40:59 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:06:46.182 13:40:59 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:06:46.182 13:40:59 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:46.182 13:40:59 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:06:46.182 13:40:59 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:06:46.182 13:40:59 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:06:46.182 13:40:59 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:06:46.182 13:40:59 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:06:46.182 13:40:59 blockdev_nvme_gpt -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:06:46.182 13:40:59 blockdev_nvme_gpt -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:06:46.182 13:40:59 blockdev_nvme_gpt -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:06:46.182 13:40:59 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # uname -s 00:06:46.182 13:40:59 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:06:46.182 13:40:59 blockdev_nvme_gpt -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:06:46.182 13:40:59 blockdev_nvme_gpt -- bdev/blockdev.sh@681 -- # test_type=gpt 00:06:46.182 13:40:59 blockdev_nvme_gpt -- bdev/blockdev.sh@682 -- # crypto_device= 00:06:46.182 13:40:59 blockdev_nvme_gpt -- bdev/blockdev.sh@683 -- # dek= 00:06:46.182 13:40:59 blockdev_nvme_gpt -- bdev/blockdev.sh@684 -- # env_ctx= 00:06:46.182 13:40:59 blockdev_nvme_gpt -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:06:46.182 13:40:59 blockdev_nvme_gpt -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:06:46.182 13:40:59 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == bdev ]] 00:06:46.182 13:40:59 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == crypto_* ]] 00:06:46.182 13:40:59 blockdev_nvme_gpt -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:06:46.182 13:40:59 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=60709 00:06:46.182 13:40:59 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:06:46.182 13:40:59 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 60709 
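Before any gpt test runs, blockdev.sh starts spdk_tgt and blocks in waitforlisten until pid 60709 is serving RPCs. A minimal sketch of that launch-and-wait pattern, assuming the default /var/tmp/spdk.sock RPC socket — this is an illustration, not the autotest_common.sh helper itself:

    # Launch the target in the background and remember its pid.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt &
    spdk_tgt_pid=$!
    # Poll for the RPC socket. The real helper does more (it also checks the
    # process is still alive and that the socket accepts connections), with a
    # retry cap matching the max_retries=100 traced below.
    for ((i = 0; i < 100; i++)); do
        [[ -S /var/tmp/spdk.sock ]] && break
        sleep 0.1
    done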
00:06:46.182 13:40:59 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:06:46.182 13:40:59 blockdev_nvme_gpt -- common/autotest_common.sh@831 -- # '[' -z 60709 ']' 00:06:46.182 13:40:59 blockdev_nvme_gpt -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:46.182 13:40:59 blockdev_nvme_gpt -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:46.182 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:46.183 13:40:59 blockdev_nvme_gpt -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:46.183 13:40:59 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:46.183 13:40:59 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:46.183 [2024-10-15 13:40:59.820777] Starting SPDK v25.01-pre git sha1 5a8c76d99 / DPDK 24.03.0 initialization... 00:06:46.183 [2024-10-15 13:40:59.820895] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60709 ] 00:06:46.442 [2024-10-15 13:40:59.972450] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.442 [2024-10-15 13:41:00.080941] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.014 13:41:00 blockdev_nvme_gpt -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:47.014 13:41:00 blockdev_nvme_gpt -- common/autotest_common.sh@864 -- # return 0 00:06:47.014 13:41:00 blockdev_nvme_gpt -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:06:47.014 13:41:00 blockdev_nvme_gpt -- bdev/blockdev.sh@701 -- # setup_gpt_conf 00:06:47.014 13:41:00 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:47.283 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:47.576 Waiting for block devices as requested 00:06:47.576 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:06:47.576 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:06:47.837 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:06:47.837 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:06:53.124 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:06:53.124 13:41:06 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:06:53.124 13:41:06 blockdev_nvme_gpt -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:06:53.124 13:41:06 blockdev_nvme_gpt -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:06:53.124 13:41:06 blockdev_nvme_gpt -- common/autotest_common.sh@1656 -- # local nvme bdf 00:06:53.124 13:41:06 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:06:53.124 13:41:06 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:06:53.124 13:41:06 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:06:53.124 13:41:06 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:06:53.125 13:41:06 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:06:53.125 13:41:06 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:06:53.125 13:41:06 
blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:06:53.125 13:41:06 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:06:53.125 13:41:06 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:06:53.125 13:41:06 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:06:53.125 13:41:06 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:06:53.125 13:41:06 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n1 00:06:53.125 13:41:06 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:06:53.125 13:41:06 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:06:53.125 13:41:06 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:06:53.125 13:41:06 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:06:53.125 13:41:06 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n2 00:06:53.125 13:41:06 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme2n2 00:06:53.125 13:41:06 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:06:53.125 13:41:06 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:06:53.125 13:41:06 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:06:53.125 13:41:06 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n3 00:06:53.125 13:41:06 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme2n3 00:06:53.125 13:41:06 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:06:53.125 13:41:06 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:06:53.125 13:41:06 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:06:53.125 13:41:06 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3c3n1 00:06:53.125 13:41:06 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme3c3n1 00:06:53.125 13:41:06 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:06:53.125 13:41:06 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:06:53.125 13:41:06 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:06:53.125 13:41:06 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n1 00:06:53.125 13:41:06 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme3n1 00:06:53.125 13:41:06 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:06:53.125 13:41:06 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:06:53.125 13:41:06 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1') 00:06:53.125 13:41:06 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:06:53.125 13:41:06 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:06:53.125 13:41:06 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 00:06:53.125 13:41:06 
blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:06:53.125 13:41:06 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:06:53.125 13:41:06 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:06:53.125 13:41:06 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:06:53.125 BYT; 00:06:53.125 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:06:53.125 13:41:06 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:06:53.125 BYT; 00:06:53.125 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:06:53.125 13:41:06 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:06:53.125 13:41:06 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:06:53.125 13:41:06 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 00:06:53.125 13:41:06 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:06:53.125 13:41:06 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:06:53.125 13:41:06 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:06:53.125 13:41:06 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:06:53.125 13:41:06 blockdev_nvme_gpt -- scripts/common.sh@411 -- # local spdk_guid 00:06:53.125 13:41:06 blockdev_nvme_gpt -- scripts/common.sh@413 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:06:53.125 13:41:06 blockdev_nvme_gpt -- scripts/common.sh@415 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:06:53.125 13:41:06 blockdev_nvme_gpt -- scripts/common.sh@416 -- # IFS='()' 00:06:53.125 13:41:06 blockdev_nvme_gpt -- scripts/common.sh@416 -- # read -r _ spdk_guid _ 00:06:53.125 13:41:06 blockdev_nvme_gpt -- scripts/common.sh@416 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:06:53.125 13:41:06 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:06:53.125 13:41:06 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:06:53.125 13:41:06 blockdev_nvme_gpt -- scripts/common.sh@419 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:06:53.125 13:41:06 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:06:53.125 13:41:06 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:06:53.125 13:41:06 blockdev_nvme_gpt -- scripts/common.sh@423 -- # local spdk_guid 00:06:53.125 13:41:06 blockdev_nvme_gpt -- scripts/common.sh@425 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:06:53.125 13:41:06 blockdev_nvme_gpt -- scripts/common.sh@427 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:06:53.125 13:41:06 blockdev_nvme_gpt -- scripts/common.sh@428 -- # IFS='()' 00:06:53.125 13:41:06 blockdev_nvme_gpt -- scripts/common.sh@428 -- # read -r _ spdk_guid _ 00:06:53.125 13:41:06 blockdev_nvme_gpt -- scripts/common.sh@428 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:06:53.125 13:41:06 blockdev_nvme_gpt -- scripts/common.sh@429 -- # 
spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:06:53.125 13:41:06 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:06:53.125 13:41:06 blockdev_nvme_gpt -- scripts/common.sh@431 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:06:53.125 13:41:06 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:06:53.125 13:41:06 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:06:54.059 The operation has completed successfully. 00:06:54.059 13:41:07 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:06:55.050 The operation has completed successfully. 00:06:55.050 13:41:08 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:55.635 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:55.895 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:06:55.895 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:06:55.895 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:06:56.153 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:06:56.153 13:41:09 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:06:56.153 13:41:09 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:56.153 13:41:09 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:56.153 [] 00:06:56.153 13:41:09 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:56.153 13:41:09 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:06:56.153 13:41:09 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:06:56.153 13:41:09 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:06:56.153 13:41:09 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:56.153 13:41:09 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:06:56.153 13:41:09 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:56.153 13:41:09 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:56.412 13:41:10 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:56.412 13:41:10 blockdev_nvme_gpt -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:06:56.412 13:41:10 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:56.412 13:41:10 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:56.412 13:41:10 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:56.412 13:41:10 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # cat 00:06:56.412 13:41:10 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:06:56.412 13:41:10 
blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:56.412 13:41:10 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:56.412 13:41:10 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:56.412 13:41:10 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:06:56.412 13:41:10 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:56.413 13:41:10 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:56.413 13:41:10 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:56.413 13:41:10 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:06:56.413 13:41:10 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:56.413 13:41:10 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:56.413 13:41:10 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:56.413 13:41:10 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:06:56.413 13:41:10 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:06:56.413 13:41:10 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:06:56.413 13:41:10 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:56.413 13:41:10 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:56.671 13:41:10 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:56.671 13:41:10 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:06:56.671 13:41:10 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # jq -r .name 00:06:56.671 13:41:10 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "250978d4-a8ab-4e56-b561-71cb5607dd48"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "250978d4-a8ab-4e56-b561-71cb5607dd48",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' 
"num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme1n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "52d2dbec-4aa2-481f-a148-7a593a58f549"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "52d2dbec-4aa2-481f-a148-7a593a58f549",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' 
"nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "a6ca6757-3571-400c-801a-825d13a6b6f1"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "a6ca6757-3571-400c-801a-825d13a6b6f1",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "6c7375dd-8130-48c6-8f22-f93dc97d1626"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "6c7375dd-8130-48c6-8f22-f93dc97d1626",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "70862421-3c6b-47d9-ae85-7893a6f4f715"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "70862421-3c6b-47d9-ae85-7893a6f4f715",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:06:56.671 13:41:10 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:06:56.671 13:41:10 blockdev_nvme_gpt -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:06:56.671 13:41:10 blockdev_nvme_gpt -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:06:56.671 13:41:10 blockdev_nvme_gpt -- bdev/blockdev.sh@753 -- # killprocess 60709 00:06:56.671 13:41:10 blockdev_nvme_gpt -- common/autotest_common.sh@950 -- # '[' -z 60709 ']' 00:06:56.671 13:41:10 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # kill -0 60709 00:06:56.671 13:41:10 blockdev_nvme_gpt -- common/autotest_common.sh@955 -- # uname 00:06:56.671 13:41:10 blockdev_nvme_gpt -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:56.671 13:41:10 blockdev_nvme_gpt -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60709 00:06:56.671 13:41:10 blockdev_nvme_gpt -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:56.671 killing process with pid 60709 00:06:56.671 13:41:10 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:56.672 13:41:10 blockdev_nvme_gpt -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60709' 00:06:56.672 13:41:10 blockdev_nvme_gpt -- common/autotest_common.sh@969 -- # kill 60709 00:06:56.672 13:41:10 blockdev_nvme_gpt -- common/autotest_common.sh@974 -- # wait 60709 00:06:58.578 13:41:11 blockdev_nvme_gpt -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:58.578 13:41:11 blockdev_nvme_gpt -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:58.578 13:41:11 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:06:58.578 13:41:11 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:58.578 13:41:11 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:58.578 ************************************ 00:06:58.578 START TEST bdev_hello_world 00:06:58.578 ************************************ 00:06:58.578 13:41:11 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:58.578 
[2024-10-15 13:41:12.012495] Starting SPDK v25.01-pre git sha1 5a8c76d99 / DPDK 24.03.0 initialization... 00:06:58.578 [2024-10-15 13:41:12.012645] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61337 ] 00:06:58.578 [2024-10-15 13:41:12.166851] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.578 [2024-10-15 13:41:12.329817] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.523 [2024-10-15 13:41:12.959532] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:06:59.523 [2024-10-15 13:41:12.959609] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:06:59.523 [2024-10-15 13:41:12.959636] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:06:59.523 [2024-10-15 13:41:12.962674] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:06:59.523 [2024-10-15 13:41:12.964021] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:06:59.523 [2024-10-15 13:41:12.964088] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:06:59.523 [2024-10-15 13:41:12.964876] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:06:59.523 00:06:59.523 [2024-10-15 13:41:12.964923] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:07:00.092 00:07:00.092 real 0m1.736s 00:07:00.092 user 0m1.361s 00:07:00.092 sys 0m0.263s 00:07:00.092 ************************************ 00:07:00.092 END TEST bdev_hello_world 00:07:00.092 ************************************ 00:07:00.092 13:41:13 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:00.092 13:41:13 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:07:00.092 13:41:13 blockdev_nvme_gpt -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:07:00.092 13:41:13 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:00.092 13:41:13 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:00.092 13:41:13 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:00.092 ************************************ 00:07:00.092 START TEST bdev_bounds 00:07:00.092 ************************************ 00:07:00.092 13:41:13 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:07:00.092 13:41:13 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=61374 00:07:00.092 13:41:13 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:07:00.092 Process bdevio pid: 61374 00:07:00.092 13:41:13 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 61374' 00:07:00.092 13:41:13 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 61374 00:07:00.092 13:41:13 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 61374 ']' 00:07:00.092 13:41:13 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:00.092 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:00.092 13:41:13 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:00.092 13:41:13 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:00.092 13:41:13 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:00.092 13:41:13 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:00.092 13:41:13 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:00.092 [2024-10-15 13:41:13.789695] Starting SPDK v25.01-pre git sha1 5a8c76d99 / DPDK 24.03.0 initialization... 00:07:00.092 [2024-10-15 13:41:13.789798] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61374 ] 00:07:00.352 [2024-10-15 13:41:13.938177] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:00.352 [2024-10-15 13:41:14.061127] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:00.352 [2024-10-15 13:41:14.061391] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:00.352 [2024-10-15 13:41:14.061541] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.991 13:41:14 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:00.991 13:41:14 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:07:00.991 13:41:14 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:07:00.991 I/O targets: 00:07:00.991 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:07:00.991 Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:07:00.991 Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:07:00.991 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:00.991 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:00.991 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:00.991 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:07:00.991 00:07:00.991 00:07:00.991 CUnit - A unit testing framework for C - Version 2.1-3 00:07:00.991 http://cunit.sourceforge.net/ 00:07:00.991 00:07:00.991 00:07:00.991 Suite: bdevio tests on: Nvme3n1 00:07:01.253 Test: blockdev write read block ...passed 00:07:01.253 Test: blockdev write zeroes read block ...passed 00:07:01.253 Test: blockdev write zeroes read no split ...passed 00:07:01.253 Test: blockdev write zeroes read split ...passed 00:07:01.253 Test: blockdev write zeroes read split partial ...passed 00:07:01.253 Test: blockdev reset ...[2024-10-15 13:41:14.832971] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0] resetting controller 00:07:01.253 [2024-10-15 13:41:14.838718] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:01.253 passed 00:07:01.253 Test: blockdev write read 8 blocks ...passed 00:07:01.253 Test: blockdev write read size > 128k ...passed 00:07:01.253 Test: blockdev write read invalid size ...passed 00:07:01.253 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:01.253 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:01.253 Test: blockdev write read max offset ...passed 00:07:01.253 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:01.253 Test: blockdev writev readv 8 blocks ...passed 00:07:01.253 Test: blockdev writev readv 30 x 1block ...passed 00:07:01.253 Test: blockdev writev readv block ...passed 00:07:01.253 Test: blockdev writev readv size > 128k ...passed 00:07:01.253 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:01.253 Test: blockdev comparev and writev ...[2024-10-15 13:41:14.861031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b9804000 len:0x1000 00:07:01.253 [2024-10-15 13:41:14.861097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:01.253 passed 00:07:01.253 Test: blockdev nvme passthru rw ...passed 00:07:01.253 Test: blockdev nvme passthru vendor specific ...[2024-10-15 13:41:14.864003] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:01.253 [2024-10-15 13:41:14.864049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:01.253 passed 00:07:01.253 Test: blockdev nvme admin passthru ...passed 00:07:01.253 Test: blockdev copy ...passed 00:07:01.253 Suite: bdevio tests on: Nvme2n3 00:07:01.253 Test: blockdev write read block ...passed 00:07:01.253 Test: blockdev write zeroes read block ...passed 00:07:01.253 Test: blockdev write zeroes read no split ...passed 00:07:01.253 Test: blockdev write zeroes read split ...passed 00:07:01.253 Test: blockdev write zeroes read split partial ...passed 00:07:01.253 Test: blockdev reset ...[2024-10-15 13:41:14.932518] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:07:01.253 [2024-10-15 13:41:14.938189] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:01.253 passed 00:07:01.253 Test: blockdev write read 8 blocks ...passed 00:07:01.253 Test: blockdev write read size > 128k ...passed 00:07:01.253 Test: blockdev write read invalid size ...passed 00:07:01.253 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:01.253 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:01.253 Test: blockdev write read max offset ...passed 00:07:01.253 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:01.253 Test: blockdev writev readv 8 blocks ...passed 00:07:01.253 Test: blockdev writev readv 30 x 1block ...passed 00:07:01.253 Test: blockdev writev readv block ...passed 00:07:01.253 Test: blockdev writev readv size > 128k ...passed 00:07:01.253 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:01.253 Test: blockdev comparev and writev ...[2024-10-15 13:41:14.958927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b9802000 len:0x1000 00:07:01.253 [2024-10-15 13:41:14.958983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:01.253 passed 00:07:01.253 Test: blockdev nvme passthru rw ...passed 00:07:01.253 Test: blockdev nvme passthru vendor specific ...[2024-10-15 13:41:14.961538] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:01.253 [2024-10-15 13:41:14.961578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:01.253 passed 00:07:01.253 Test: blockdev nvme admin passthru ...passed 00:07:01.253 Test: blockdev copy ...passed 00:07:01.253 Suite: bdevio tests on: Nvme2n2 00:07:01.253 Test: blockdev write read block ...passed 00:07:01.253 Test: blockdev write zeroes read block ...passed 00:07:01.253 Test: blockdev write zeroes read no split ...passed 00:07:01.253 Test: blockdev write zeroes read split ...passed 00:07:01.253 Test: blockdev write zeroes read split partial ...passed 00:07:01.253 Test: blockdev reset ...[2024-10-15 13:41:15.023103] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:07:01.253 [2024-10-15 13:41:15.027739] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:01.253 passed 00:07:01.253 Test: blockdev write read 8 blocks ...passed 00:07:01.253 Test: blockdev write read size > 128k ...passed 00:07:01.253 Test: blockdev write read invalid size ...passed 00:07:01.253 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:01.253 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:01.253 Test: blockdev write read max offset ...passed 00:07:01.253 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:01.253 Test: blockdev writev readv 8 blocks ...passed 00:07:01.253 Test: blockdev writev readv 30 x 1block ...passed 00:07:01.514 Test: blockdev writev readv block ...passed 00:07:01.514 Test: blockdev writev readv size > 128k ...passed 00:07:01.514 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:01.514 Test: blockdev comparev and writev ...[2024-10-15 13:41:15.047426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d4238000 len:0x1000 00:07:01.514 [2024-10-15 13:41:15.047488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:01.514 passed 00:07:01.514 Test: blockdev nvme passthru rw ...passed 00:07:01.514 Test: blockdev nvme passthru vendor specific ...[2024-10-15 13:41:15.051074] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:01.514 [2024-10-15 13:41:15.051199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:01.514 passed 00:07:01.514 Test: blockdev nvme admin passthru ...passed 00:07:01.514 Test: blockdev copy ...passed 00:07:01.514 Suite: bdevio tests on: Nvme2n1 00:07:01.514 Test: blockdev write read block ...passed 00:07:01.514 Test: blockdev write zeroes read block ...passed 00:07:01.514 Test: blockdev write zeroes read no split ...passed 00:07:01.514 Test: blockdev write zeroes read split ...passed 00:07:01.514 Test: blockdev write zeroes read split partial ...passed 00:07:01.514 Test: blockdev reset ...[2024-10-15 13:41:15.114128] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:07:01.514 [2024-10-15 13:41:15.117914] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:01.514 passed 00:07:01.514 Test: blockdev write read 8 blocks ...passed 00:07:01.514 Test: blockdev write read size > 128k ...passed 00:07:01.514 Test: blockdev write read invalid size ...passed 00:07:01.514 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:01.514 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:01.514 Test: blockdev write read max offset ...passed 00:07:01.514 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:01.514 Test: blockdev writev readv 8 blocks ...passed 00:07:01.514 Test: blockdev writev readv 30 x 1block ...passed 00:07:01.514 Test: blockdev writev readv block ...passed 00:07:01.514 Test: blockdev writev readv size > 128k ...passed 00:07:01.514 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:01.514 Test: blockdev comparev and writev ...[2024-10-15 13:41:15.137869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d4234000 len:0x1000 00:07:01.514 [2024-10-15 13:41:15.137937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:01.514 passed 00:07:01.514 Test: blockdev nvme passthru rw ...passed 00:07:01.514 Test: blockdev nvme passthru vendor specific ...[2024-10-15 13:41:15.140441] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:01.514 [2024-10-15 13:41:15.140497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:01.514 passed 00:07:01.515 Test: blockdev nvme admin passthru ...passed 00:07:01.515 Test: blockdev copy ...passed 00:07:01.515 Suite: bdevio tests on: Nvme1n1p2 00:07:01.515 Test: blockdev write read block ...passed 00:07:01.515 Test: blockdev write zeroes read block ...passed 00:07:01.515 Test: blockdev write zeroes read no split ...passed 00:07:01.515 Test: blockdev write zeroes read split ...passed 00:07:01.515 Test: blockdev write zeroes read split partial ...passed 00:07:01.515 Test: blockdev reset ...[2024-10-15 13:41:15.201153] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller 00:07:01.515 [2024-10-15 13:41:15.205291] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:01.515 passed 00:07:01.515 Test: blockdev write read 8 blocks ...passed 00:07:01.515 Test: blockdev write read size > 128k ...passed 00:07:01.515 Test: blockdev write read invalid size ...passed 00:07:01.515 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:01.515 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:01.515 Test: blockdev write read max offset ...passed 00:07:01.515 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:01.515 Test: blockdev writev readv 8 blocks ...passed 00:07:01.515 Test: blockdev writev readv 30 x 1block ...passed 00:07:01.515 Test: blockdev writev readv block ...passed 00:07:01.515 Test: blockdev writev readv size > 128k ...passed 00:07:01.515 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:01.515 Test: blockdev comparev and writev ...[2024-10-15 13:41:15.220202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x2d4230000 len:0x1000 00:07:01.515 [2024-10-15 13:41:15.220287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:01.515 passed 00:07:01.515 Test: blockdev nvme passthru rw ...passed 00:07:01.515 Test: blockdev nvme passthru vendor specific ...passed 00:07:01.515 Test: blockdev nvme admin passthru ...passed 00:07:01.515 Test: blockdev copy ...passed 00:07:01.515 Suite: bdevio tests on: Nvme1n1p1 00:07:01.515 Test: blockdev write read block ...passed 00:07:01.515 Test: blockdev write zeroes read block ...passed 00:07:01.515 Test: blockdev write zeroes read no split ...passed 00:07:01.515 Test: blockdev write zeroes read split ...passed 00:07:01.515 Test: blockdev write zeroes read split partial ...passed 00:07:01.515 Test: blockdev reset ...[2024-10-15 13:41:15.274496] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller 00:07:01.515 [2024-10-15 13:41:15.278825] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:01.515 passed 00:07:01.515 Test: blockdev write read 8 blocks ...passed 00:07:01.515 Test: blockdev write read size > 128k ...passed 00:07:01.515 Test: blockdev write read invalid size ...passed 00:07:01.515 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:01.515 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:01.515 Test: blockdev write read max offset ...passed 00:07:01.515 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:01.515 Test: blockdev writev readv 8 blocks ...passed 00:07:01.515 Test: blockdev writev readv 30 x 1block ...passed 00:07:01.515 Test: blockdev writev readv block ...passed 00:07:01.515 Test: blockdev writev readv size > 128k ...passed 00:07:01.515 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:01.515 Test: blockdev comparev and writev ...[2024-10-15 13:41:15.297069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x2ba20e000 len:0x1000 00:07:01.515 [2024-10-15 13:41:15.297125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:01.515 passed 00:07:01.515 Test: blockdev nvme passthru rw ...passed 00:07:01.515 Test: blockdev nvme passthru vendor specific ...passed 00:07:01.515 Test: blockdev nvme admin passthru ...passed 00:07:01.515 Test: blockdev copy ...passed 00:07:01.515 Suite: bdevio tests on: Nvme0n1 00:07:01.775 Test: blockdev write read block ...passed 00:07:01.775 Test: blockdev write zeroes read block ...passed 00:07:01.775 Test: blockdev write zeroes read no split ...passed 00:07:01.775 Test: blockdev write zeroes read split ...passed 00:07:01.775 Test: blockdev write zeroes read split partial ...passed 00:07:01.775 Test: blockdev reset ...[2024-10-15 13:41:15.357276] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:07:01.775 [2024-10-15 13:41:15.361533] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:07:01.775 passed 00:07:01.775 Test: blockdev write read 8 blocks ...passed 00:07:01.775 Test: blockdev write read size > 128k ...passed 00:07:01.775 Test: blockdev write read invalid size ...passed 00:07:01.775 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:01.775 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:01.775 Test: blockdev write read max offset ...passed 00:07:01.775 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:01.775 Test: blockdev writev readv 8 blocks ...passed 00:07:01.775 Test: blockdev writev readv 30 x 1block ...passed 00:07:01.775 Test: blockdev writev readv block ...passed 00:07:01.775 Test: blockdev writev readv size > 128k ...passed 00:07:01.775 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:01.775 Test: blockdev comparev and writev ...passed 00:07:01.775 Test: blockdev nvme passthru rw ...[2024-10-15 13:41:15.378350] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:07:01.775 separate metadata which is not supported yet. 
00:07:01.775 passed 00:07:01.775 Test: blockdev nvme passthru vendor specific ...[2024-10-15 13:41:15.379886] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:07:01.775 [2024-10-15 13:41:15.379935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:07:01.775 passed 00:07:01.775 Test: blockdev nvme admin passthru ...passed 00:07:01.775 Test: blockdev copy ...passed 00:07:01.775 00:07:01.775 Run Summary: Type Total Ran Passed Failed Inactive 00:07:01.775 suites 7 7 n/a 0 0 00:07:01.775 tests 161 161 161 0 0 00:07:01.775 asserts 1025 1025 1025 0 n/a 00:07:01.775 00:07:01.775 Elapsed time = 1.555 seconds 00:07:01.775 0 00:07:01.775 13:41:15 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 61374 00:07:01.775 13:41:15 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 61374 ']' 00:07:01.775 13:41:15 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 61374 00:07:01.775 13:41:15 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:07:01.775 13:41:15 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:01.775 13:41:15 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61374 00:07:01.775 13:41:15 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:01.775 13:41:15 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:01.775 killing process with pid 61374 00:07:01.775 13:41:15 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61374' 00:07:01.775 13:41:15 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@969 -- # kill 61374 00:07:01.775 13:41:15 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@974 -- # wait 61374 00:07:02.724 13:41:16 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:07:02.724 00:07:02.724 real 0m2.442s 00:07:02.724 user 0m6.094s 00:07:02.724 sys 0m0.376s 00:07:02.724 13:41:16 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:02.724 ************************************ 00:07:02.724 END TEST bdev_bounds 00:07:02.724 ************************************ 00:07:02.724 13:41:16 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:02.724 13:41:16 blockdev_nvme_gpt -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:02.724 13:41:16 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:02.724 13:41:16 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:02.724 13:41:16 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:02.724 ************************************ 00:07:02.724 START TEST bdev_nbd 00:07:02.724 ************************************ 00:07:02.724 13:41:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:02.724 13:41:16 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:07:02.724 13:41:16 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ 
Linux == Linux ]] 00:07:02.724 13:41:16 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:02.724 13:41:16 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:02.724 13:41:16 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:02.724 13:41:16 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:07:02.724 13:41:16 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7 00:07:02.724 13:41:16 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:07:02.724 13:41:16 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:07:02.724 13:41:16 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:07:02.725 13:41:16 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7 00:07:02.725 13:41:16 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:02.725 13:41:16 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:07:02.725 13:41:16 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:02.725 13:41:16 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:07:02.725 13:41:16 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=61433 00:07:02.725 13:41:16 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:07:02.725 13:41:16 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 61433 /var/tmp/spdk-nbd.sock 00:07:02.725 13:41:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 61433 ']' 00:07:02.725 13:41:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:02.725 13:41:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:02.725 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:02.725 13:41:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:02.725 13:41:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:02.725 13:41:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:02.725 13:41:16 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:02.725 [2024-10-15 13:41:16.325642] Starting SPDK v25.01-pre git sha1 5a8c76d99 / DPDK 24.03.0 initialization... 
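In reduced form, the setup being traced here amounts to the sketch below (illustrative only; the paths, flags, and the waitforlisten/killprocess helpers are the ones visible in the trace, and the EAL parameter echo that follows belongs to this same launch):

    # Start a bare bdev-layer app on a private RPC socket, loading bdevs
    # from the JSON config used throughout the bdev_nbd test.
    /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc \
        -r /var/tmp/spdk-nbd.sock -i 0 \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json &
    nbd_pid=$!
    # Tear the app down on interrupt/exit, then block until the UNIX
    # domain socket is accepting RPCs.
    trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT
    waitforlisten "$nbd_pid" /var/tmp/spdk-nbd.sock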
00:07:02.725 [2024-10-15 13:41:16.325786] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:02.725 [2024-10-15 13:41:16.484892] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.985 [2024-10-15 13:41:16.612473] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.556 13:41:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:03.556 13:41:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:07:03.556 13:41:17 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:03.556 13:41:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:03.556 13:41:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:03.556 13:41:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:07:03.556 13:41:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:03.556 13:41:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:03.556 13:41:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:03.556 13:41:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:07:03.556 13:41:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:07:03.556 13:41:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:07:03.556 13:41:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:07:03.556 13:41:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:03.556 13:41:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:07:03.816 13:41:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:07:03.816 13:41:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:07:03.816 13:41:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:07:03.816 13:41:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:03.816 13:41:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:03.816 13:41:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:03.816 13:41:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:03.816 13:41:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:07:03.816 13:41:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:03.816 13:41:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:03.816 13:41:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:03.816 13:41:17 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:03.816 1+0 records in 00:07:03.816 1+0 records out 00:07:03.816 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000964388 s, 4.2 MB/s 00:07:03.816 13:41:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:03.816 13:41:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:03.816 13:41:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:03.816 13:41:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:03.816 13:41:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:03.816 13:41:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:03.816 13:41:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:03.816 13:41:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 00:07:04.074 13:41:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:07:04.074 13:41:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:07:04.074 13:41:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:07:04.074 13:41:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:07:04.074 13:41:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:04.074 13:41:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:04.074 13:41:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:04.074 13:41:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:07:04.074 13:41:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:04.074 13:41:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:04.074 13:41:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:04.074 13:41:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:04.074 1+0 records in 00:07:04.074 1+0 records out 00:07:04.074 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00106302 s, 3.9 MB/s 00:07:04.074 13:41:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:04.074 13:41:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:04.074 13:41:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:04.074 13:41:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:04.074 13:41:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:04.074 13:41:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:04.074 13:41:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:04.074 13:41:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme1n1p2 00:07:04.333 13:41:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:07:04.333 13:41:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:07:04.333 13:41:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:07:04.333 13:41:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd2 00:07:04.333 13:41:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:04.333 13:41:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:04.333 13:41:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:04.333 13:41:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd2 /proc/partitions 00:07:04.333 13:41:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:04.333 13:41:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:04.333 13:41:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:04.333 13:41:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:04.333 1+0 records in 00:07:04.333 1+0 records out 00:07:04.333 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00118065 s, 3.5 MB/s 00:07:04.333 13:41:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:04.333 13:41:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:04.333 13:41:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:04.333 13:41:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:04.333 13:41:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:04.333 13:41:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:04.333 13:41:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:04.333 13:41:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:07:04.592 13:41:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:07:04.592 13:41:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:07:04.592 13:41:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:07:04.592 13:41:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd3 00:07:04.592 13:41:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:04.592 13:41:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:04.592 13:41:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:04.592 13:41:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd3 /proc/partitions 00:07:04.592 13:41:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:04.592 13:41:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:04.592 13:41:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:04.592 13:41:18 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@885 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:04.592 1+0 records in 00:07:04.592 1+0 records out 00:07:04.592 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00117096 s, 3.5 MB/s 00:07:04.592 13:41:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:04.592 13:41:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:04.592 13:41:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:04.592 13:41:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:04.592 13:41:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:04.592 13:41:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:04.592 13:41:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:04.592 13:41:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:07:04.850 13:41:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:07:04.850 13:41:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:07:04.850 13:41:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:07:04.850 13:41:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd4 00:07:04.850 13:41:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:04.850 13:41:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:04.850 13:41:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:04.850 13:41:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd4 /proc/partitions 00:07:04.850 13:41:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:04.850 13:41:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:04.850 13:41:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:04.850 13:41:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:04.850 1+0 records in 00:07:04.850 1+0 records out 00:07:04.850 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00106174 s, 3.9 MB/s 00:07:04.850 13:41:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:04.850 13:41:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:04.850 13:41:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:04.850 13:41:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:04.850 13:41:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:04.850 13:41:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:04.850 13:41:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:04.850 13:41:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 
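Stripped of the xtrace noise, the start-and-probe loop in flight above and below has roughly this shape (a sketch: rpc.py abbreviates the traced /home/vagrant/spdk_repo/spdk/scripts/rpc.py, and the single grep stands in for the traced retry that polls /proc/partitions up to 20 times):

    for bdev in Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1; do
        # Export the bdev over NBD; the RPC prints the device node it allocated.
        nbd_device=$(rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk "$bdev")
        # Wait for the kernel to publish the device ...
        grep -q -w "$(basename "$nbd_device")" /proc/partitions
        # ... then prove it is readable: one 4 KiB block, O_DIRECT.
        dd if="$nbd_device" of=nbdtest bs=4096 count=1 iflag=direct
        stat -c %s nbdtest   # the harness requires a non-zero copy to have landed
        rm -f nbdtest
    done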
00:07:05.108 13:41:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:07:05.108 13:41:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:07:05.108 13:41:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:07:05.108 13:41:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd5 00:07:05.108 13:41:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:05.108 13:41:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:05.108 13:41:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:05.108 13:41:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd5 /proc/partitions 00:07:05.108 13:41:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:05.108 13:41:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:05.108 13:41:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:05.108 13:41:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:05.108 1+0 records in 00:07:05.108 1+0 records out 00:07:05.108 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00130701 s, 3.1 MB/s 00:07:05.108 13:41:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:05.108 13:41:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:05.108 13:41:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:05.108 13:41:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:05.108 13:41:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:05.108 13:41:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:05.108 13:41:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:05.109 13:41:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:07:05.109 13:41:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:07:05.109 13:41:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:07:05.109 13:41:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:07:05.109 13:41:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd6 00:07:05.109 13:41:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:05.109 13:41:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:05.109 13:41:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:05.109 13:41:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd6 /proc/partitions 00:07:05.109 13:41:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:05.109 13:41:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:05.109 13:41:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:05.109 13:41:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd 
if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:05.109 1+0 records in 00:07:05.109 1+0 records out 00:07:05.109 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00105826 s, 3.9 MB/s 00:07:05.109 13:41:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:05.369 13:41:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:05.369 13:41:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:05.370 13:41:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:05.370 13:41:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:05.370 13:41:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:05.370 13:41:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:05.370 13:41:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:05.370 13:41:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:07:05.370 { 00:07:05.370 "nbd_device": "/dev/nbd0", 00:07:05.370 "bdev_name": "Nvme0n1" 00:07:05.370 }, 00:07:05.370 { 00:07:05.370 "nbd_device": "/dev/nbd1", 00:07:05.370 "bdev_name": "Nvme1n1p1" 00:07:05.370 }, 00:07:05.370 { 00:07:05.370 "nbd_device": "/dev/nbd2", 00:07:05.370 "bdev_name": "Nvme1n1p2" 00:07:05.370 }, 00:07:05.370 { 00:07:05.370 "nbd_device": "/dev/nbd3", 00:07:05.370 "bdev_name": "Nvme2n1" 00:07:05.370 }, 00:07:05.370 { 00:07:05.370 "nbd_device": "/dev/nbd4", 00:07:05.370 "bdev_name": "Nvme2n2" 00:07:05.370 }, 00:07:05.370 { 00:07:05.370 "nbd_device": "/dev/nbd5", 00:07:05.370 "bdev_name": "Nvme2n3" 00:07:05.370 }, 00:07:05.370 { 00:07:05.370 "nbd_device": "/dev/nbd6", 00:07:05.370 "bdev_name": "Nvme3n1" 00:07:05.370 } 00:07:05.370 ]' 00:07:05.370 13:41:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:07:05.370 13:41:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:07:05.370 { 00:07:05.370 "nbd_device": "/dev/nbd0", 00:07:05.370 "bdev_name": "Nvme0n1" 00:07:05.370 }, 00:07:05.370 { 00:07:05.370 "nbd_device": "/dev/nbd1", 00:07:05.370 "bdev_name": "Nvme1n1p1" 00:07:05.370 }, 00:07:05.370 { 00:07:05.370 "nbd_device": "/dev/nbd2", 00:07:05.370 "bdev_name": "Nvme1n1p2" 00:07:05.370 }, 00:07:05.370 { 00:07:05.370 "nbd_device": "/dev/nbd3", 00:07:05.370 "bdev_name": "Nvme2n1" 00:07:05.370 }, 00:07:05.370 { 00:07:05.370 "nbd_device": "/dev/nbd4", 00:07:05.370 "bdev_name": "Nvme2n2" 00:07:05.370 }, 00:07:05.370 { 00:07:05.370 "nbd_device": "/dev/nbd5", 00:07:05.370 "bdev_name": "Nvme2n3" 00:07:05.370 }, 00:07:05.370 { 00:07:05.370 "nbd_device": "/dev/nbd6", 00:07:05.370 "bdev_name": "Nvme3n1" 00:07:05.370 } 00:07:05.370 ]' 00:07:05.370 13:41:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:07:05.370 13:41:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:07:05.370 13:41:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:05.370 13:41:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' 
'/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:07:05.370 13:41:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:05.370 13:41:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:05.370 13:41:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:05.370 13:41:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:05.628 13:41:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:05.628 13:41:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:05.628 13:41:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:05.628 13:41:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:05.628 13:41:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:05.628 13:41:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:05.628 13:41:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:05.628 13:41:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:05.628 13:41:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:05.628 13:41:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:05.885 13:41:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:05.885 13:41:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:05.885 13:41:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:05.885 13:41:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:05.885 13:41:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:05.885 13:41:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:05.885 13:41:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:05.886 13:41:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:05.886 13:41:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:05.886 13:41:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:07:06.155 13:41:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:07:06.155 13:41:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:07:06.155 13:41:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:07:06.155 13:41:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:06.155 13:41:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:06.155 13:41:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:07:06.155 13:41:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:06.155 13:41:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:06.155 13:41:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:06.155 13:41:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:07:06.442 13:41:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:07:06.442 13:41:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:07:06.442 13:41:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:07:06.442 13:41:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:06.442 13:41:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:06.442 13:41:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:07:06.442 13:41:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:06.442 13:41:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:06.442 13:41:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:06.442 13:41:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:07:06.442 13:41:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:07:06.442 13:41:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:07:06.442 13:41:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:07:06.442 13:41:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:06.442 13:41:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:06.442 13:41:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:07:06.442 13:41:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:06.442 13:41:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:06.442 13:41:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:06.442 13:41:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:07:06.701 13:41:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:07:06.701 13:41:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:07:06.701 13:41:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:07:06.701 13:41:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:06.701 13:41:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:06.701 13:41:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:07:06.701 13:41:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:06.701 13:41:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:06.701 13:41:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:06.701 13:41:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:07:06.961 13:41:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:07:06.961 13:41:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:07:06.961 13:41:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 00:07:06.961 13:41:20 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:06.961 13:41:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:06.961 13:41:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:07:06.961 13:41:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:06.961 13:41:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:06.961 13:41:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:06.961 13:41:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:06.961 13:41:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:07.222 13:41:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:07.222 13:41:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:07.222 13:41:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:07.222 13:41:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:07.222 13:41:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:07.222 13:41:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:07.222 13:41:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:07.222 13:41:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:07.222 13:41:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:07.222 13:41:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:07:07.222 13:41:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:07:07.222 13:41:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:07:07.222 13:41:20 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:07.222 13:41:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:07.222 13:41:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:07.222 13:41:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:07.222 13:41:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:07.222 13:41:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:07.222 13:41:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:07.222 13:41:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:07.222 13:41:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:07.222 13:41:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:07.222 13:41:20 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:07.222 13:41:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:07.222 13:41:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:07:07.222 13:41:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:07.222 13:41:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:07.222 13:41:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:07:07.483 /dev/nbd0 00:07:07.483 13:41:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:07.483 13:41:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:07.483 13:41:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:07.483 13:41:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:07.483 13:41:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:07.483 13:41:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:07.483 13:41:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:07:07.483 13:41:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:07.483 13:41:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:07.483 13:41:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:07.483 13:41:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:07.483 1+0 records in 00:07:07.483 1+0 records out 00:07:07.483 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000996719 s, 4.1 MB/s 00:07:07.483 13:41:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:07.483 13:41:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:07.483 13:41:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:07.483 13:41:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:07.483 13:41:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:07.483 13:41:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:07.483 13:41:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:07.483 13:41:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1 00:07:07.745 /dev/nbd1 00:07:07.745 13:41:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:07.745 13:41:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:07.745 13:41:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:07:07.745 13:41:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:07.745 13:41:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:07.745 13:41:21 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:07.745 13:41:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:07:07.745 13:41:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:07.745 13:41:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:07.745 13:41:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:07.745 13:41:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:07.745 1+0 records in 00:07:07.745 1+0 records out 00:07:07.745 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000775543 s, 5.3 MB/s 00:07:07.745 13:41:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:07.745 13:41:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:07.745 13:41:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:07.745 13:41:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:07.745 13:41:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:07.745 13:41:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:07.745 13:41:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:07.745 13:41:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10 00:07:08.006 /dev/nbd10 00:07:08.006 13:41:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:07:08.006 13:41:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:07:08.006 13:41:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd10 00:07:08.006 13:41:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:08.006 13:41:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:08.006 13:41:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:08.006 13:41:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd10 /proc/partitions 00:07:08.006 13:41:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:08.006 13:41:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:08.006 13:41:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:08.006 13:41:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:08.006 1+0 records in 00:07:08.006 1+0 records out 00:07:08.006 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00124031 s, 3.3 MB/s 00:07:08.006 13:41:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:08.006 13:41:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:08.006 13:41:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:08.006 13:41:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 
'!=' 0 ']' 00:07:08.006 13:41:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:08.007 13:41:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:08.007 13:41:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:08.007 13:41:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:07:08.007 /dev/nbd11 00:07:08.268 13:41:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:07:08.268 13:41:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:07:08.268 13:41:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd11 00:07:08.268 13:41:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:08.268 13:41:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:08.268 13:41:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:08.268 13:41:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd11 /proc/partitions 00:07:08.268 13:41:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:08.268 13:41:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:08.268 13:41:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:08.268 13:41:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:08.268 1+0 records in 00:07:08.268 1+0 records out 00:07:08.268 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00105633 s, 3.9 MB/s 00:07:08.268 13:41:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:08.268 13:41:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:08.268 13:41:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:08.268 13:41:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:08.268 13:41:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:08.268 13:41:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:08.268 13:41:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:08.268 13:41:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:07:08.268 /dev/nbd12 00:07:08.268 13:41:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:07:08.268 13:41:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:07:08.268 13:41:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd12 00:07:08.268 13:41:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:08.268 13:41:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:08.268 13:41:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:08.268 13:41:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd12 /proc/partitions 00:07:08.268 13:41:22 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:08.268 13:41:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:08.268 13:41:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:08.268 13:41:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:08.530 1+0 records in 00:07:08.530 1+0 records out 00:07:08.530 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00124657 s, 3.3 MB/s 00:07:08.530 13:41:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:08.530 13:41:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:08.530 13:41:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:08.530 13:41:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:08.530 13:41:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:08.530 13:41:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:08.530 13:41:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:08.530 13:41:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:07:08.530 /dev/nbd13 00:07:08.530 13:41:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:07:08.530 13:41:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:07:08.530 13:41:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd13 00:07:08.530 13:41:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:08.530 13:41:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:08.530 13:41:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:08.530 13:41:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd13 /proc/partitions 00:07:08.530 13:41:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:08.530 13:41:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:08.530 13:41:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:08.530 13:41:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:08.530 1+0 records in 00:07:08.530 1+0 records out 00:07:08.530 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00138552 s, 3.0 MB/s 00:07:08.530 13:41:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:08.791 13:41:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:08.791 13:41:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:08.791 13:41:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:08.791 13:41:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:08.791 13:41:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 
00:07:08.791 13:41:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:08.791 13:41:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:07:08.791 /dev/nbd14 00:07:08.791 13:41:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:07:08.791 13:41:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:07:08.791 13:41:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd14 00:07:08.791 13:41:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:08.791 13:41:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:08.791 13:41:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:08.791 13:41:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd14 /proc/partitions 00:07:08.791 13:41:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:08.791 13:41:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:08.791 13:41:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:08.791 13:41:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:08.791 1+0 records in 00:07:08.791 1+0 records out 00:07:08.791 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000908532 s, 4.5 MB/s 00:07:08.791 13:41:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:08.791 13:41:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:08.792 13:41:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:08.792 13:41:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:08.792 13:41:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:08.792 13:41:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:08.792 13:41:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:08.792 13:41:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:08.792 13:41:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:09.053 13:41:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:09.053 13:41:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:09.053 { 00:07:09.053 "nbd_device": "/dev/nbd0", 00:07:09.053 "bdev_name": "Nvme0n1" 00:07:09.053 }, 00:07:09.053 { 00:07:09.053 "nbd_device": "/dev/nbd1", 00:07:09.053 "bdev_name": "Nvme1n1p1" 00:07:09.053 }, 00:07:09.053 { 00:07:09.053 "nbd_device": "/dev/nbd10", 00:07:09.053 "bdev_name": "Nvme1n1p2" 00:07:09.053 }, 00:07:09.053 { 00:07:09.053 "nbd_device": "/dev/nbd11", 00:07:09.053 "bdev_name": "Nvme2n1" 00:07:09.053 }, 00:07:09.053 { 00:07:09.053 "nbd_device": "/dev/nbd12", 00:07:09.053 "bdev_name": "Nvme2n2" 00:07:09.053 }, 00:07:09.053 { 00:07:09.053 "nbd_device": "/dev/nbd13", 00:07:09.053 "bdev_name": "Nvme2n3" 00:07:09.053 }, 00:07:09.053 { 
00:07:09.054 "nbd_device": "/dev/nbd14", 00:07:09.054 "bdev_name": "Nvme3n1" 00:07:09.054 } 00:07:09.054 ]' 00:07:09.054 13:41:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:09.054 13:41:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:09.054 { 00:07:09.054 "nbd_device": "/dev/nbd0", 00:07:09.054 "bdev_name": "Nvme0n1" 00:07:09.054 }, 00:07:09.054 { 00:07:09.054 "nbd_device": "/dev/nbd1", 00:07:09.054 "bdev_name": "Nvme1n1p1" 00:07:09.054 }, 00:07:09.054 { 00:07:09.054 "nbd_device": "/dev/nbd10", 00:07:09.054 "bdev_name": "Nvme1n1p2" 00:07:09.054 }, 00:07:09.054 { 00:07:09.054 "nbd_device": "/dev/nbd11", 00:07:09.054 "bdev_name": "Nvme2n1" 00:07:09.054 }, 00:07:09.054 { 00:07:09.054 "nbd_device": "/dev/nbd12", 00:07:09.054 "bdev_name": "Nvme2n2" 00:07:09.054 }, 00:07:09.054 { 00:07:09.054 "nbd_device": "/dev/nbd13", 00:07:09.054 "bdev_name": "Nvme2n3" 00:07:09.054 }, 00:07:09.054 { 00:07:09.054 "nbd_device": "/dev/nbd14", 00:07:09.054 "bdev_name": "Nvme3n1" 00:07:09.054 } 00:07:09.054 ]' 00:07:09.054 13:41:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:09.054 /dev/nbd1 00:07:09.054 /dev/nbd10 00:07:09.054 /dev/nbd11 00:07:09.054 /dev/nbd12 00:07:09.054 /dev/nbd13 00:07:09.054 /dev/nbd14' 00:07:09.054 13:41:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:09.054 13:41:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:09.054 /dev/nbd1 00:07:09.054 /dev/nbd10 00:07:09.054 /dev/nbd11 00:07:09.054 /dev/nbd12 00:07:09.054 /dev/nbd13 00:07:09.054 /dev/nbd14' 00:07:09.054 13:41:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:07:09.054 13:41:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:07:09.054 13:41:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:07:09.054 13:41:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:07:09.054 13:41:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:07:09.054 13:41:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:09.054 13:41:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:09.054 13:41:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:09.054 13:41:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:09.054 13:41:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:09.054 13:41:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:07:09.054 256+0 records in 00:07:09.054 256+0 records out 00:07:09.054 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00472295 s, 222 MB/s 00:07:09.315 13:41:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:09.315 13:41:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:09.576 256+0 records in 00:07:09.576 256+0 records out 00:07:09.576 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.270673 s, 3.9 MB/s 00:07:09.576 
13:41:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:09.576 13:41:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:09.837 256+0 records in 00:07:09.837 256+0 records out 00:07:09.837 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.263859 s, 4.0 MB/s 00:07:09.837 13:41:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:09.837 13:41:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:07:09.837 256+0 records in 00:07:09.837 256+0 records out 00:07:09.837 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.209377 s, 5.0 MB/s 00:07:09.837 13:41:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:09.837 13:41:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:07:10.098 256+0 records in 00:07:10.098 256+0 records out 00:07:10.098 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.273612 s, 3.8 MB/s 00:07:10.098 13:41:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:10.098 13:41:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:07:10.671 256+0 records in 00:07:10.671 256+0 records out 00:07:10.671 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.278469 s, 3.8 MB/s 00:07:10.671 13:41:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:10.671 13:41:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:07:10.671 256+0 records in 00:07:10.671 256+0 records out 00:07:10.671 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.280949 s, 3.7 MB/s 00:07:10.671 13:41:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:10.671 13:41:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:07:10.933 256+0 records in 00:07:10.933 256+0 records out 00:07:10.933 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.233637 s, 4.5 MB/s 00:07:10.933 13:41:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:07:10.933 13:41:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:10.933 13:41:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:10.933 13:41:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:10.933 13:41:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:10.933 13:41:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:10.933 13:41:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:10.933 13:41:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:10.933 13:41:24 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:07:10.933 13:41:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:10.933 13:41:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:07:10.933 13:41:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:10.933 13:41:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:07:10.933 13:41:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:10.933 13:41:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:07:10.933 13:41:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:10.933 13:41:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:07:11.197 13:41:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:11.197 13:41:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:07:11.197 13:41:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:11.197 13:41:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:07:11.197 13:41:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:11.197 13:41:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:11.197 13:41:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:11.197 13:41:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:11.197 13:41:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:11.197 13:41:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:11.197 13:41:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:11.197 13:41:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:11.197 13:41:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:11.197 13:41:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:11.197 13:41:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:11.197 13:41:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:11.197 13:41:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:11.197 13:41:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:11.197 13:41:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:11.197 13:41:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:11.197 
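The verify half and the teardown are symmetric: cmp -b -n 1M compares the first mebibyte of every device byte-for-byte against the reference file, the scratch file is removed, and each export is then stopped over the RPC socket, with waitfornbd_exit polling /proc/partitions until the kernel has really dropped the node (up to 20 attempts; the sleep between attempts is not visible in the xtrace, so the interval below is an assumption):

    nbd_list=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14)
    tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # verify pass: any flipped byte makes cmp exit non-zero and fails the test
    for i in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp_file" "$i"
    done
    rm "$tmp_file"

    # stop each export, then wait for the device to leave the partition table
    for i in "${nbd_list[@]}"; do
        "$rpc" -s /var/tmp/spdk-nbd.sock nbd_stop_disk "$i"
        name=$(basename "$i")
        for (( j = 1; j <= 20; j++ )); do
            grep -q -w "$name" /proc/partitions || break
            sleep 0.1   # interval assumed; only the retry bound of 20 is visible above
        done
    done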
13:41:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:11.197 13:41:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:11.458 13:41:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:11.458 13:41:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:11.458 13:41:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:11.458 13:41:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:11.458 13:41:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:11.458 13:41:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:11.458 13:41:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:11.458 13:41:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:11.458 13:41:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:11.458 13:41:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:07:11.720 13:41:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:07:11.720 13:41:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:07:11.720 13:41:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:07:11.720 13:41:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:11.720 13:41:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:11.720 13:41:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:07:11.720 13:41:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:11.720 13:41:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:11.720 13:41:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:11.720 13:41:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:07:11.981 13:41:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:07:11.981 13:41:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:07:11.981 13:41:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:07:11.981 13:41:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:11.981 13:41:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:11.981 13:41:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:07:11.981 13:41:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:11.981 13:41:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:11.981 13:41:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:11.981 13:41:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:07:12.242 13:41:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:07:12.242 13:41:25 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:07:12.242 13:41:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:07:12.242 13:41:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:12.242 13:41:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:12.242 13:41:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:07:12.242 13:41:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:12.242 13:41:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:12.242 13:41:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:12.242 13:41:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:07:12.503 13:41:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:07:12.503 13:41:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:07:12.503 13:41:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:07:12.503 13:41:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:12.503 13:41:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:12.503 13:41:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:07:12.503 13:41:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:12.503 13:41:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:12.503 13:41:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:12.503 13:41:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:07:12.764 13:41:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:07:12.764 13:41:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:07:12.764 13:41:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:07:12.764 13:41:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:12.764 13:41:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:12.764 13:41:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:07:12.764 13:41:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:12.764 13:41:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:12.764 13:41:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:12.764 13:41:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:12.764 13:41:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:13.026 13:41:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:13.026 13:41:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:13.026 13:41:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:13.026 13:41:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:13.026 
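With every disk stopped, nbd_get_count double-checks that nothing leaked: nbd_get_disks now returns an empty JSON array, jq extracts zero device paths from it, and grep -c counts them. Note the bare true in the xtrace just below: grep -c exits non-zero when it matches nothing, so the helper needs a guard of roughly this shape to survive under set -e (the exact `|| true` spelling is an assumption):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock

    disks_json=$("$rpc" -s "$sock" nbd_get_disks)                   # '[]' at this point
    disks_name=$(echo "$disks_json" | jq -r '.[] | .nbd_device')    # empty string
    count=$(echo "$disks_name" | grep -c /dev/nbd || true)          # 0, despite grep's exit 1

    if [ "$count" -ne 0 ]; then
        echo "nbd devices still attached: $disks_name" >&2
        exit 1
    fi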
13:41:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:13.026 13:41:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:13.026 13:41:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:13.026 13:41:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:13.026 13:41:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:13.026 13:41:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:07:13.026 13:41:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:13.026 13:41:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:07:13.026 13:41:26 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:13.026 13:41:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:13.026 13:41:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:07:13.026 13:41:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:07:13.287 malloc_lvol_verify 00:07:13.287 13:41:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:07:13.287 950b58f6-651e-4a37-8942-be88246ed5c7 00:07:13.549 13:41:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:07:13.549 ac725ef5-bd67-4838-98f6-e1aaac222f26 00:07:13.549 13:41:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:07:13.807 /dev/nbd0 00:07:13.807 13:41:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:07:13.807 13:41:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:07:13.807 13:41:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:07:13.807 13:41:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:07:13.807 13:41:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:07:13.807 mke2fs 1.47.0 (5-Feb-2023) 00:07:13.807 Discarding device blocks: 0/4096 done 00:07:13.807 Creating filesystem with 4096 1k blocks and 1024 inodes 00:07:13.807 00:07:13.807 Allocating group tables: 0/1 done 00:07:13.807 Writing inode tables: 0/1 done 00:07:13.807 Creating journal (1024 blocks): done 00:07:13.807 Writing superblocks and filesystem accounting information: 0/1 done 00:07:13.807 00:07:13.807 13:41:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:13.807 13:41:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:13.807 13:41:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:13.807 13:41:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:13.807 13:41:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:13.807 13:41:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:13.807 13:41:27 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:14.066 13:41:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:14.066 13:41:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:14.066 13:41:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:14.066 13:41:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:14.066 13:41:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:14.066 13:41:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:14.066 13:41:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:14.066 13:41:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:14.066 13:41:27 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 61433 00:07:14.066 13:41:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 61433 ']' 00:07:14.066 13:41:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 61433 00:07:14.066 13:41:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:07:14.066 13:41:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:14.066 13:41:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61433 00:07:14.066 killing process with pid 61433 00:07:14.066 13:41:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:14.066 13:41:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:14.066 13:41:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61433' 00:07:14.066 13:41:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@969 -- # kill 61433 00:07:14.066 13:41:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@974 -- # wait 61433 00:07:15.008 13:41:28 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:07:15.008 00:07:15.008 real 0m12.322s 00:07:15.008 user 0m16.464s 00:07:15.008 sys 0m4.249s 00:07:15.008 13:41:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:15.008 ************************************ 00:07:15.008 END TEST bdev_nbd 00:07:15.008 ************************************ 00:07:15.008 13:41:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:15.008 13:41:28 blockdev_nvme_gpt -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:07:15.008 13:41:28 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = nvme ']' 00:07:15.008 skipping fio tests on NVMe due to multi-ns failures. 00:07:15.008 13:41:28 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = gpt ']' 00:07:15.008 13:41:28 blockdev_nvme_gpt -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
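Before bdev_nbd wraps up, nbd_with_lvol_verify runs one more end-to-end check: an lvol carved out of a malloc bdev is exported as /dev/nbd0 and has to be real enough for mkfs.ext4 to format it. The flow, reconstructed from the RPC calls above (sizes exactly as logged: a 16 MiB malloc bdev with 512-byte blocks and a 4 MiB lvol, hence the 8192-sector capacity and the 4096 1k-block filesystem):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock

    "$rpc" -s "$sock" bdev_malloc_create -b malloc_lvol_verify 16 512   # 16 MiB, 512 B blocks
    "$rpc" -s "$sock" bdev_lvol_create_lvstore malloc_lvol_verify lvs
    "$rpc" -s "$sock" bdev_lvol_create lvol 4 -l lvs                    # 4 MiB logical volume
    "$rpc" -s "$sock" nbd_start_disk lvs/lvol /dev/nbd0

    # don't touch the device until the kernel reports a non-zero capacity
    while [[ ! -e /sys/block/nbd0/size ]] || (( $(< /sys/block/nbd0/size) == 0 )); do
        sleep 0.1   # retry details assumed; the log only shows the size check itself
    done

    mkfs.ext4 /dev/nbd0    # 8192 x 512 B sectors -> the 4 MiB filesystem created above

After the format succeeds the export is torn down as before and the app is killed via killprocess, whose uname and ps checks in the xtrace apparently confirm the pid still belongs to an SPDK reactor before a signal is sent.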
00:07:15.008 13:41:28 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:15.008 13:41:28 blockdev_nvme_gpt -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:15.008 13:41:28 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:07:15.008 13:41:28 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:15.008 13:41:28 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:15.008 ************************************ 00:07:15.008 START TEST bdev_verify 00:07:15.008 ************************************ 00:07:15.008 13:41:28 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:15.009 [2024-10-15 13:41:28.695746] Starting SPDK v25.01-pre git sha1 5a8c76d99 / DPDK 24.03.0 initialization... 00:07:15.009 [2024-10-15 13:41:28.695864] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61862 ] 00:07:15.268 [2024-10-15 13:41:28.839959] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:15.268 [2024-10-15 13:41:28.960335] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.268 [2024-10-15 13:41:28.960347] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:15.892 Running I/O for 5 seconds... 
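The bdev_verify stage that just started is a plain bdevperf run; everything that matters is on the command line captured above: 128 outstanding I/Os per job (-q), 4 KiB I/Os (-o), the self-checking verify workload (-w), five seconds (-t), and a two-core reactor mask (-m 0x3). The -C flag evidently fans a job out to every core for every bdev, which is why each device appears twice in the result table, once per core mask; treat that reading as inferred from the output rather than quoted from the docs. An equivalent standalone invocation, with a minimal config skeleton (the test's real bdev.json is generated elsewhere and never shown in this log):

    # top-level shape of an SPDK JSON config: a "subsystems" array; the real file
    # fills "config" with bdev attach/create entries that are not in this log
    echo '{ "subsystems": [ { "subsystem": "bdev", "config": [] } ] }' > /tmp/bdev.json

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /tmp/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
    # the trailing '' mirrors the harness invocation verbatim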
00:07:18.219 17024.00 IOPS, 66.50 MiB/s [2024-10-15T13:41:32.949Z] 17600.00 IOPS, 68.75 MiB/s [2024-10-15T13:41:33.890Z] 17642.67 IOPS, 68.92 MiB/s [2024-10-15T13:41:34.834Z] 17424.00 IOPS, 68.06 MiB/s [2024-10-15T13:41:34.834Z] 17523.20 IOPS, 68.45 MiB/s 00:07:21.046 Latency(us) 00:07:21.046 [2024-10-15T13:41:34.834Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:21.046 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:21.046 Verification LBA range: start 0x0 length 0xbd0bd 00:07:21.046 Nvme0n1 : 5.08 1196.30 4.67 0.00 0.00 106378.43 13812.97 86709.17 00:07:21.046 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:21.046 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:07:21.046 Nvme0n1 : 5.06 1264.27 4.94 0.00 0.00 100973.62 23794.61 81869.59 00:07:21.046 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:21.046 Verification LBA range: start 0x0 length 0x4ff80 00:07:21.046 Nvme1n1p1 : 5.10 1203.54 4.70 0.00 0.00 105990.94 17341.83 88322.36 00:07:21.046 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:21.046 Verification LBA range: start 0x4ff80 length 0x4ff80 00:07:21.046 Nvme1n1p1 : 5.06 1263.83 4.94 0.00 0.00 100838.71 22786.36 75820.11 00:07:21.046 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:21.046 Verification LBA range: start 0x0 length 0x4ff7f 00:07:21.046 Nvme1n1p2 : 5.11 1202.74 4.70 0.00 0.00 105768.47 19156.68 85499.27 00:07:21.046 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:21.046 Verification LBA range: start 0x4ff7f length 0x4ff7f 00:07:21.046 Nvme1n1p2 : 5.07 1263.39 4.94 0.00 0.00 100608.87 22584.71 76626.71 00:07:21.046 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:21.046 Verification LBA range: start 0x0 length 0x80000 00:07:21.046 Nvme2n1 : 5.11 1202.39 4.70 0.00 0.00 105651.77 19055.85 79449.80 00:07:21.046 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:21.046 Verification LBA range: start 0x80000 length 0x80000 00:07:21.046 Nvme2n1 : 5.07 1262.97 4.93 0.00 0.00 100418.45 20769.87 75416.81 00:07:21.046 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:21.046 Verification LBA range: start 0x0 length 0x80000 00:07:21.046 Nvme2n2 : 5.11 1202.05 4.70 0.00 0.00 105528.35 18854.20 81466.29 00:07:21.046 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:21.046 Verification LBA range: start 0x80000 length 0x80000 00:07:21.046 Nvme2n2 : 5.07 1262.54 4.93 0.00 0.00 100275.84 20265.75 79046.50 00:07:21.046 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:21.046 Verification LBA range: start 0x0 length 0x80000 00:07:21.046 Nvme2n3 : 5.11 1201.70 4.69 0.00 0.00 105381.69 19055.85 85499.27 00:07:21.046 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:21.046 Verification LBA range: start 0x80000 length 0x80000 00:07:21.046 Nvme2n3 : 5.08 1272.44 4.97 0.00 0.00 99410.38 4839.58 80659.69 00:07:21.046 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:21.046 Verification LBA range: start 0x0 length 0x20000 00:07:21.046 Nvme3n1 : 5.11 1201.36 4.69 0.00 0.00 105256.17 19257.50 87919.06 00:07:21.046 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:21.046 Verification LBA range: start 0x20000 length 0x20000 
00:07:21.046 Nvme3n1 : 5.08 1271.99 4.97 0.00 0.00 99308.56 4360.66 83482.78 00:07:21.046 [2024-10-15T13:41:34.834Z] =================================================================================================================== 00:07:21.046 [2024-10-15T13:41:34.834Z] Total : 17271.52 67.47 0.00 0.00 102921.73 4360.66 88322.36 00:07:22.481 ************************************ 00:07:22.481 END TEST bdev_verify 00:07:22.481 ************************************ 00:07:22.481 00:07:22.481 real 0m7.259s 00:07:22.481 user 0m13.452s 00:07:22.481 sys 0m0.275s 00:07:22.481 13:41:35 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:22.481 13:41:35 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:07:22.481 13:41:35 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:22.481 13:41:35 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:07:22.481 13:41:35 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:22.481 13:41:35 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:22.481 ************************************ 00:07:22.481 START TEST bdev_verify_big_io 00:07:22.481 ************************************ 00:07:22.481 13:41:35 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:22.481 [2024-10-15 13:41:36.034927] Starting SPDK v25.01-pre git sha1 5a8c76d99 / DPDK 24.03.0 initialization... 00:07:22.481 [2024-10-15 13:41:36.035090] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61960 ] 00:07:22.481 [2024-10-15 13:41:36.192815] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:22.742 [2024-10-15 13:41:36.316911] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:22.742 [2024-10-15 13:41:36.317044] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.314 Running I/O for 5 seconds... 
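The throughput column in these tables is nothing more than IOPS times I/O size. The big-I/O run now underway uses -o 65536, so its first sample of 885 IOPS is exactly the 55.31 MiB/s printed beside it; the same identity holds for the 4 KiB verify run above (17024 x 4096 / 1048576 = 66.50). A one-line check:

    # 885 IOPS at 64 KiB per I/O, expressed in MiB/s
    echo 'scale=2; 885 * 65536 / 1048576' | bc    # prints 55.31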
00:07:27.549 885.00 IOPS, 55.31 MiB/s [2024-10-15T13:41:43.238Z] 1878.00 IOPS, 117.38 MiB/s [2024-10-15T13:41:43.495Z] 2242.33 IOPS, 140.15 MiB/s 00:07:29.707 Latency(us) 00:07:29.707 [2024-10-15T13:41:43.495Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:29.707 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:29.707 Verification LBA range: start 0x0 length 0xbd0b 00:07:29.707 Nvme0n1 : 6.03 94.70 5.92 0.00 0.00 1259816.90 18955.03 1380893.93 00:07:29.707 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:29.707 Verification LBA range: start 0xbd0b length 0xbd0b 00:07:29.707 Nvme0n1 : 5.93 91.81 5.74 0.00 0.00 1323550.49 32667.18 1290555.08 00:07:29.707 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:29.707 Verification LBA range: start 0x0 length 0x4ff8 00:07:29.707 Nvme1n1p1 : 6.13 100.54 6.28 0.00 0.00 1177484.85 98808.12 1103424.59 00:07:29.707 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:29.707 Verification LBA range: start 0x4ff8 length 0x4ff8 00:07:29.707 Nvme1n1p1 : 6.03 101.30 6.33 0.00 0.00 1178424.90 55655.19 1155046.79 00:07:29.707 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:29.707 Verification LBA range: start 0x0 length 0x4ff7 00:07:29.707 Nvme1n1p2 : 6.13 99.69 6.23 0.00 0.00 1143332.00 110503.78 1090519.04 00:07:29.707 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:29.707 Verification LBA range: start 0x4ff7 length 0x4ff7 00:07:29.707 Nvme1n1p2 : 6.03 102.43 6.40 0.00 0.00 1130600.25 56058.49 1025991.29 00:07:29.707 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:29.707 Verification LBA range: start 0x0 length 0x8000 00:07:29.707 Nvme2n1 : 6.15 103.23 6.45 0.00 0.00 1085768.69 95178.44 1096971.82 00:07:29.707 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:29.707 Verification LBA range: start 0x8000 length 0x8000 00:07:29.707 Nvme2n1 : 6.04 105.70 6.61 0.00 0.00 1071883.87 54848.59 1167952.34 00:07:29.707 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:29.707 Verification LBA range: start 0x0 length 0x8000 00:07:29.707 Nvme2n2 : 6.13 95.39 5.96 0.00 0.00 1131717.53 88322.36 2039077.02 00:07:29.707 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:29.707 Verification LBA range: start 0x8000 length 0x8000 00:07:29.707 Nvme2n2 : 6.04 105.99 6.62 0.00 0.00 1033171.57 102437.81 1071160.71 00:07:29.707 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:29.707 Verification LBA range: start 0x0 length 0x8000 00:07:29.707 Nvme2n3 : 6.17 106.11 6.63 0.00 0.00 996808.95 16333.59 2064888.12 00:07:29.707 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:29.707 Verification LBA range: start 0x8000 length 0x8000 00:07:29.707 Nvme2n3 : 6.14 114.64 7.17 0.00 0.00 928780.46 38716.65 1096971.82 00:07:29.707 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:29.707 Verification LBA range: start 0x0 length 0x2000 00:07:29.707 Nvme3n1 : 6.19 116.44 7.28 0.00 0.00 876055.19 3654.89 2103604.78 00:07:29.707 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:29.707 Verification LBA range: start 0x2000 length 0x2000 00:07:29.707 Nvme3n1 : 6.16 125.52 7.84 0.00 0.00 822766.73 2545.82 1200216.22 00:07:29.707 
[2024-10-15T13:41:43.495Z] =================================================================================================================== 00:07:29.707 [2024-10-15T13:41:43.495Z] Total : 1463.49 91.47 0.00 0.00 1070726.00 2545.82 2103604.78 00:07:31.092 00:07:31.092 real 0m8.915s 00:07:31.092 user 0m16.749s 00:07:31.092 sys 0m0.312s 00:07:31.092 13:41:44 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:31.092 ************************************ 00:07:31.092 END TEST bdev_verify_big_io 00:07:31.092 ************************************ 00:07:31.092 13:41:44 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:07:31.353 13:41:44 blockdev_nvme_gpt -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:31.353 13:41:44 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:07:31.353 13:41:44 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:31.353 13:41:44 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:31.353 ************************************ 00:07:31.353 START TEST bdev_write_zeroes 00:07:31.353 ************************************ 00:07:31.353 13:41:44 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:31.353 [2024-10-15 13:41:45.005404] Starting SPDK v25.01-pre git sha1 5a8c76d99 / DPDK 24.03.0 initialization... 00:07:31.353 [2024-10-15 13:41:45.006056] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62075 ] 00:07:31.614 [2024-10-15 13:41:45.161003] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.614 [2024-10-15 13:41:45.288170] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.205 Running I/O for 1 seconds... 
00:07:33.410 53312.00 IOPS, 208.25 MiB/s 00:07:33.410 Latency(us) 00:07:33.410 [2024-10-15T13:41:47.198Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:33.410 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:33.410 Nvme0n1 : 1.02 7642.46 29.85 0.00 0.00 16703.80 6604.01 28230.89 00:07:33.410 Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:33.410 Nvme1n1p1 : 1.02 7632.99 29.82 0.00 0.00 16696.74 12703.90 25609.45 00:07:33.410 Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:33.410 Nvme1n1p2 : 1.02 7623.49 29.78 0.00 0.00 16630.11 12048.54 24399.56 00:07:33.410 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:33.410 Nvme2n1 : 1.03 7654.43 29.90 0.00 0.00 16533.35 8570.09 22181.42 00:07:33.410 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:33.410 Nvme2n2 : 1.03 7645.81 29.87 0.00 0.00 16520.28 8922.98 22786.36 00:07:33.410 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:33.410 Nvme2n3 : 1.03 7610.04 29.73 0.00 0.00 16554.82 10586.58 22786.36 00:07:33.410 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:33.410 Nvme3n1 : 1.03 7601.42 29.69 0.00 0.00 16534.98 9326.28 24097.08 00:07:33.410 [2024-10-15T13:41:47.198Z] =================================================================================================================== 00:07:33.410 [2024-10-15T13:41:47.198Z] Total : 53410.63 208.64 0.00 0.00 16596.13 6604.01 28230.89 00:07:33.982 00:07:33.982 real 0m2.784s 00:07:33.982 user 0m2.420s 00:07:33.982 sys 0m0.244s 00:07:33.982 13:41:47 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:33.982 13:41:47 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:07:33.982 ************************************ 00:07:33.982 END TEST bdev_write_zeroes 00:07:33.982 ************************************ 00:07:34.244 13:41:47 blockdev_nvme_gpt -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:34.244 13:41:47 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:07:34.244 13:41:47 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:34.244 13:41:47 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:34.244 ************************************ 00:07:34.244 START TEST bdev_json_nonenclosed 00:07:34.244 ************************************ 00:07:34.244 13:41:47 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:34.244 [2024-10-15 13:41:47.863040] Starting SPDK v25.01-pre git sha1 5a8c76d99 / DPDK 24.03.0 initialization... 
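Every *** START TEST ... *** / *** END TEST ... *** pair in this log, with the real/user/sys block between them, comes from the run_test wrapper in common/autotest_common.sh; the wrapper also toggles xtrace and sanity-checks its argument count, which is what the autotest_common.sh@1101 and @1107 lines above are doing. A loose re-creation of just the banner-and-timing behavior (formatting approximated, not the script's actual code):

    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"; local rc=$?   # time preserves the test command's exit status
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return $rc
    }

    # shape of the invocation logged above for the write_zeroes stage,
    # with $bdevperf and $conf standing in for the paths shown in the log
    run_test bdev_write_zeroes "$bdevperf" --json "$conf" -q 128 -o 4096 -w write_zeroes -t 1 ''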
00:07:34.244 [2024-10-15 13:41:47.863175] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62128 ] 00:07:34.244 [2024-10-15 13:41:48.016457] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:34.504 [2024-10-15 13:41:48.134248] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.504 [2024-10-15 13:41:48.134339] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:07:34.504 [2024-10-15 13:41:48.134358] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:34.504 [2024-10-15 13:41:48.134369] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:34.763 00:07:34.763 real 0m0.531s 00:07:34.763 user 0m0.311s 00:07:34.763 sys 0m0.115s 00:07:34.763 13:41:48 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:34.763 ************************************ 00:07:34.763 END TEST bdev_json_nonenclosed 00:07:34.764 ************************************ 00:07:34.764 13:41:48 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:07:34.764 13:41:48 blockdev_nvme_gpt -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:34.764 13:41:48 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:07:34.764 13:41:48 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:34.764 13:41:48 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:34.764 ************************************ 00:07:34.764 START TEST bdev_json_nonarray 00:07:34.764 ************************************ 00:07:34.764 13:41:48 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:34.764 [2024-10-15 13:41:48.468117] Starting SPDK v25.01-pre git sha1 5a8c76d99 / DPDK 24.03.0 initialization... 00:07:34.764 [2024-10-15 13:41:48.468294] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62154 ] 00:07:35.025 [2024-10-15 13:41:48.615968] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.025 [2024-10-15 13:41:48.739828] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.025 [2024-10-15 13:41:48.739935] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
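bdev_json_nonenclosed and bdev_json_nonarray are negative tests: each hands bdevperf a deliberately malformed config and passes only because the app refuses to come up (the "spdk_app_stop'd on non-zero" warning in the trace is the expected outcome). The two error strings pin down roughly what the fixture files must contain, but their actual contents never appear in the log, so the bodies below are guesses consistent with the errors:

    bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf

    # 'not enclosed in {}': the top-level JSON value is not an object (shape assumed)
    echo '[]' > /tmp/nonenclosed.json
    # "'subsystems' should be an array": here it is an object instead (shape assumed)
    echo '{ "subsystems": {} }' > /tmp/nonarray.json

    for cfg in /tmp/nonenclosed.json /tmp/nonarray.json; do
        if "$bdevperf" --json "$cfg" -q 128 -o 4096 -w write_zeroes -t 1 ''; then
            echo "expected $cfg to be rejected" >&2
            exit 1
        fi
    done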
00:07:35.025 [2024-10-15 13:41:48.739954] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:35.025 [2024-10-15 13:41:48.739965] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:35.287 00:07:35.287 real 0m0.536s 00:07:35.287 user 0m0.325s 00:07:35.287 sys 0m0.105s 00:07:35.287 13:41:48 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:35.287 ************************************ 00:07:35.287 END TEST bdev_json_nonarray 00:07:35.287 ************************************ 00:07:35.287 13:41:48 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:07:35.287 13:41:48 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # [[ gpt == bdev ]] 00:07:35.287 13:41:48 blockdev_nvme_gpt -- bdev/blockdev.sh@793 -- # [[ gpt == gpt ]] 00:07:35.287 13:41:48 blockdev_nvme_gpt -- bdev/blockdev.sh@794 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:07:35.287 13:41:48 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:35.287 13:41:48 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:35.287 13:41:48 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:35.287 ************************************ 00:07:35.287 START TEST bdev_gpt_uuid 00:07:35.287 ************************************ 00:07:35.287 13:41:49 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1125 -- # bdev_gpt_uuid 00:07:35.287 13:41:49 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@613 -- # local bdev 00:07:35.287 13:41:49 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@615 -- # start_spdk_tgt 00:07:35.287 13:41:49 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=62179 00:07:35.287 13:41:49 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:07:35.287 13:41:49 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:07:35.287 13:41:49 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 62179 00:07:35.287 13:41:49 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@831 -- # '[' -z 62179 ']' 00:07:35.287 13:41:49 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:35.287 13:41:49 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:35.287 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:35.287 13:41:49 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:35.287 13:41:49 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:35.287 13:41:49 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:35.547 [2024-10-15 13:41:49.090322] Starting SPDK v25.01-pre git sha1 5a8c76d99 / DPDK 24.03.0 initialization... 
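The bdev_gpt_uuid stage now spinning up loads the same bdev.json into a bare spdk_tgt and then, in the steps that follow, asserts that the two GPT partitions on Nvme1n1 are addressable as bdevs by their unique partition GUIDs: look each one up by UUID, require exactly one match, and require that both the alias and driver_specific.gpt.unique_partition_guid round-trip the same GUID (the script spells the comparison with a glob-escaped pattern; plain quoting, as below, is equivalent). Reduced to plain shell and jq:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # defaults to /var/tmp/spdk.sock

    uuid=6f89f330-603b-4116-ac73-2ca8eae53030         # SPDK_TEST_first, from the log
    bdev=$("$rpc" bdev_get_bdevs -b "$uuid")

    [[ $(echo "$bdev" | jq -r length) == 1 ]]
    [[ $(echo "$bdev" | jq -r '.[0].aliases[0]') == "$uuid" ]]
    [[ $(echo "$bdev" | jq -r '.[0].driver_specific.gpt.unique_partition_guid') == "$uuid" ]]

The same three checks then repeat for the second partition, abf1734f-66e5-4c0f-aa29-4021d4d307df.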
00:07:35.547 [2024-10-15 13:41:49.090469] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62179 ] 00:07:35.547 [2024-10-15 13:41:49.237748] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.806 [2024-10-15 13:41:49.367475] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.377 13:41:50 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:36.377 13:41:50 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@864 -- # return 0 00:07:36.377 13:41:50 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@617 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:36.377 13:41:50 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:36.377 13:41:50 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:36.638 Some configs were skipped because the RPC state that can call them passed over. 00:07:36.638 13:41:50 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:36.638 13:41:50 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@618 -- # rpc_cmd bdev_wait_for_examine 00:07:36.638 13:41:50 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:36.638 13:41:50 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:36.638 13:41:50 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:36.638 13:41:50 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:07:36.638 13:41:50 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:36.638 13:41:50 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:36.899 13:41:50 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:36.899 13:41:50 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # bdev='[ 00:07:36.899 { 00:07:36.899 "name": "Nvme1n1p1", 00:07:36.899 "aliases": [ 00:07:36.899 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:07:36.899 ], 00:07:36.899 "product_name": "GPT Disk", 00:07:36.899 "block_size": 4096, 00:07:36.899 "num_blocks": 655104, 00:07:36.899 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:07:36.899 "assigned_rate_limits": { 00:07:36.899 "rw_ios_per_sec": 0, 00:07:36.899 "rw_mbytes_per_sec": 0, 00:07:36.899 "r_mbytes_per_sec": 0, 00:07:36.899 "w_mbytes_per_sec": 0 00:07:36.899 }, 00:07:36.899 "claimed": false, 00:07:36.899 "zoned": false, 00:07:36.899 "supported_io_types": { 00:07:36.899 "read": true, 00:07:36.899 "write": true, 00:07:36.899 "unmap": true, 00:07:36.899 "flush": true, 00:07:36.899 "reset": true, 00:07:36.899 "nvme_admin": false, 00:07:36.899 "nvme_io": false, 00:07:36.899 "nvme_io_md": false, 00:07:36.899 "write_zeroes": true, 00:07:36.899 "zcopy": false, 00:07:36.899 "get_zone_info": false, 00:07:36.899 "zone_management": false, 00:07:36.899 "zone_append": false, 00:07:36.899 "compare": true, 00:07:36.899 "compare_and_write": false, 00:07:36.899 "abort": true, 00:07:36.899 "seek_hole": false, 00:07:36.899 "seek_data": false, 00:07:36.899 "copy": true, 00:07:36.899 "nvme_iov_md": false 00:07:36.899 }, 00:07:36.899 "driver_specific": { 
00:07:36.899 "gpt": { 00:07:36.899 "base_bdev": "Nvme1n1", 00:07:36.899 "offset_blocks": 256, 00:07:36.899 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:07:36.899 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:07:36.899 "partition_name": "SPDK_TEST_first" 00:07:36.899 } 00:07:36.899 } 00:07:36.899 } 00:07:36.899 ]' 00:07:36.899 13:41:50 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # jq -r length 00:07:36.899 13:41:50 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # [[ 1 == \1 ]] 00:07:36.899 13:41:50 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # jq -r '.[0].aliases[0]' 00:07:36.899 13:41:50 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:07:36.899 13:41:50 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:07:36.899 13:41:50 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:07:36.899 13:41:50 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:07:36.899 13:41:50 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:36.899 13:41:50 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:36.900 13:41:50 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:36.900 13:41:50 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # bdev='[ 00:07:36.900 { 00:07:36.900 "name": "Nvme1n1p2", 00:07:36.900 "aliases": [ 00:07:36.900 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:07:36.900 ], 00:07:36.900 "product_name": "GPT Disk", 00:07:36.900 "block_size": 4096, 00:07:36.900 "num_blocks": 655103, 00:07:36.900 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:07:36.900 "assigned_rate_limits": { 00:07:36.900 "rw_ios_per_sec": 0, 00:07:36.900 "rw_mbytes_per_sec": 0, 00:07:36.900 "r_mbytes_per_sec": 0, 00:07:36.900 "w_mbytes_per_sec": 0 00:07:36.900 }, 00:07:36.900 "claimed": false, 00:07:36.900 "zoned": false, 00:07:36.900 "supported_io_types": { 00:07:36.900 "read": true, 00:07:36.900 "write": true, 00:07:36.900 "unmap": true, 00:07:36.900 "flush": true, 00:07:36.900 "reset": true, 00:07:36.900 "nvme_admin": false, 00:07:36.900 "nvme_io": false, 00:07:36.900 "nvme_io_md": false, 00:07:36.900 "write_zeroes": true, 00:07:36.900 "zcopy": false, 00:07:36.900 "get_zone_info": false, 00:07:36.900 "zone_management": false, 00:07:36.900 "zone_append": false, 00:07:36.900 "compare": true, 00:07:36.900 "compare_and_write": false, 00:07:36.900 "abort": true, 00:07:36.900 "seek_hole": false, 00:07:36.900 "seek_data": false, 00:07:36.900 "copy": true, 00:07:36.900 "nvme_iov_md": false 00:07:36.900 }, 00:07:36.900 "driver_specific": { 00:07:36.900 "gpt": { 00:07:36.900 "base_bdev": "Nvme1n1", 00:07:36.900 "offset_blocks": 655360, 00:07:36.900 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:07:36.900 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:07:36.900 "partition_name": "SPDK_TEST_second" 00:07:36.900 } 00:07:36.900 } 00:07:36.900 } 00:07:36.900 ]' 00:07:36.900 13:41:50 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@626 -- # jq -r length 00:07:36.900 13:41:50 blockdev_nvme_gpt.bdev_gpt_uuid 
-- bdev/blockdev.sh@626 -- # [[ 1 == \1 ]] 00:07:36.900 13:41:50 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # jq -r '.[0].aliases[0]' 00:07:36.900 13:41:50 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:07:36.900 13:41:50 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:07:36.900 13:41:50 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:07:36.900 13:41:50 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@630 -- # killprocess 62179 00:07:36.900 13:41:50 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@950 -- # '[' -z 62179 ']' 00:07:36.900 13:41:50 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # kill -0 62179 00:07:36.900 13:41:50 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@955 -- # uname 00:07:36.900 13:41:50 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:36.900 13:41:50 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62179 00:07:36.900 killing process with pid 62179 00:07:36.900 13:41:50 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:36.900 13:41:50 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:36.900 13:41:50 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62179' 00:07:36.900 13:41:50 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@969 -- # kill 62179 00:07:36.900 13:41:50 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@974 -- # wait 62179 00:07:38.845 00:07:38.845 real 0m3.296s 00:07:38.845 user 0m3.322s 00:07:38.845 sys 0m0.492s 00:07:38.845 13:41:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:38.845 ************************************ 00:07:38.845 END TEST bdev_gpt_uuid 00:07:38.845 ************************************ 00:07:38.845 13:41:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:38.845 13:41:52 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # [[ gpt == crypto_sw ]] 00:07:38.845 13:41:52 blockdev_nvme_gpt -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:07:38.845 13:41:52 blockdev_nvme_gpt -- bdev/blockdev.sh@810 -- # cleanup 00:07:38.845 13:41:52 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:07:38.845 13:41:52 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:38.845 13:41:52 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:07:38.845 13:41:52 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:07:38.845 13:41:52 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:07:38.845 13:41:52 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:39.106 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:39.106 Waiting for block devices as requested 00:07:39.106 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:07:39.366 0000:00:10.0 (1b36 0010): 
uio_pci_generic -> nvme 00:07:39.366 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:07:39.629 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:07:44.896 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:07:44.896 13:41:58 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:07:44.896 13:41:58 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:07:44.896 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:07:44.896 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:07:44.896 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:07:44.896 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:07:44.896 13:41:58 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:07:44.896 00:07:44.896 real 0m58.918s 00:07:44.896 user 1m13.732s 00:07:44.896 sys 0m9.157s 00:07:44.896 13:41:58 blockdev_nvme_gpt -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:44.896 13:41:58 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:44.896 ************************************ 00:07:44.896 END TEST blockdev_nvme_gpt 00:07:44.896 ************************************ 00:07:44.896 13:41:58 -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:07:44.896 13:41:58 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:44.896 13:41:58 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:44.896 13:41:58 -- common/autotest_common.sh@10 -- # set +x 00:07:44.896 ************************************ 00:07:44.896 START TEST nvme 00:07:44.896 ************************************ 00:07:44.896 13:41:58 nvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:07:44.896 * Looking for test storage... 00:07:44.896 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:07:44.896 13:41:58 nvme -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:44.896 13:41:58 nvme -- common/autotest_common.sh@1691 -- # lcov --version 00:07:44.896 13:41:58 nvme -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:44.896 13:41:58 nvme -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:44.896 13:41:58 nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:44.896 13:41:58 nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:44.896 13:41:58 nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:44.896 13:41:58 nvme -- scripts/common.sh@336 -- # IFS=.-: 00:07:44.896 13:41:58 nvme -- scripts/common.sh@336 -- # read -ra ver1 00:07:44.896 13:41:58 nvme -- scripts/common.sh@337 -- # IFS=.-: 00:07:44.896 13:41:58 nvme -- scripts/common.sh@337 -- # read -ra ver2 00:07:44.896 13:41:58 nvme -- scripts/common.sh@338 -- # local 'op=<' 00:07:44.896 13:41:58 nvme -- scripts/common.sh@340 -- # ver1_l=2 00:07:44.896 13:41:58 nvme -- scripts/common.sh@341 -- # ver2_l=1 00:07:44.896 13:41:58 nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:44.896 13:41:58 nvme -- scripts/common.sh@344 -- # case "$op" in 00:07:44.896 13:41:58 nvme -- scripts/common.sh@345 -- # : 1 00:07:44.896 13:41:58 nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:44.896 13:41:58 nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:44.896 13:41:58 nvme -- scripts/common.sh@365 -- # decimal 1 00:07:44.896 13:41:58 nvme -- scripts/common.sh@353 -- # local d=1 00:07:44.896 13:41:58 nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:44.896 13:41:58 nvme -- scripts/common.sh@355 -- # echo 1 00:07:44.896 13:41:58 nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:07:44.896 13:41:58 nvme -- scripts/common.sh@366 -- # decimal 2 00:07:44.896 13:41:58 nvme -- scripts/common.sh@353 -- # local d=2 00:07:44.896 13:41:58 nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:44.896 13:41:58 nvme -- scripts/common.sh@355 -- # echo 2 00:07:44.896 13:41:58 nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:07:45.153 13:41:58 nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:45.154 13:41:58 nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:45.154 13:41:58 nvme -- scripts/common.sh@368 -- # return 0 00:07:45.154 13:41:58 nvme -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:45.154 13:41:58 nvme -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:45.154 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:45.154 --rc genhtml_branch_coverage=1 00:07:45.154 --rc genhtml_function_coverage=1 00:07:45.154 --rc genhtml_legend=1 00:07:45.154 --rc geninfo_all_blocks=1 00:07:45.154 --rc geninfo_unexecuted_blocks=1 00:07:45.154 00:07:45.154 ' 00:07:45.154 13:41:58 nvme -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:45.154 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:45.154 --rc genhtml_branch_coverage=1 00:07:45.154 --rc genhtml_function_coverage=1 00:07:45.154 --rc genhtml_legend=1 00:07:45.154 --rc geninfo_all_blocks=1 00:07:45.154 --rc geninfo_unexecuted_blocks=1 00:07:45.154 00:07:45.154 ' 00:07:45.154 13:41:58 nvme -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:45.154 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:45.154 --rc genhtml_branch_coverage=1 00:07:45.154 --rc genhtml_function_coverage=1 00:07:45.154 --rc genhtml_legend=1 00:07:45.154 --rc geninfo_all_blocks=1 00:07:45.154 --rc geninfo_unexecuted_blocks=1 00:07:45.154 00:07:45.154 ' 00:07:45.154 13:41:58 nvme -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:45.154 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:45.154 --rc genhtml_branch_coverage=1 00:07:45.154 --rc genhtml_function_coverage=1 00:07:45.154 --rc genhtml_legend=1 00:07:45.154 --rc geninfo_all_blocks=1 00:07:45.154 --rc geninfo_unexecuted_blocks=1 00:07:45.154 00:07:45.154 ' 00:07:45.154 13:41:58 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:45.411 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:45.983 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:07:45.983 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:07:45.983 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:07:45.983 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:07:45.983 13:41:59 nvme -- nvme/nvme.sh@79 -- # uname 00:07:45.983 13:41:59 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:07:45.983 13:41:59 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:07:45.983 13:41:59 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:07:45.983 13:41:59 nvme -- common/autotest_common.sh@1082 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:07:45.983 13:41:59 nvme -- 
common/autotest_common.sh@1068 -- # _randomize_va_space=2 00:07:45.983 13:41:59 nvme -- common/autotest_common.sh@1069 -- # echo 0 00:07:45.983 Waiting for stub to ready for secondary processes... 00:07:45.983 13:41:59 nvme -- common/autotest_common.sh@1071 -- # stubpid=62821 00:07:45.983 13:41:59 nvme -- common/autotest_common.sh@1072 -- # echo Waiting for stub to ready for secondary processes... 00:07:45.983 13:41:59 nvme -- common/autotest_common.sh@1073 -- # '[' -e /var/run/spdk_stub0 ']' 00:07:45.983 13:41:59 nvme -- common/autotest_common.sh@1075 -- # [[ -e /proc/62821 ]] 00:07:45.983 13:41:59 nvme -- common/autotest_common.sh@1076 -- # sleep 1s 00:07:45.983 13:41:59 nvme -- common/autotest_common.sh@1070 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:07:45.983 [2024-10-15 13:41:59.720343] Starting SPDK v25.01-pre git sha1 5a8c76d99 / DPDK 24.03.0 initialization... 00:07:45.983 [2024-10-15 13:41:59.720461] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:07:46.922 [2024-10-15 13:42:00.466573] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:46.922 [2024-10-15 13:42:00.563302] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:46.922 [2024-10-15 13:42:00.563550] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:46.922 [2024-10-15 13:42:00.563614] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:46.922 [2024-10-15 13:42:00.585440] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:07:46.922 [2024-10-15 13:42:00.585548] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:46.922 [2024-10-15 13:42:00.598778] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:07:46.922 [2024-10-15 13:42:00.598854] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:07:46.922 [2024-10-15 13:42:00.602671] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:46.922 [2024-10-15 13:42:00.602989] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:07:46.922 [2024-10-15 13:42:00.603077] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:07:46.922 [2024-10-15 13:42:00.606025] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:46.922 [2024-10-15 13:42:00.606321] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:07:46.922 [2024-10-15 13:42:00.606417] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:07:46.923 [2024-10-15 13:42:00.610372] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:46.923 [2024-10-15 13:42:00.610659] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:07:46.923 [2024-10-15 13:42:00.610717] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:07:46.923 [2024-10-15 13:42:00.610748] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:07:46.923 [2024-10-15 13:42:00.610775] nvme_cuse.c: 
928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:07:46.923 done. 00:07:46.923 13:42:00 nvme -- common/autotest_common.sh@1073 -- # '[' -e /var/run/spdk_stub0 ']' 00:07:46.923 13:42:00 nvme -- common/autotest_common.sh@1078 -- # echo done. 00:07:46.923 13:42:00 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:07:46.923 13:42:00 nvme -- common/autotest_common.sh@1101 -- # '[' 10 -le 1 ']' 00:07:46.923 13:42:00 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:46.923 13:42:00 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:46.923 ************************************ 00:07:46.923 START TEST nvme_reset 00:07:46.923 ************************************ 00:07:46.923 13:42:00 nvme.nvme_reset -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:07:47.185 Initializing NVMe Controllers 00:07:47.185 Skipping QEMU NVMe SSD at 0000:00:11.0 00:07:47.185 Skipping QEMU NVMe SSD at 0000:00:13.0 00:07:47.185 Skipping QEMU NVMe SSD at 0000:00:10.0 00:07:47.185 Skipping QEMU NVMe SSD at 0000:00:12.0 00:07:47.185 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:07:47.185 00:07:47.185 real 0m0.197s 00:07:47.185 user 0m0.058s 00:07:47.185 sys 0m0.095s 00:07:47.185 13:42:00 nvme.nvme_reset -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:47.185 13:42:00 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:07:47.185 ************************************ 00:07:47.185 END TEST nvme_reset 00:07:47.185 ************************************ 00:07:47.185 13:42:00 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:07:47.185 13:42:00 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:47.185 13:42:00 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:47.185 13:42:00 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:47.185 ************************************ 00:07:47.185 START TEST nvme_identify 00:07:47.185 ************************************ 00:07:47.185 13:42:00 nvme.nvme_identify -- common/autotest_common.sh@1125 -- # nvme_identify 00:07:47.185 13:42:00 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:07:47.185 13:42:00 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:07:47.185 13:42:00 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:07:47.185 13:42:00 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:07:47.185 13:42:00 nvme.nvme_identify -- common/autotest_common.sh@1496 -- # bdfs=() 00:07:47.185 13:42:00 nvme.nvme_identify -- common/autotest_common.sh@1496 -- # local bdfs 00:07:47.185 13:42:00 nvme.nvme_identify -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:07:47.185 13:42:00 nvme.nvme_identify -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:47.185 13:42:00 nvme.nvme_identify -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:07:47.449 13:42:01 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:07:47.449 13:42:01 nvme.nvme_identify -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:07:47.449 13:42:01 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:07:47.449 
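For orientation before the dump that follows: the identify step traced just above collects the controller PCI addresses (BDFs) by piping gen_nvme.sh's JSON config through jq, then invokes spdk_nvme_identify with the same -i 0 argument seen in the trace. A minimal standalone sketch of that flow, assuming the /home/vagrant/spdk_repo layout used in this run:

#!/usr/bin/env bash
# Sketch of the BDF enumeration performed by get_nvme_bdfs in the trace above:
# gen_nvme.sh emits a JSON bdev config; jq extracts each controller's traddr.
rootdir=/home/vagrant/spdk_repo/spdk   # path assumed from this run's layout
bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
(( ${#bdfs[@]} > 0 )) || { echo "no NVMe controllers found" >&2; exit 1; }
printf '%s\n' "${bdfs[@]}"             # e.g. 0000:00:10.0 ... 0000:00:13.0
"$rootdir/build/bin/spdk_nvme_identify" -i 0   # full identify dump, as below

The identify output below is that command's result, one block per controller found (0000:00:11.0, 0000:00:13.0, 0000:00:10.0, 0000:00:12.0).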
===================================================== 00:07:47.449 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:47.449 ===================================================== 00:07:47.449 Controller Capabilities/Features 00:07:47.449 ================================ 00:07:47.449 Vendor ID: 1b36 00:07:47.449 Subsystem Vendor ID: 1af4 00:07:47.449 Serial Number: 12341 00:07:47.449 Model Number: QEMU NVMe Ctrl 00:07:47.449 Firmware Version: 8.0.0 00:07:47.449 Recommended Arb Burst: 6 00:07:47.449 IEEE OUI Identifier: 00 54 52 00:07:47.449 Multi-path I/O 00:07:47.449 May have multiple subsystem ports: No 00:07:47.449 May have multiple controllers: No 00:07:47.449 Associated with SR-IOV VF: No 00:07:47.449 Max Data Transfer Size: 524288 00:07:47.449 Max Number of Namespaces: 256 00:07:47.449 Max Number of I/O Queues: 64 00:07:47.449 NVMe Specification Version (VS): 1.4 00:07:47.449 NVMe Specification Version (Identify): 1.4 00:07:47.449 Maximum Queue Entries: 2048 00:07:47.449 Contiguous Queues Required: Yes 00:07:47.449 Arbitration Mechanisms Supported 00:07:47.449 Weighted Round Robin: Not Supported 00:07:47.449 Vendor Specific: Not Supported 00:07:47.449 Reset Timeout: 7500 ms 00:07:47.449 Doorbell Stride: 4 bytes 00:07:47.449 NVM Subsystem Reset: Not Supported 00:07:47.449 Command Sets Supported 00:07:47.449 NVM Command Set: Supported 00:07:47.449 Boot Partition: Not Supported 00:07:47.449 Memory Page Size Minimum: 4096 bytes 00:07:47.449 Memory Page Size Maximum: 65536 bytes 00:07:47.449 Persistent Memory Region: Not Supported 00:07:47.449 Optional Asynchronous Events Supported 00:07:47.449 Namespace Attribute Notices: Supported 00:07:47.449 Firmware Activation Notices: Not Supported 00:07:47.449 ANA Change Notices: Not Supported 00:07:47.449 PLE Aggregate Log Change Notices: Not Supported 00:07:47.449 LBA Status Info Alert Notices: Not Supported 00:07:47.449 EGE Aggregate Log Change Notices: Not Supported 00:07:47.449 Normal NVM Subsystem Shutdown event: Not Supported 00:07:47.449 Zone Descriptor Change Notices: Not Supported 00:07:47.449 Discovery Log Change Notices: Not Supported 00:07:47.449 Controller Attributes 00:07:47.449 128-bit Host Identifier: Not Supported 00:07:47.449 Non-Operational Permissive Mode: Not Supported 00:07:47.449 NVM Sets: Not Supported 00:07:47.449 Read Recovery Levels: Not Supported 00:07:47.449 Endurance Groups: Not Supported 00:07:47.449 Predictable Latency Mode: Not Supported 00:07:47.449 Traffic Based Keep ALive: Not Supported 00:07:47.449 Namespace Granularity: Not Supported 00:07:47.449 SQ Associations: Not Supported 00:07:47.449 UUID List: Not Supported 00:07:47.449 Multi-Domain Subsystem: Not Supported 00:07:47.449 Fixed Capacity Management: Not Supported 00:07:47.449 Variable Capacity Management: Not Supported 00:07:47.449 Delete Endurance Group: Not Supported 00:07:47.449 Delete NVM Set: Not Supported 00:07:47.449 Extended LBA Formats Supported: Supported 00:07:47.449 Flexible Data Placement Supported: Not Supported 00:07:47.449 00:07:47.449 Controller Memory Buffer Support 00:07:47.449 ================================ 00:07:47.449 Supported: No 00:07:47.449 00:07:47.449 Persistent Memory Region Support 00:07:47.449 ================================ 00:07:47.449 Supported: No 00:07:47.449 00:07:47.449 Admin Command Set Attributes 00:07:47.449 ============================ 00:07:47.449 Security Send/Receive: Not Supported 00:07:47.449 Format NVM: Supported 00:07:47.449 Firmware Activate/Download: Not Supported 00:07:47.449 Namespace Management: 
Supported 00:07:47.449 Device Self-Test: Not Supported 00:07:47.449 Directives: Supported 00:07:47.449 NVMe-MI: Not Supported 00:07:47.449 Virtualization Management: Not Supported 00:07:47.449 Doorbell Buffer Config: Supported 00:07:47.449 Get LBA Status Capability: Not Supported 00:07:47.449 Command & Feature Lockdown Capability: Not Supported 00:07:47.449 Abort Command Limit: 4 00:07:47.449 Async Event Request Limit: 4 00:07:47.449 Number of Firmware Slots: N/A 00:07:47.449 Firmware Slot 1 Read-Only: N/A 00:07:47.449 Firmware Activation Without Reset: N/A 00:07:47.449 Multiple Update Detection Support: N/A 00:07:47.449 Firmware Update Granularity: No Information Provided 00:07:47.449 Per-Namespace SMART Log: Yes 00:07:47.449 Asymmetric Namespace Access Log Page: Not Supported 00:07:47.450 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:07:47.450 Command Effects Log Page: Supported 00:07:47.450 Get Log Page Extended Data: Supported 00:07:47.450 Telemetry Log Pages: Not Supported 00:07:47.450 Persistent Event Log Pages: Not Supported 00:07:47.450 Supported Log Pages Log Page: May Support 00:07:47.450 Commands Supported & Effects Log Page: Not Supported 00:07:47.450 Feature Identifiers & Effects Log Page:May Support 00:07:47.450 NVMe-MI Commands & Effects Log Page: May Support 00:07:47.450 Data Area 4 for Telemetry Log: Not Supported 00:07:47.450 Error Log Page Entries Supported: 1 00:07:47.450 Keep Alive: Not Supported 00:07:47.450 00:07:47.450 NVM Command Set Attributes 00:07:47.450 ========================== 00:07:47.450 Submission Queue Entry Size 00:07:47.450 Max: 64 00:07:47.450 Min: 64 00:07:47.450 Completion Queue Entry Size 00:07:47.450 Max: 16 00:07:47.450 Min: 16 00:07:47.450 Number of Namespaces: 256 00:07:47.450 Compare Command: Supported 00:07:47.450 Write Uncorrectable Command: Not Supported 00:07:47.450 Dataset Management Command: Supported 00:07:47.450 Write Zeroes Command: Supported 00:07:47.450 Set Features Save Field: Supported 00:07:47.450 Reservations: Not Supported 00:07:47.450 Timestamp: Supported 00:07:47.450 Copy: Supported 00:07:47.450 Volatile Write Cache: Present 00:07:47.450 Atomic Write Unit (Normal): 1 00:07:47.450 Atomic Write Unit (PFail): 1 00:07:47.450 Atomic Compare & Write Unit: 1 00:07:47.450 Fused Compare & Write: Not Supported 00:07:47.450 Scatter-Gather List 00:07:47.450 SGL Command Set: Supported 00:07:47.450 SGL Keyed: Not Supported 00:07:47.450 SGL Bit Bucket Descriptor: Not Supported 00:07:47.450 SGL Metadata Pointer: Not Supported 00:07:47.450 Oversized SGL: Not Supported 00:07:47.450 SGL Metadata Address: Not Supported 00:07:47.450 SGL Offset: Not Supported 00:07:47.450 Transport SGL Data Block: Not Supported 00:07:47.450 Replay Protected Memory Block: Not Supported 00:07:47.450 00:07:47.450 Firmware Slot Information 00:07:47.450 ========================= 00:07:47.450 Active slot: 1 00:07:47.450 Slot 1 Firmware Revision: 1.0 00:07:47.450 00:07:47.450 00:07:47.450 Commands Supported and Effects 00:07:47.450 ============================== 00:07:47.450 Admin Commands 00:07:47.450 -------------- 00:07:47.450 Delete I/O Submission Queue (00h): Supported 00:07:47.450 Create I/O Submission Queue (01h): Supported 00:07:47.450 Get Log Page (02h): Supported 00:07:47.450 Delete I/O Completion Queue (04h): Supported 00:07:47.450 Create I/O Completion Queue (05h): Supported 00:07:47.450 Identify (06h): Supported 00:07:47.450 Abort (08h): Supported 00:07:47.450 Set Features (09h): Supported 00:07:47.450 Get Features (0Ah): Supported 00:07:47.450 Asynchronous 
Event Request (0Ch): Supported 00:07:47.450 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:47.450 Directive Send (19h): Supported 00:07:47.450 Directive Receive (1Ah): Supported 00:07:47.450 Virtualization Management (1Ch): Supported 00:07:47.450 Doorbell Buffer Config (7Ch): Supported 00:07:47.450 Format NVM (80h): Supported LBA-Change 00:07:47.450 I/O Commands 00:07:47.450 ------------ 00:07:47.450 Flush (00h): Supported LBA-Change 00:07:47.450 Write (01h): Supported LBA-Change 00:07:47.450 Read (02h): Supported 00:07:47.450 Compare (05h): Supported 00:07:47.450 Write Zeroes (08h): Supported LBA-Change 00:07:47.450 Dataset Management (09h): Supported LBA-Change 00:07:47.450 Unknown (0Ch): Supported 00:07:47.450 Unknown (12h): Supported 00:07:47.450 Copy (19h): Supported LBA-Change 00:07:47.450 Unknown (1Dh): Supported LBA-Change 00:07:47.450 00:07:47.450 Error Log 00:07:47.450 ========= 00:07:47.450 00:07:47.450 Arbitration 00:07:47.450 =========== 00:07:47.450 Arbitration Burst: no limit 00:07:47.450 00:07:47.450 Power Management 00:07:47.450 ================ 00:07:47.450 Number of Power States: 1 00:07:47.450 Current Power State: Power State #0 00:07:47.450 Power State #0: 00:07:47.450 Max Power: 25.00 W 00:07:47.450 Non-Operational State: Operational 00:07:47.450 Entry Latency: 16 microseconds 00:07:47.450 Exit Latency: 4 microseconds 00:07:47.450 Relative Read Throughput: 0 00:07:47.450 Relative Read Latency: 0 00:07:47.450 Relative Write Throughput: 0 00:07:47.450 Relative Write Latency: 0 00:07:47.450 Idle Power: Not Reported 00:07:47.450 Active Power: Not Reported 00:07:47.450 Non-Operational Permissive Mode: Not Supported 00:07:47.450 00:07:47.450 Health Information 00:07:47.450 ================== 00:07:47.450 Critical Warnings: 00:07:47.450 Available Spare Space: OK 00:07:47.450 Temperature: OK 00:07:47.450 Device Reliability: OK 00:07:47.450 Read Only: No 00:07:47.450 Volatile Memory Backup: OK 00:07:47.450 Current Temperature: 323 Kelvin (50 Celsius) 00:07:47.450 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:47.450 Available Spare: 0% 00:07:47.450 Available Spare Threshold: 0% 00:07:47.450 Life Percentage Used: 0% 00:07:47.450 Data Units Read: 999 00:07:47.450 Data Units Written: 866 00:07:47.450 Host Read Commands: 48895 00:07:47.450 Host Write Commands: 47693 00:07:47.450 Controller Busy Time: 0 minutes 00:07:47.450 Power Cycles: 0 00:07:47.450 Power On Hours: 0 hours 00:07:47.450 Unsafe Shutdowns: 0 00:07:47.450 Unrecoverable Media Errors: 0 00:07:47.450 Lifetime Error Log Entries: 0 00:07:47.450 Warning Temperature Time: 0 minutes 00:07:47.450 Critical Temperature Time: 0 minutes 00:07:47.450 00:07:47.450 Number of Queues 00:07:47.450 ================ 00:07:47.450 Number of I/O Submission Queues: 64 00:07:47.450 Number of I/O Completion Queues: 64 00:07:47.450 00:07:47.450 ZNS Specific Controller Data 00:07:47.450 ============================ 00:07:47.450 Zone Append Size Limit: 0 00:07:47.450 00:07:47.450 00:07:47.450 Active Namespaces 00:07:47.450 ================= 00:07:47.450 Namespace ID:1 00:07:47.450 Error Recovery Timeout: Unlimited 00:07:47.450 Command Set Identifier: NVM (00h) 00:07:47.450 Deallocate: Supported 00:07:47.450 Deallocated/Unwritten Error: Supported 00:07:47.450 Deallocated Read Value: All 0x00 00:07:47.450 Deallocate in Write Zeroes: Not Supported 00:07:47.450 Deallocated Guard Field: 0xFFFF 00:07:47.450 Flush: Supported 00:07:47.450 Reservation: Not Supported 00:07:47.450 Namespace Sharing Capabilities: Private 00:07:47.450 
Size (in LBAs): 1310720 (5GiB) 00:07:47.450 Capacity (in LBAs): 1310720 (5GiB) 00:07:47.450 Utilization (in LBAs): 1310720 (5GiB) 00:07:47.450 Thin Provisioning: Not Supported 00:07:47.450 Per-NS Atomic Units: No 00:07:47.450 Maximum Single Source Range Length: 128 00:07:47.450 Maximum Copy Length: 128 00:07:47.450 Maximum Source Range Count: 128 00:07:47.450 NGUID/EUI64 Never Reused: No 00:07:47.450 Namespace Write Protected: No 00:07:47.450 Number of LBA Formats: 8 00:07:47.450 Current LBA Format: LBA Format #04 00:07:47.450 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:47.450 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:47.450 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:47.450 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:47.450 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:47.450 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:47.450 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:47.450 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:47.450 00:07:47.450 NVM Specific Namespace Data 00:07:47.450 =========================== 00:07:47.450 Logical Block Storage Tag Mask: 0 00:07:47.450 Protection Information Capabilities: 00:07:47.450 16b Guard Protection Information Storage Tag Support: No 00:07:47.450 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:47.450 Storage Tag Check Read Support: No 00:07:47.450 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:47.450 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:47.450 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:47.450 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:47.450 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:47.450 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:47.450 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:47.450 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:47.450 ===================================================== 00:07:47.450 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:47.450 ===================================================== 00:07:47.450 Controller Capabilities/Features 00:07:47.450 ================================ 00:07:47.450 Vendor ID: 1b36 00:07:47.450 [2024-10-15 13:42:01.175409] nvme_ctrlr.c:3628:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0] process 62842 terminated unexpected 00:07:47.450 [2024-10-15 13:42:01.176650] nvme_ctrlr.c:3628:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0] process 62842 terminated unexpected 00:07:47.450 Subsystem Vendor ID: 1af4 00:07:47.450 Serial Number: 12343 00:07:47.450 Model Number: QEMU NVMe Ctrl 00:07:47.450 Firmware Version: 8.0.0 00:07:47.450 Recommended Arb Burst: 6 00:07:47.450 IEEE OUI Identifier: 00 54 52 00:07:47.450 Multi-path I/O 00:07:47.450 May have multiple subsystem ports: No 00:07:47.450 May have multiple controllers: Yes 00:07:47.451 Associated with SR-IOV VF: No 00:07:47.451 Max Data Transfer Size: 524288 00:07:47.451 Max Number of Namespaces: 256 00:07:47.451 Max Number of I/O Queues: 64 00:07:47.451 NVMe Specification Version (VS): 1.4 00:07:47.451 NVMe Specification Version (Identify): 1.4 00:07:47.451 Maximum Queue Entries: 2048
00:07:47.451 Contiguous Queues Required: Yes 00:07:47.451 Arbitration Mechanisms Supported 00:07:47.451 Weighted Round Robin: Not Supported 00:07:47.451 Vendor Specific: Not Supported 00:07:47.451 Reset Timeout: 7500 ms 00:07:47.451 Doorbell Stride: 4 bytes 00:07:47.451 NVM Subsystem Reset: Not Supported 00:07:47.451 Command Sets Supported 00:07:47.451 NVM Command Set: Supported 00:07:47.451 Boot Partition: Not Supported 00:07:47.451 Memory Page Size Minimum: 4096 bytes 00:07:47.451 Memory Page Size Maximum: 65536 bytes 00:07:47.451 Persistent Memory Region: Not Supported 00:07:47.451 Optional Asynchronous Events Supported 00:07:47.451 Namespace Attribute Notices: Supported 00:07:47.451 Firmware Activation Notices: Not Supported 00:07:47.451 ANA Change Notices: Not Supported 00:07:47.451 PLE Aggregate Log Change Notices: Not Supported 00:07:47.451 LBA Status Info Alert Notices: Not Supported 00:07:47.451 EGE Aggregate Log Change Notices: Not Supported 00:07:47.451 Normal NVM Subsystem Shutdown event: Not Supported 00:07:47.451 Zone Descriptor Change Notices: Not Supported 00:07:47.451 Discovery Log Change Notices: Not Supported 00:07:47.451 Controller Attributes 00:07:47.451 128-bit Host Identifier: Not Supported 00:07:47.451 Non-Operational Permissive Mode: Not Supported 00:07:47.451 NVM Sets: Not Supported 00:07:47.451 Read Recovery Levels: Not Supported 00:07:47.451 Endurance Groups: Supported 00:07:47.451 Predictable Latency Mode: Not Supported 00:07:47.451 Traffic Based Keep ALive: Not Supported 00:07:47.451 Namespace Granularity: Not Supported 00:07:47.451 SQ Associations: Not Supported 00:07:47.451 UUID List: Not Supported 00:07:47.451 Multi-Domain Subsystem: Not Supported 00:07:47.451 Fixed Capacity Management: Not Supported 00:07:47.451 Variable Capacity Management: Not Supported 00:07:47.451 Delete Endurance Group: Not Supported 00:07:47.451 Delete NVM Set: Not Supported 00:07:47.451 Extended LBA Formats Supported: Supported 00:07:47.451 Flexible Data Placement Supported: Supported 00:07:47.451 00:07:47.451 Controller Memory Buffer Support 00:07:47.451 ================================ 00:07:47.451 Supported: No 00:07:47.451 00:07:47.451 Persistent Memory Region Support 00:07:47.451 ================================ 00:07:47.451 Supported: No 00:07:47.451 00:07:47.451 Admin Command Set Attributes 00:07:47.451 ============================ 00:07:47.451 Security Send/Receive: Not Supported 00:07:47.451 Format NVM: Supported 00:07:47.451 Firmware Activate/Download: Not Supported 00:07:47.451 Namespace Management: Supported 00:07:47.451 Device Self-Test: Not Supported 00:07:47.451 Directives: Supported 00:07:47.451 NVMe-MI: Not Supported 00:07:47.451 Virtualization Management: Not Supported 00:07:47.451 Doorbell Buffer Config: Supported 00:07:47.451 Get LBA Status Capability: Not Supported 00:07:47.451 Command & Feature Lockdown Capability: Not Supported 00:07:47.451 Abort Command Limit: 4 00:07:47.451 Async Event Request Limit: 4 00:07:47.451 Number of Firmware Slots: N/A 00:07:47.451 Firmware Slot 1 Read-Only: N/A 00:07:47.451 Firmware Activation Without Reset: N/A 00:07:47.451 Multiple Update Detection Support: N/A 00:07:47.451 Firmware Update Granularity: No Information Provided 00:07:47.451 Per-Namespace SMART Log: Yes 00:07:47.451 Asymmetric Namespace Access Log Page: Not Supported 00:07:47.451 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:07:47.451 Command Effects Log Page: Supported 00:07:47.451 Get Log Page Extended Data: Supported 00:07:47.451 Telemetry Log Pages: Not 
Supported 00:07:47.451 Persistent Event Log Pages: Not Supported 00:07:47.451 Supported Log Pages Log Page: May Support 00:07:47.451 Commands Supported & Effects Log Page: Not Supported 00:07:47.451 Feature Identifiers & Effects Log Page:May Support 00:07:47.451 NVMe-MI Commands & Effects Log Page: May Support 00:07:47.451 Data Area 4 for Telemetry Log: Not Supported 00:07:47.451 Error Log Page Entries Supported: 1 00:07:47.451 Keep Alive: Not Supported 00:07:47.451 00:07:47.451 NVM Command Set Attributes 00:07:47.451 ========================== 00:07:47.451 Submission Queue Entry Size 00:07:47.451 Max: 64 00:07:47.451 Min: 64 00:07:47.451 Completion Queue Entry Size 00:07:47.451 Max: 16 00:07:47.451 Min: 16 00:07:47.451 Number of Namespaces: 256 00:07:47.451 Compare Command: Supported 00:07:47.451 Write Uncorrectable Command: Not Supported 00:07:47.451 Dataset Management Command: Supported 00:07:47.451 Write Zeroes Command: Supported 00:07:47.451 Set Features Save Field: Supported 00:07:47.451 Reservations: Not Supported 00:07:47.451 Timestamp: Supported 00:07:47.451 Copy: Supported 00:07:47.451 Volatile Write Cache: Present 00:07:47.451 Atomic Write Unit (Normal): 1 00:07:47.451 Atomic Write Unit (PFail): 1 00:07:47.451 Atomic Compare & Write Unit: 1 00:07:47.451 Fused Compare & Write: Not Supported 00:07:47.451 Scatter-Gather List 00:07:47.451 SGL Command Set: Supported 00:07:47.451 SGL Keyed: Not Supported 00:07:47.451 SGL Bit Bucket Descriptor: Not Supported 00:07:47.451 SGL Metadata Pointer: Not Supported 00:07:47.451 Oversized SGL: Not Supported 00:07:47.451 SGL Metadata Address: Not Supported 00:07:47.451 SGL Offset: Not Supported 00:07:47.451 Transport SGL Data Block: Not Supported 00:07:47.451 Replay Protected Memory Block: Not Supported 00:07:47.451 00:07:47.451 Firmware Slot Information 00:07:47.451 ========================= 00:07:47.451 Active slot: 1 00:07:47.451 Slot 1 Firmware Revision: 1.0 00:07:47.451 00:07:47.451 00:07:47.451 Commands Supported and Effects 00:07:47.451 ============================== 00:07:47.451 Admin Commands 00:07:47.451 -------------- 00:07:47.451 Delete I/O Submission Queue (00h): Supported 00:07:47.451 Create I/O Submission Queue (01h): Supported 00:07:47.451 Get Log Page (02h): Supported 00:07:47.451 Delete I/O Completion Queue (04h): Supported 00:07:47.451 Create I/O Completion Queue (05h): Supported 00:07:47.451 Identify (06h): Supported 00:07:47.451 Abort (08h): Supported 00:07:47.451 Set Features (09h): Supported 00:07:47.451 Get Features (0Ah): Supported 00:07:47.451 Asynchronous Event Request (0Ch): Supported 00:07:47.451 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:47.451 Directive Send (19h): Supported 00:07:47.451 Directive Receive (1Ah): Supported 00:07:47.451 Virtualization Management (1Ch): Supported 00:07:47.451 Doorbell Buffer Config (7Ch): Supported 00:07:47.451 Format NVM (80h): Supported LBA-Change 00:07:47.451 I/O Commands 00:07:47.451 ------------ 00:07:47.451 Flush (00h): Supported LBA-Change 00:07:47.451 Write (01h): Supported LBA-Change 00:07:47.451 Read (02h): Supported 00:07:47.451 Compare (05h): Supported 00:07:47.451 Write Zeroes (08h): Supported LBA-Change 00:07:47.451 Dataset Management (09h): Supported LBA-Change 00:07:47.451 Unknown (0Ch): Supported 00:07:47.451 Unknown (12h): Supported 00:07:47.451 Copy (19h): Supported LBA-Change 00:07:47.451 Unknown (1Dh): Supported LBA-Change 00:07:47.451 00:07:47.451 Error Log 00:07:47.451 ========= 00:07:47.451 00:07:47.451 Arbitration 00:07:47.451 =========== 
00:07:47.451 Arbitration Burst: no limit 00:07:47.451 00:07:47.451 Power Management 00:07:47.451 ================ 00:07:47.451 Number of Power States: 1 00:07:47.451 Current Power State: Power State #0 00:07:47.451 Power State #0: 00:07:47.451 Max Power: 25.00 W 00:07:47.451 Non-Operational State: Operational 00:07:47.451 Entry Latency: 16 microseconds 00:07:47.451 Exit Latency: 4 microseconds 00:07:47.451 Relative Read Throughput: 0 00:07:47.451 Relative Read Latency: 0 00:07:47.451 Relative Write Throughput: 0 00:07:47.451 Relative Write Latency: 0 00:07:47.451 Idle Power: Not Reported 00:07:47.451 Active Power: Not Reported 00:07:47.451 Non-Operational Permissive Mode: Not Supported 00:07:47.451 00:07:47.451 Health Information 00:07:47.451 ================== 00:07:47.451 Critical Warnings: 00:07:47.451 Available Spare Space: OK 00:07:47.451 Temperature: OK 00:07:47.451 Device Reliability: OK 00:07:47.451 Read Only: No 00:07:47.451 Volatile Memory Backup: OK 00:07:47.451 Current Temperature: 323 Kelvin (50 Celsius) 00:07:47.451 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:47.451 Available Spare: 0% 00:07:47.451 Available Spare Threshold: 0% 00:07:47.451 Life Percentage Used: 0% 00:07:47.451 Data Units Read: 918 00:07:47.451 Data Units Written: 847 00:07:47.451 Host Read Commands: 36286 00:07:47.451 Host Write Commands: 35709 00:07:47.451 Controller Busy Time: 0 minutes 00:07:47.451 Power Cycles: 0 00:07:47.451 Power On Hours: 0 hours 00:07:47.451 Unsafe Shutdowns: 0 00:07:47.451 Unrecoverable Media Errors: 0 00:07:47.451 Lifetime Error Log Entries: 0 00:07:47.451 Warning Temperature Time: 0 minutes 00:07:47.451 Critical Temperature Time: 0 minutes 00:07:47.451 00:07:47.451 Number of Queues 00:07:47.451 ================ 00:07:47.451 Number of I/O Submission Queues: 64 00:07:47.451 Number of I/O Completion Queues: 64 00:07:47.451 00:07:47.451 ZNS Specific Controller Data 00:07:47.452 ============================ 00:07:47.452 Zone Append Size Limit: 0 00:07:47.452 00:07:47.452 00:07:47.452 Active Namespaces 00:07:47.452 ================= 00:07:47.452 Namespace ID:1 00:07:47.452 Error Recovery Timeout: Unlimited 00:07:47.452 Command Set Identifier: NVM (00h) 00:07:47.452 Deallocate: Supported 00:07:47.452 Deallocated/Unwritten Error: Supported 00:07:47.452 Deallocated Read Value: All 0x00 00:07:47.452 Deallocate in Write Zeroes: Not Supported 00:07:47.452 Deallocated Guard Field: 0xFFFF 00:07:47.452 Flush: Supported 00:07:47.452 Reservation: Not Supported 00:07:47.452 Namespace Sharing Capabilities: Multiple Controllers 00:07:47.452 Size (in LBAs): 262144 (1GiB) 00:07:47.452 Capacity (in LBAs): 262144 (1GiB) 00:07:47.452 Utilization (in LBAs): 262144 (1GiB) 00:07:47.452 Thin Provisioning: Not Supported 00:07:47.452 Per-NS Atomic Units: No 00:07:47.452 Maximum Single Source Range Length: 128 00:07:47.452 Maximum Copy Length: 128 00:07:47.452 Maximum Source Range Count: 128 00:07:47.452 NGUID/EUI64 Never Reused: No 00:07:47.452 Namespace Write Protected: No 00:07:47.452 Endurance group ID: 1 00:07:47.452 Number of LBA Formats: 8 00:07:47.452 Current LBA Format: LBA Format #04 00:07:47.452 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:47.452 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:47.452 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:47.452 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:47.452 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:47.452 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:47.452 LBA Format #06: Data Size: 4096 
Metadata Size: 16 00:07:47.452 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:47.452 00:07:47.452 Get Feature FDP: 00:07:47.452 ================ 00:07:47.452 Enabled: Yes 00:07:47.452 FDP configuration index: 0 00:07:47.452 00:07:47.452 FDP configurations log page 00:07:47.452 =========================== 00:07:47.452 Number of FDP configurations: 1 00:07:47.452 Version: 0 00:07:47.452 Size: 112 00:07:47.452 FDP Configuration Descriptor: 0 00:07:47.452 Descriptor Size: 96 00:07:47.452 Reclaim Group Identifier format: 2 00:07:47.452 FDP Volatile Write Cache: Not Present 00:07:47.452 FDP Configuration: Valid 00:07:47.452 Vendor Specific Size: 0 00:07:47.452 Number of Reclaim Groups: 2 00:07:47.452 Number of Reclaim Unit Handles: 8 00:07:47.452 Max Placement Identifiers: 128 00:07:47.452 Number of Namespaces Supported: 256 00:07:47.452 Reclaim Unit Nominal Size: 6000000 bytes 00:07:47.452 Estimated Reclaim Unit Time Limit: Not Reported 00:07:47.452 RUH Desc #000: RUH Type: Initially Isolated 00:07:47.452 RUH Desc #001: RUH Type: Initially Isolated 00:07:47.452 RUH Desc #002: RUH Type: Initially Isolated 00:07:47.452 RUH Desc #003: RUH Type: Initially Isolated 00:07:47.452 RUH Desc #004: RUH Type: Initially Isolated 00:07:47.452 RUH Desc #005: RUH Type: Initially Isolated 00:07:47.452 RUH Desc #006: RUH Type: Initially Isolated 00:07:47.452 RUH Desc #007: RUH Type: Initially Isolated 00:07:47.452 00:07:47.452 FDP reclaim unit handle usage log page 00:07:47.452 ====================================== 00:07:47.452 Number of Reclaim Unit Handles: 8 00:07:47.452 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:07:47.452 RUH Usage Desc #001: RUH Attributes: Unused 00:07:47.452 RUH Usage Desc #002: RUH Attributes: Unused 00:07:47.452 RUH Usage Desc #003: RUH Attributes: Unused 00:07:47.452 RUH Usage Desc #004: RUH Attributes: Unused 00:07:47.452 [2024-10-15 13:42:01.178478] nvme_ctrlr.c:3628:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0] process 62842 terminated unexpected 00:07:47.452 RUH Usage Desc #005: RUH Attributes: Unused 00:07:47.452 RUH Usage Desc #006: RUH Attributes: Unused 00:07:47.452 RUH Usage Desc #007: RUH Attributes: Unused 00:07:47.452 00:07:47.452 FDP statistics log page 00:07:47.452 ======================= 00:07:47.452 Host bytes with metadata written: 521867264 00:07:47.452 Media bytes with metadata written: 521961472 00:07:47.452 Media bytes erased: 0 00:07:47.452 00:07:47.452 FDP events log page 00:07:47.452 =================== 00:07:47.452 Number of FDP events: 0 00:07:47.452 00:07:47.452 NVM Specific Namespace Data 00:07:47.452 =========================== 00:07:47.452 Logical Block Storage Tag Mask: 0 00:07:47.452 Protection Information Capabilities: 00:07:47.452 16b Guard Protection Information Storage Tag Support: No 00:07:47.452 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:47.452 Storage Tag Check Read Support: No 00:07:47.452 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:47.452 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:47.452 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:47.452 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:47.452 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:47.452 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information
Format: 16b Guard PI 00:07:47.452 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:47.452 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:47.452 ===================================================== 00:07:47.452 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:47.452 ===================================================== 00:07:47.452 Controller Capabilities/Features 00:07:47.452 ================================ 00:07:47.452 Vendor ID: 1b36 00:07:47.452 Subsystem Vendor ID: 1af4 00:07:47.452 Serial Number: 12340 00:07:47.452 Model Number: QEMU NVMe Ctrl 00:07:47.452 Firmware Version: 8.0.0 00:07:47.452 Recommended Arb Burst: 6 00:07:47.452 IEEE OUI Identifier: 00 54 52 00:07:47.452 Multi-path I/O 00:07:47.452 May have multiple subsystem ports: No 00:07:47.452 May have multiple controllers: No 00:07:47.452 Associated with SR-IOV VF: No 00:07:47.452 Max Data Transfer Size: 524288 00:07:47.452 Max Number of Namespaces: 256 00:07:47.452 Max Number of I/O Queues: 64 00:07:47.452 NVMe Specification Version (VS): 1.4 00:07:47.452 NVMe Specification Version (Identify): 1.4 00:07:47.452 Maximum Queue Entries: 2048 00:07:47.452 Contiguous Queues Required: Yes 00:07:47.452 Arbitration Mechanisms Supported 00:07:47.452 Weighted Round Robin: Not Supported 00:07:47.452 Vendor Specific: Not Supported 00:07:47.452 Reset Timeout: 7500 ms 00:07:47.452 Doorbell Stride: 4 bytes 00:07:47.452 NVM Subsystem Reset: Not Supported 00:07:47.452 Command Sets Supported 00:07:47.452 NVM Command Set: Supported 00:07:47.452 Boot Partition: Not Supported 00:07:47.452 Memory Page Size Minimum: 4096 bytes 00:07:47.452 Memory Page Size Maximum: 65536 bytes 00:07:47.452 Persistent Memory Region: Not Supported 00:07:47.452 Optional Asynchronous Events Supported 00:07:47.452 Namespace Attribute Notices: Supported 00:07:47.452 Firmware Activation Notices: Not Supported 00:07:47.452 ANA Change Notices: Not Supported 00:07:47.452 PLE Aggregate Log Change Notices: Not Supported 00:07:47.452 LBA Status Info Alert Notices: Not Supported 00:07:47.452 EGE Aggregate Log Change Notices: Not Supported 00:07:47.452 Normal NVM Subsystem Shutdown event: Not Supported 00:07:47.452 Zone Descriptor Change Notices: Not Supported 00:07:47.452 Discovery Log Change Notices: Not Supported 00:07:47.452 Controller Attributes 00:07:47.452 128-bit Host Identifier: Not Supported 00:07:47.452 Non-Operational Permissive Mode: Not Supported 00:07:47.452 NVM Sets: Not Supported 00:07:47.452 Read Recovery Levels: Not Supported 00:07:47.452 Endurance Groups: Not Supported 00:07:47.452 Predictable Latency Mode: Not Supported 00:07:47.452 Traffic Based Keep ALive: Not Supported 00:07:47.452 Namespace Granularity: Not Supported 00:07:47.452 SQ Associations: Not Supported 00:07:47.452 UUID List: Not Supported 00:07:47.452 Multi-Domain Subsystem: Not Supported 00:07:47.452 Fixed Capacity Management: Not Supported 00:07:47.452 Variable Capacity Management: Not Supported 00:07:47.452 Delete Endurance Group: Not Supported 00:07:47.452 Delete NVM Set: Not Supported 00:07:47.452 Extended LBA Formats Supported: Supported 00:07:47.452 Flexible Data Placement Supported: Not Supported 00:07:47.452 00:07:47.452 Controller Memory Buffer Support 00:07:47.452 ================================ 00:07:47.452 Supported: No 00:07:47.452 00:07:47.452 Persistent Memory Region Support 00:07:47.452 ================================ 00:07:47.452 Supported: No 00:07:47.452 00:07:47.452 Admin 
Command Set Attributes 00:07:47.452 ============================ 00:07:47.452 Security Send/Receive: Not Supported 00:07:47.452 Format NVM: Supported 00:07:47.452 Firmware Activate/Download: Not Supported 00:07:47.452 Namespace Management: Supported 00:07:47.452 Device Self-Test: Not Supported 00:07:47.452 Directives: Supported 00:07:47.452 NVMe-MI: Not Supported 00:07:47.452 Virtualization Management: Not Supported 00:07:47.452 Doorbell Buffer Config: Supported 00:07:47.452 Get LBA Status Capability: Not Supported 00:07:47.452 Command & Feature Lockdown Capability: Not Supported 00:07:47.452 Abort Command Limit: 4 00:07:47.452 Async Event Request Limit: 4 00:07:47.452 Number of Firmware Slots: N/A 00:07:47.452 Firmware Slot 1 Read-Only: N/A 00:07:47.452 Firmware Activation Without Reset: N/A 00:07:47.452 Multiple Update Detection Support: N/A 00:07:47.452 Firmware Update Granularity: No Information Provided 00:07:47.452 Per-Namespace SMART Log: Yes 00:07:47.453 Asymmetric Namespace Access Log Page: Not Supported 00:07:47.453 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:07:47.453 Command Effects Log Page: Supported 00:07:47.453 Get Log Page Extended Data: Supported 00:07:47.453 Telemetry Log Pages: Not Supported 00:07:47.453 Persistent Event Log Pages: Not Supported 00:07:47.453 Supported Log Pages Log Page: May Support 00:07:47.453 Commands Supported & Effects Log Page: Not Supported 00:07:47.453 Feature Identifiers & Effects Log Page:May Support 00:07:47.453 NVMe-MI Commands & Effects Log Page: May Support 00:07:47.453 Data Area 4 for Telemetry Log: Not Supported 00:07:47.453 Error Log Page Entries Supported: 1 00:07:47.453 Keep Alive: Not Supported 00:07:47.453 00:07:47.453 NVM Command Set Attributes 00:07:47.453 ========================== 00:07:47.453 Submission Queue Entry Size 00:07:47.453 Max: 64 00:07:47.453 Min: 64 00:07:47.453 Completion Queue Entry Size 00:07:47.453 Max: 16 00:07:47.453 Min: 16 00:07:47.453 Number of Namespaces: 256 00:07:47.453 Compare Command: Supported 00:07:47.453 Write Uncorrectable Command: Not Supported 00:07:47.453 Dataset Management Command: Supported 00:07:47.453 Write Zeroes Command: Supported 00:07:47.453 Set Features Save Field: Supported 00:07:47.453 Reservations: Not Supported 00:07:47.453 Timestamp: Supported 00:07:47.453 Copy: Supported 00:07:47.453 Volatile Write Cache: Present 00:07:47.453 Atomic Write Unit (Normal): 1 00:07:47.453 Atomic Write Unit (PFail): 1 00:07:47.453 Atomic Compare & Write Unit: 1 00:07:47.453 Fused Compare & Write: Not Supported 00:07:47.453 Scatter-Gather List 00:07:47.453 SGL Command Set: Supported 00:07:47.453 SGL Keyed: Not Supported 00:07:47.453 SGL Bit Bucket Descriptor: Not Supported 00:07:47.453 SGL Metadata Pointer: Not Supported 00:07:47.453 Oversized SGL: Not Supported 00:07:47.453 SGL Metadata Address: Not Supported 00:07:47.453 SGL Offset: Not Supported 00:07:47.453 Transport SGL Data Block: Not Supported 00:07:47.453 Replay Protected Memory Block: Not Supported 00:07:47.453 00:07:47.453 Firmware Slot Information 00:07:47.453 ========================= 00:07:47.453 Active slot: 1 00:07:47.453 Slot 1 Firmware Revision: 1.0 00:07:47.453 00:07:47.453 00:07:47.453 Commands Supported and Effects 00:07:47.453 ============================== 00:07:47.453 Admin Commands 00:07:47.453 -------------- 00:07:47.453 Delete I/O Submission Queue (00h): Supported 00:07:47.453 Create I/O Submission Queue (01h): Supported 00:07:47.453 Get Log Page (02h): Supported 00:07:47.453 Delete I/O Completion Queue (04h): Supported 
00:07:47.453 Create I/O Completion Queue (05h): Supported 00:07:47.453 Identify (06h): Supported 00:07:47.453 Abort (08h): Supported 00:07:47.453 Set Features (09h): Supported 00:07:47.453 Get Features (0Ah): Supported 00:07:47.453 Asynchronous Event Request (0Ch): Supported 00:07:47.453 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:47.453 Directive Send (19h): Supported 00:07:47.453 Directive Receive (1Ah): Supported 00:07:47.453 Virtualization Management (1Ch): Supported 00:07:47.453 Doorbell Buffer Config (7Ch): Supported 00:07:47.453 Format NVM (80h): Supported LBA-Change 00:07:47.453 I/O Commands 00:07:47.453 ------------ 00:07:47.453 Flush (00h): Supported LBA-Change 00:07:47.453 Write (01h): Supported LBA-Change 00:07:47.453 Read (02h): Supported 00:07:47.453 Compare (05h): Supported 00:07:47.453 Write Zeroes (08h): Supported LBA-Change 00:07:47.453 Dataset Management (09h): Supported LBA-Change 00:07:47.453 Unknown (0Ch): Supported 00:07:47.453 Unknown (12h): Supported 00:07:47.453 Copy (19h): Supported LBA-Change 00:07:47.453 Unknown (1Dh): Supported LBA-Change 00:07:47.453 00:07:47.453 Error Log 00:07:47.453 ========= 00:07:47.453 00:07:47.453 Arbitration 00:07:47.453 =========== 00:07:47.453 Arbitration Burst: no limit 00:07:47.453 00:07:47.453 Power Management 00:07:47.453 ================ 00:07:47.453 Number of Power States: 1 00:07:47.453 Current Power State: Power State #0 00:07:47.453 Power State #0: 00:07:47.453 Max Power: 25.00 W 00:07:47.453 Non-Operational State: Operational 00:07:47.453 Entry Latency: 16 microseconds 00:07:47.453 Exit Latency: 4 microseconds 00:07:47.453 Relative Read Throughput: 0 00:07:47.453 Relative Read Latency: 0 00:07:47.453 Relative Write Throughput: 0 00:07:47.453 Relative Write Latency: 0 00:07:47.453 Idle Power: Not Reported 00:07:47.453 Active Power: Not Reported 00:07:47.453 Non-Operational Permissive Mode: Not Supported 00:07:47.453 00:07:47.453 Health Information 00:07:47.453 ================== 00:07:47.453 Critical Warnings: 00:07:47.453 Available Spare Space: OK 00:07:47.453 Temperature: OK 00:07:47.453 Device Reliability: OK 00:07:47.453 Read Only: No 00:07:47.453 Volatile Memory Backup: OK 00:07:47.453 Current Temperature: 323 Kelvin (50 Celsius) 00:07:47.453 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:47.453 Available Spare: 0% 00:07:47.453 Available Spare Threshold: 0% 00:07:47.453 Life Percentage Used: 0% 00:07:47.453 Data Units Read: 654 00:07:47.453 Data Units Written: 582 00:07:47.453 Host Read Commands: 33808 00:07:47.453 Host Write Commands: 33594 00:07:47.453 Controller Busy Time: 0 minutes 00:07:47.453 Power Cycles: 0 00:07:47.453 Power On Hours: 0 hours 00:07:47.453 Unsafe Shutdowns: 0 00:07:47.453 Unrecoverable Media Errors: 0 00:07:47.453 Lifetime Error Log Entries: 0 00:07:47.453 Warning Temperature Time: 0 minutes 00:07:47.453 Critical Temperature Time: 0 minutes 00:07:47.453 00:07:47.453 Number of Queues 00:07:47.453 ================ 00:07:47.453 Number of I/O Submission Queues: 64 00:07:47.453 Number of I/O Completion Queues: 64 00:07:47.453 00:07:47.453 ZNS Specific Controller Data 00:07:47.453 ============================ 00:07:47.453 Zone Append Size Limit: 0 00:07:47.453 00:07:47.453 00:07:47.453 Active Namespaces 00:07:47.453 ================= 00:07:47.453 Namespace ID:1 00:07:47.453 Error Recovery Timeout: Unlimited 00:07:47.453 Command Set Identifier: NVM (00h) 00:07:47.453 Deallocate: Supported 00:07:47.453 Deallocated/Unwritten Error: Supported 00:07:47.453 Deallocated Read Value: 
All 0x00 00:07:47.453 Deallocate in Write Zeroes: Not Supported 00:07:47.453 Deallocated Guard Field: 0xFFFF 00:07:47.453 Flush: Supported 00:07:47.453 Reservation: Not Supported 00:07:47.453 Metadata Transferred as: Separate Metadata Buffer 00:07:47.453 Namespace Sharing Capabilities: Private 00:07:47.453 Size (in LBAs): 1548666 (5GiB) 00:07:47.453 Capacity (in LBAs): 1548666 (5GiB) 00:07:47.453 Utilization (in LBAs): 1548666 (5GiB) 00:07:47.453 Thin Provisioning: Not Supported 00:07:47.453 Per-NS Atomic Units: No 00:07:47.453 Maximum Single Source Range Length: 128 00:07:47.453 Maximum Copy Length: 128 00:07:47.453 Maximum Source Range Count: 128 00:07:47.453 NGUID/EUI64 Never Reused: No 00:07:47.453 Namespace Write Protected: No 00:07:47.453 Number of LBA Formats: 8 00:07:47.453 Current LBA Format: LBA Format #07 00:07:47.453 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:47.453 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:47.453 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:47.453 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:47.453 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:47.453 [2024-10-15 13:42:01.179318] nvme_ctrlr.c:3628:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0] process 62842 terminated unexpected 00:07:47.453 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:47.453 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:47.453 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:47.453 00:07:47.453 NVM Specific Namespace Data 00:07:47.453 =========================== 00:07:47.453 Logical Block Storage Tag Mask: 0 00:07:47.453 Protection Information Capabilities: 00:07:47.453 16b Guard Protection Information Storage Tag Support: No 00:07:47.453 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:47.453 Storage Tag Check Read Support: No 00:07:47.453 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:47.453 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:47.453 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:47.453 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:47.453 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:47.453 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:47.453 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:47.453 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:47.453 ===================================================== 00:07:47.453 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:47.453 ===================================================== 00:07:47.453 Controller Capabilities/Features 00:07:47.453 ================================ 00:07:47.453 Vendor ID: 1b36 00:07:47.453 Subsystem Vendor ID: 1af4 00:07:47.453 Serial Number: 12342 00:07:47.453 Model Number: QEMU NVMe Ctrl 00:07:47.453 Firmware Version: 8.0.0 00:07:47.453 Recommended Arb Burst: 6 00:07:47.453 IEEE OUI Identifier: 00 54 52 00:07:47.453 Multi-path I/O 00:07:47.453 May have multiple subsystem ports: No 00:07:47.453 May have multiple controllers: No 00:07:47.453 Associated with SR-IOV VF: No 00:07:47.453 Max Data Transfer Size: 524288 00:07:47.453 Max Number of Namespaces: 256 00:07:47.453 Max Number of I/O Queues: 64
00:07:47.453 NVMe Specification Version (VS): 1.4 00:07:47.453 NVMe Specification Version (Identify): 1.4 00:07:47.454 Maximum Queue Entries: 2048 00:07:47.454 Contiguous Queues Required: Yes 00:07:47.454 Arbitration Mechanisms Supported 00:07:47.454 Weighted Round Robin: Not Supported 00:07:47.454 Vendor Specific: Not Supported 00:07:47.454 Reset Timeout: 7500 ms 00:07:47.454 Doorbell Stride: 4 bytes 00:07:47.454 NVM Subsystem Reset: Not Supported 00:07:47.454 Command Sets Supported 00:07:47.454 NVM Command Set: Supported 00:07:47.454 Boot Partition: Not Supported 00:07:47.454 Memory Page Size Minimum: 4096 bytes 00:07:47.454 Memory Page Size Maximum: 65536 bytes 00:07:47.454 Persistent Memory Region: Not Supported 00:07:47.454 Optional Asynchronous Events Supported 00:07:47.454 Namespace Attribute Notices: Supported 00:07:47.454 Firmware Activation Notices: Not Supported 00:07:47.454 ANA Change Notices: Not Supported 00:07:47.454 PLE Aggregate Log Change Notices: Not Supported 00:07:47.454 LBA Status Info Alert Notices: Not Supported 00:07:47.454 EGE Aggregate Log Change Notices: Not Supported 00:07:47.454 Normal NVM Subsystem Shutdown event: Not Supported 00:07:47.454 Zone Descriptor Change Notices: Not Supported 00:07:47.454 Discovery Log Change Notices: Not Supported 00:07:47.454 Controller Attributes 00:07:47.454 128-bit Host Identifier: Not Supported 00:07:47.454 Non-Operational Permissive Mode: Not Supported 00:07:47.454 NVM Sets: Not Supported 00:07:47.454 Read Recovery Levels: Not Supported 00:07:47.454 Endurance Groups: Not Supported 00:07:47.454 Predictable Latency Mode: Not Supported 00:07:47.454 Traffic Based Keep ALive: Not Supported 00:07:47.454 Namespace Granularity: Not Supported 00:07:47.454 SQ Associations: Not Supported 00:07:47.454 UUID List: Not Supported 00:07:47.454 Multi-Domain Subsystem: Not Supported 00:07:47.454 Fixed Capacity Management: Not Supported 00:07:47.454 Variable Capacity Management: Not Supported 00:07:47.454 Delete Endurance Group: Not Supported 00:07:47.454 Delete NVM Set: Not Supported 00:07:47.454 Extended LBA Formats Supported: Supported 00:07:47.454 Flexible Data Placement Supported: Not Supported 00:07:47.454 00:07:47.454 Controller Memory Buffer Support 00:07:47.454 ================================ 00:07:47.454 Supported: No 00:07:47.454 00:07:47.454 Persistent Memory Region Support 00:07:47.454 ================================ 00:07:47.454 Supported: No 00:07:47.454 00:07:47.454 Admin Command Set Attributes 00:07:47.454 ============================ 00:07:47.454 Security Send/Receive: Not Supported 00:07:47.454 Format NVM: Supported 00:07:47.454 Firmware Activate/Download: Not Supported 00:07:47.454 Namespace Management: Supported 00:07:47.454 Device Self-Test: Not Supported 00:07:47.454 Directives: Supported 00:07:47.454 NVMe-MI: Not Supported 00:07:47.454 Virtualization Management: Not Supported 00:07:47.454 Doorbell Buffer Config: Supported 00:07:47.454 Get LBA Status Capability: Not Supported 00:07:47.454 Command & Feature Lockdown Capability: Not Supported 00:07:47.454 Abort Command Limit: 4 00:07:47.454 Async Event Request Limit: 4 00:07:47.454 Number of Firmware Slots: N/A 00:07:47.454 Firmware Slot 1 Read-Only: N/A 00:07:47.454 Firmware Activation Without Reset: N/A 00:07:47.454 Multiple Update Detection Support: N/A 00:07:47.454 Firmware Update Granularity: No Information Provided 00:07:47.454 Per-Namespace SMART Log: Yes 00:07:47.454 Asymmetric Namespace Access Log Page: Not Supported 00:07:47.454 Subsystem NQN: 
nqn.2019-08.org.qemu:12342 00:07:47.454 Command Effects Log Page: Supported 00:07:47.454 Get Log Page Extended Data: Supported 00:07:47.454 Telemetry Log Pages: Not Supported 00:07:47.454 Persistent Event Log Pages: Not Supported 00:07:47.454 Supported Log Pages Log Page: May Support 00:07:47.454 Commands Supported & Effects Log Page: Not Supported 00:07:47.454 Feature Identifiers & Effects Log Page:May Support 00:07:47.454 NVMe-MI Commands & Effects Log Page: May Support 00:07:47.454 Data Area 4 for Telemetry Log: Not Supported 00:07:47.454 Error Log Page Entries Supported: 1 00:07:47.454 Keep Alive: Not Supported 00:07:47.454 00:07:47.454 NVM Command Set Attributes 00:07:47.454 ========================== 00:07:47.454 Submission Queue Entry Size 00:07:47.454 Max: 64 00:07:47.454 Min: 64 00:07:47.454 Completion Queue Entry Size 00:07:47.454 Max: 16 00:07:47.454 Min: 16 00:07:47.454 Number of Namespaces: 256 00:07:47.454 Compare Command: Supported 00:07:47.454 Write Uncorrectable Command: Not Supported 00:07:47.454 Dataset Management Command: Supported 00:07:47.454 Write Zeroes Command: Supported 00:07:47.454 Set Features Save Field: Supported 00:07:47.454 Reservations: Not Supported 00:07:47.454 Timestamp: Supported 00:07:47.454 Copy: Supported 00:07:47.454 Volatile Write Cache: Present 00:07:47.454 Atomic Write Unit (Normal): 1 00:07:47.454 Atomic Write Unit (PFail): 1 00:07:47.454 Atomic Compare & Write Unit: 1 00:07:47.454 Fused Compare & Write: Not Supported 00:07:47.454 Scatter-Gather List 00:07:47.454 SGL Command Set: Supported 00:07:47.454 SGL Keyed: Not Supported 00:07:47.454 SGL Bit Bucket Descriptor: Not Supported 00:07:47.454 SGL Metadata Pointer: Not Supported 00:07:47.454 Oversized SGL: Not Supported 00:07:47.454 SGL Metadata Address: Not Supported 00:07:47.454 SGL Offset: Not Supported 00:07:47.454 Transport SGL Data Block: Not Supported 00:07:47.454 Replay Protected Memory Block: Not Supported 00:07:47.454 00:07:47.454 Firmware Slot Information 00:07:47.454 ========================= 00:07:47.454 Active slot: 1 00:07:47.454 Slot 1 Firmware Revision: 1.0 00:07:47.454 00:07:47.454 00:07:47.454 Commands Supported and Effects 00:07:47.454 ============================== 00:07:47.454 Admin Commands 00:07:47.454 -------------- 00:07:47.454 Delete I/O Submission Queue (00h): Supported 00:07:47.454 Create I/O Submission Queue (01h): Supported 00:07:47.454 Get Log Page (02h): Supported 00:07:47.454 Delete I/O Completion Queue (04h): Supported 00:07:47.454 Create I/O Completion Queue (05h): Supported 00:07:47.454 Identify (06h): Supported 00:07:47.454 Abort (08h): Supported 00:07:47.454 Set Features (09h): Supported 00:07:47.454 Get Features (0Ah): Supported 00:07:47.454 Asynchronous Event Request (0Ch): Supported 00:07:47.454 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:47.454 Directive Send (19h): Supported 00:07:47.454 Directive Receive (1Ah): Supported 00:07:47.454 Virtualization Management (1Ch): Supported 00:07:47.454 Doorbell Buffer Config (7Ch): Supported 00:07:47.454 Format NVM (80h): Supported LBA-Change 00:07:47.454 I/O Commands 00:07:47.454 ------------ 00:07:47.454 Flush (00h): Supported LBA-Change 00:07:47.454 Write (01h): Supported LBA-Change 00:07:47.454 Read (02h): Supported 00:07:47.454 Compare (05h): Supported 00:07:47.454 Write Zeroes (08h): Supported LBA-Change 00:07:47.454 Dataset Management (09h): Supported LBA-Change 00:07:47.454 Unknown (0Ch): Supported 00:07:47.454 Unknown (12h): Supported 00:07:47.454 Copy (19h): Supported LBA-Change 
00:07:47.454 Unknown (1Dh): Supported LBA-Change 00:07:47.454 00:07:47.454 Error Log 00:07:47.454 ========= 00:07:47.454 00:07:47.454 Arbitration 00:07:47.454 =========== 00:07:47.454 Arbitration Burst: no limit 00:07:47.454 00:07:47.454 Power Management 00:07:47.454 ================ 00:07:47.454 Number of Power States: 1 00:07:47.454 Current Power State: Power State #0 00:07:47.454 Power State #0: 00:07:47.454 Max Power: 25.00 W 00:07:47.454 Non-Operational State: Operational 00:07:47.454 Entry Latency: 16 microseconds 00:07:47.454 Exit Latency: 4 microseconds 00:07:47.454 Relative Read Throughput: 0 00:07:47.454 Relative Read Latency: 0 00:07:47.454 Relative Write Throughput: 0 00:07:47.454 Relative Write Latency: 0 00:07:47.454 Idle Power: Not Reported 00:07:47.455 Active Power: Not Reported 00:07:47.455 Non-Operational Permissive Mode: Not Supported 00:07:47.455 00:07:47.455 Health Information 00:07:47.455 ================== 00:07:47.455 Critical Warnings: 00:07:47.455 Available Spare Space: OK 00:07:47.455 Temperature: OK 00:07:47.455 Device Reliability: OK 00:07:47.455 Read Only: No 00:07:47.455 Volatile Memory Backup: OK 00:07:47.455 Current Temperature: 323 Kelvin (50 Celsius) 00:07:47.455 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:47.455 Available Spare: 0% 00:07:47.455 Available Spare Threshold: 0% 00:07:47.455 Life Percentage Used: 0% 00:07:47.455 Data Units Read: 2182 00:07:47.455 Data Units Written: 1969 00:07:47.455 Host Read Commands: 104289 00:07:47.455 Host Write Commands: 102558 00:07:47.455 Controller Busy Time: 0 minutes 00:07:47.455 Power Cycles: 0 00:07:47.455 Power On Hours: 0 hours 00:07:47.455 Unsafe Shutdowns: 0 00:07:47.455 Unrecoverable Media Errors: 0 00:07:47.455 Lifetime Error Log Entries: 0 00:07:47.455 Warning Temperature Time: 0 minutes 00:07:47.455 Critical Temperature Time: 0 minutes 00:07:47.455 00:07:47.455 Number of Queues 00:07:47.455 ================ 00:07:47.455 Number of I/O Submission Queues: 64 00:07:47.455 Number of I/O Completion Queues: 64 00:07:47.455 00:07:47.455 ZNS Specific Controller Data 00:07:47.455 ============================ 00:07:47.455 Zone Append Size Limit: 0 00:07:47.455 00:07:47.455 00:07:47.455 Active Namespaces 00:07:47.455 ================= 00:07:47.455 Namespace ID:1 00:07:47.455 Error Recovery Timeout: Unlimited 00:07:47.455 Command Set Identifier: NVM (00h) 00:07:47.455 Deallocate: Supported 00:07:47.455 Deallocated/Unwritten Error: Supported 00:07:47.455 Deallocated Read Value: All 0x00 00:07:47.455 Deallocate in Write Zeroes: Not Supported 00:07:47.455 Deallocated Guard Field: 0xFFFF 00:07:47.455 Flush: Supported 00:07:47.455 Reservation: Not Supported 00:07:47.455 Namespace Sharing Capabilities: Private 00:07:47.455 Size (in LBAs): 1048576 (4GiB) 00:07:47.455 Capacity (in LBAs): 1048576 (4GiB) 00:07:47.455 Utilization (in LBAs): 1048576 (4GiB) 00:07:47.455 Thin Provisioning: Not Supported 00:07:47.455 Per-NS Atomic Units: No 00:07:47.455 Maximum Single Source Range Length: 128 00:07:47.455 Maximum Copy Length: 128 00:07:47.455 Maximum Source Range Count: 128 00:07:47.455 NGUID/EUI64 Never Reused: No 00:07:47.455 Namespace Write Protected: No 00:07:47.455 Number of LBA Formats: 8 00:07:47.455 Current LBA Format: LBA Format #04 00:07:47.455 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:47.455 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:47.455 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:47.455 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:47.455 LBA Format #04: Data Size: 
4096 Metadata Size: 0 00:07:47.455 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:47.455 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:47.455 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:47.455 00:07:47.455 NVM Specific Namespace Data 00:07:47.455 =========================== 00:07:47.455 Logical Block Storage Tag Mask: 0 00:07:47.455 Protection Information Capabilities: 00:07:47.455 16b Guard Protection Information Storage Tag Support: No 00:07:47.455 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:47.455 Storage Tag Check Read Support: No 00:07:47.455 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:47.455 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:47.455 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:47.455 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:47.455 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:47.455 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:47.455 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:47.455 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:47.455 Namespace ID:2 00:07:47.455 Error Recovery Timeout: Unlimited 00:07:47.455 Command Set Identifier: NVM (00h) 00:07:47.455 Deallocate: Supported 00:07:47.455 Deallocated/Unwritten Error: Supported 00:07:47.455 Deallocated Read Value: All 0x00 00:07:47.455 Deallocate in Write Zeroes: Not Supported 00:07:47.455 Deallocated Guard Field: 0xFFFF 00:07:47.455 Flush: Supported 00:07:47.455 Reservation: Not Supported 00:07:47.455 Namespace Sharing Capabilities: Private 00:07:47.455 Size (in LBAs): 1048576 (4GiB) 00:07:47.455 Capacity (in LBAs): 1048576 (4GiB) 00:07:47.455 Utilization (in LBAs): 1048576 (4GiB) 00:07:47.455 Thin Provisioning: Not Supported 00:07:47.455 Per-NS Atomic Units: No 00:07:47.455 Maximum Single Source Range Length: 128 00:07:47.455 Maximum Copy Length: 128 00:07:47.455 Maximum Source Range Count: 128 00:07:47.455 NGUID/EUI64 Never Reused: No 00:07:47.455 Namespace Write Protected: No 00:07:47.455 Number of LBA Formats: 8 00:07:47.455 Current LBA Format: LBA Format #04 00:07:47.455 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:47.455 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:47.455 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:47.455 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:47.455 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:47.455 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:47.455 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:47.455 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:47.455 00:07:47.455 NVM Specific Namespace Data 00:07:47.455 =========================== 00:07:47.455 Logical Block Storage Tag Mask: 0 00:07:47.455 Protection Information Capabilities: 00:07:47.455 16b Guard Protection Information Storage Tag Support: No 00:07:47.455 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:47.455 Storage Tag Check Read Support: No 00:07:47.455 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:47.455 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard 
PI 00:07:47.455 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:47.455 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:47.455 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:47.455 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:47.455 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:47.455 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:47.455 Namespace ID:3 00:07:47.455 Error Recovery Timeout: Unlimited 00:07:47.455 Command Set Identifier: NVM (00h) 00:07:47.455 Deallocate: Supported 00:07:47.455 Deallocated/Unwritten Error: Supported 00:07:47.455 Deallocated Read Value: All 0x00 00:07:47.455 Deallocate in Write Zeroes: Not Supported 00:07:47.455 Deallocated Guard Field: 0xFFFF 00:07:47.455 Flush: Supported 00:07:47.455 Reservation: Not Supported 00:07:47.455 Namespace Sharing Capabilities: Private 00:07:47.455 Size (in LBAs): 1048576 (4GiB) 00:07:47.455 Capacity (in LBAs): 1048576 (4GiB) 00:07:47.455 Utilization (in LBAs): 1048576 (4GiB) 00:07:47.455 Thin Provisioning: Not Supported 00:07:47.455 Per-NS Atomic Units: No 00:07:47.455 Maximum Single Source Range Length: 128 00:07:47.455 Maximum Copy Length: 128 00:07:47.455 Maximum Source Range Count: 128 00:07:47.455 NGUID/EUI64 Never Reused: No 00:07:47.455 Namespace Write Protected: No 00:07:47.455 Number of LBA Formats: 8 00:07:47.455 Current LBA Format: LBA Format #04 00:07:47.455 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:47.455 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:47.455 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:47.455 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:47.455 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:47.455 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:47.455 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:47.455 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:47.455 00:07:47.455 NVM Specific Namespace Data 00:07:47.455 =========================== 00:07:47.455 Logical Block Storage Tag Mask: 0 00:07:47.455 Protection Information Capabilities: 00:07:47.455 16b Guard Protection Information Storage Tag Support: No 00:07:47.455 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:47.455 Storage Tag Check Read Support: No 00:07:47.455 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:47.455 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:47.455 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:47.455 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:47.455 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:47.455 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:47.455 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:47.455 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:47.455 13:42:01 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:47.455 13:42:01 nvme.nvme_identify -- nvme/nvme.sh@16 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:07:47.718 ===================================================== 00:07:47.718 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:47.718 ===================================================== 00:07:47.718 Controller Capabilities/Features 00:07:47.718 ================================ 00:07:47.718 Vendor ID: 1b36 00:07:47.718 Subsystem Vendor ID: 1af4 00:07:47.718 Serial Number: 12340 00:07:47.718 Model Number: QEMU NVMe Ctrl 00:07:47.718 Firmware Version: 8.0.0 00:07:47.718 Recommended Arb Burst: 6 00:07:47.718 IEEE OUI Identifier: 00 54 52 00:07:47.718 Multi-path I/O 00:07:47.718 May have multiple subsystem ports: No 00:07:47.718 May have multiple controllers: No 00:07:47.718 Associated with SR-IOV VF: No 00:07:47.718 Max Data Transfer Size: 524288 00:07:47.718 Max Number of Namespaces: 256 00:07:47.718 Max Number of I/O Queues: 64 00:07:47.718 NVMe Specification Version (VS): 1.4 00:07:47.718 NVMe Specification Version (Identify): 1.4 00:07:47.718 Maximum Queue Entries: 2048 00:07:47.718 Contiguous Queues Required: Yes 00:07:47.718 Arbitration Mechanisms Supported 00:07:47.718 Weighted Round Robin: Not Supported 00:07:47.718 Vendor Specific: Not Supported 00:07:47.718 Reset Timeout: 7500 ms 00:07:47.718 Doorbell Stride: 4 bytes 00:07:47.718 NVM Subsystem Reset: Not Supported 00:07:47.718 Command Sets Supported 00:07:47.718 NVM Command Set: Supported 00:07:47.718 Boot Partition: Not Supported 00:07:47.718 Memory Page Size Minimum: 4096 bytes 00:07:47.718 Memory Page Size Maximum: 65536 bytes 00:07:47.718 Persistent Memory Region: Not Supported 00:07:47.718 Optional Asynchronous Events Supported 00:07:47.718 Namespace Attribute Notices: Supported 00:07:47.718 Firmware Activation Notices: Not Supported 00:07:47.718 ANA Change Notices: Not Supported 00:07:47.718 PLE Aggregate Log Change Notices: Not Supported 00:07:47.718 LBA Status Info Alert Notices: Not Supported 00:07:47.718 EGE Aggregate Log Change Notices: Not Supported 00:07:47.718 Normal NVM Subsystem Shutdown event: Not Supported 00:07:47.718 Zone Descriptor Change Notices: Not Supported 00:07:47.718 Discovery Log Change Notices: Not Supported 00:07:47.718 Controller Attributes 00:07:47.718 128-bit Host Identifier: Not Supported 00:07:47.718 Non-Operational Permissive Mode: Not Supported 00:07:47.718 NVM Sets: Not Supported 00:07:47.718 Read Recovery Levels: Not Supported 00:07:47.718 Endurance Groups: Not Supported 00:07:47.718 Predictable Latency Mode: Not Supported 00:07:47.718 Traffic Based Keep Alive: Not Supported 00:07:47.719 Namespace Granularity: Not Supported 00:07:47.719 SQ Associations: Not Supported 00:07:47.719 UUID List: Not Supported 00:07:47.719 Multi-Domain Subsystem: Not Supported 00:07:47.719 Fixed Capacity Management: Not Supported 00:07:47.719 Variable Capacity Management: Not Supported 00:07:47.719 Delete Endurance Group: Not Supported 00:07:47.719 Delete NVM Set: Not Supported 00:07:47.719 Extended LBA Formats Supported: Supported 00:07:47.719 Flexible Data Placement Supported: Not Supported 00:07:47.719 00:07:47.719 Controller Memory Buffer Support 00:07:47.719 ================================ 00:07:47.719 Supported: No 00:07:47.719 00:07:47.719 Persistent Memory Region Support 00:07:47.719 ================================ 00:07:47.719 Supported: No 00:07:47.719 00:07:47.719 Admin Command Set Attributes 00:07:47.719 ============================ 00:07:47.719 Security Send/Receive: Not Supported 00:07:47.719 
Format NVM: Supported 00:07:47.719 Firmware Activate/Download: Not Supported 00:07:47.719 Namespace Management: Supported 00:07:47.719 Device Self-Test: Not Supported 00:07:47.719 Directives: Supported 00:07:47.719 NVMe-MI: Not Supported 00:07:47.719 Virtualization Management: Not Supported 00:07:47.719 Doorbell Buffer Config: Supported 00:07:47.719 Get LBA Status Capability: Not Supported 00:07:47.719 Command & Feature Lockdown Capability: Not Supported 00:07:47.719 Abort Command Limit: 4 00:07:47.719 Async Event Request Limit: 4 00:07:47.719 Number of Firmware Slots: N/A 00:07:47.719 Firmware Slot 1 Read-Only: N/A 00:07:47.719 Firmware Activation Without Reset: N/A 00:07:47.719 Multiple Update Detection Support: N/A 00:07:47.719 Firmware Update Granularity: No Information Provided 00:07:47.719 Per-Namespace SMART Log: Yes 00:07:47.719 Asymmetric Namespace Access Log Page: Not Supported 00:07:47.719 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:07:47.719 Command Effects Log Page: Supported 00:07:47.719 Get Log Page Extended Data: Supported 00:07:47.719 Telemetry Log Pages: Not Supported 00:07:47.719 Persistent Event Log Pages: Not Supported 00:07:47.719 Supported Log Pages Log Page: May Support 00:07:47.719 Commands Supported & Effects Log Page: Not Supported 00:07:47.719 Feature Identifiers & Effects Log Page: May Support 00:07:47.719 NVMe-MI Commands & Effects Log Page: May Support 00:07:47.719 Data Area 4 for Telemetry Log: Not Supported 00:07:47.719 Error Log Page Entries Supported: 1 00:07:47.719 Keep Alive: Not Supported 00:07:47.719 00:07:47.719 NVM Command Set Attributes 00:07:47.719 ========================== 00:07:47.719 Submission Queue Entry Size 00:07:47.719 Max: 64 00:07:47.719 Min: 64 00:07:47.719 Completion Queue Entry Size 00:07:47.719 Max: 16 00:07:47.719 Min: 16 00:07:47.719 Number of Namespaces: 256 00:07:47.719 Compare Command: Supported 00:07:47.719 Write Uncorrectable Command: Not Supported 00:07:47.719 Dataset Management Command: Supported 00:07:47.719 Write Zeroes Command: Supported 00:07:47.719 Set Features Save Field: Supported 00:07:47.719 Reservations: Not Supported 00:07:47.719 Timestamp: Supported 00:07:47.719 Copy: Supported 00:07:47.719 Volatile Write Cache: Present 00:07:47.719 Atomic Write Unit (Normal): 1 00:07:47.719 Atomic Write Unit (PFail): 1 00:07:47.719 Atomic Compare & Write Unit: 1 00:07:47.719 Fused Compare & Write: Not Supported 00:07:47.719 Scatter-Gather List 00:07:47.719 SGL Command Set: Supported 00:07:47.719 SGL Keyed: Not Supported 00:07:47.719 SGL Bit Bucket Descriptor: Not Supported 00:07:47.719 SGL Metadata Pointer: Not Supported 00:07:47.719 Oversized SGL: Not Supported 00:07:47.719 SGL Metadata Address: Not Supported 00:07:47.719 SGL Offset: Not Supported 00:07:47.719 Transport SGL Data Block: Not Supported 00:07:47.719 Replay Protected Memory Block: Not Supported 00:07:47.719 00:07:47.719 Firmware Slot Information 00:07:47.719 ========================= 00:07:47.719 Active slot: 1 00:07:47.719 Slot 1 Firmware Revision: 1.0 00:07:47.719 00:07:47.719 00:07:47.719 Commands Supported and Effects 00:07:47.719 ============================== 00:07:47.719 Admin Commands 00:07:47.719 -------------- 00:07:47.719 Delete I/O Submission Queue (00h): Supported 00:07:47.719 Create I/O Submission Queue (01h): Supported 00:07:47.719 Get Log Page (02h): Supported 00:07:47.719 Delete I/O Completion Queue (04h): Supported 00:07:47.719 Create I/O Completion Queue (05h): Supported 00:07:47.719 Identify (06h): Supported 00:07:47.719 Abort (08h): Supported 
00:07:47.719 Set Features (09h): Supported 00:07:47.719 Get Features (0Ah): Supported 00:07:47.719 Asynchronous Event Request (0Ch): Supported 00:07:47.719 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:47.719 Directive Send (19h): Supported 00:07:47.719 Directive Receive (1Ah): Supported 00:07:47.719 Virtualization Management (1Ch): Supported 00:07:47.719 Doorbell Buffer Config (7Ch): Supported 00:07:47.719 Format NVM (80h): Supported LBA-Change 00:07:47.719 I/O Commands 00:07:47.719 ------------ 00:07:47.719 Flush (00h): Supported LBA-Change 00:07:47.719 Write (01h): Supported LBA-Change 00:07:47.719 Read (02h): Supported 00:07:47.719 Compare (05h): Supported 00:07:47.719 Write Zeroes (08h): Supported LBA-Change 00:07:47.719 Dataset Management (09h): Supported LBA-Change 00:07:47.719 Unknown (0Ch): Supported 00:07:47.719 Unknown (12h): Supported 00:07:47.719 Copy (19h): Supported LBA-Change 00:07:47.719 Unknown (1Dh): Supported LBA-Change 00:07:47.719 00:07:47.719 Error Log 00:07:47.719 ========= 00:07:47.719 00:07:47.719 Arbitration 00:07:47.719 =========== 00:07:47.719 Arbitration Burst: no limit 00:07:47.719 00:07:47.719 Power Management 00:07:47.719 ================ 00:07:47.719 Number of Power States: 1 00:07:47.719 Current Power State: Power State #0 00:07:47.719 Power State #0: 00:07:47.719 Max Power: 25.00 W 00:07:47.719 Non-Operational State: Operational 00:07:47.719 Entry Latency: 16 microseconds 00:07:47.719 Exit Latency: 4 microseconds 00:07:47.719 Relative Read Throughput: 0 00:07:47.719 Relative Read Latency: 0 00:07:47.719 Relative Write Throughput: 0 00:07:47.719 Relative Write Latency: 0 00:07:47.719 Idle Power: Not Reported 00:07:47.719 Active Power: Not Reported 00:07:47.719 Non-Operational Permissive Mode: Not Supported 00:07:47.719 00:07:47.719 Health Information 00:07:47.719 ================== 00:07:47.719 Critical Warnings: 00:07:47.719 Available Spare Space: OK 00:07:47.719 Temperature: OK 00:07:47.719 Device Reliability: OK 00:07:47.719 Read Only: No 00:07:47.719 Volatile Memory Backup: OK 00:07:47.719 Current Temperature: 323 Kelvin (50 Celsius) 00:07:47.719 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:47.719 Available Spare: 0% 00:07:47.719 Available Spare Threshold: 0% 00:07:47.719 Life Percentage Used: 0% 00:07:47.719 Data Units Read: 654 00:07:47.719 Data Units Written: 582 00:07:47.719 Host Read Commands: 33808 00:07:47.719 Host Write Commands: 33594 00:07:47.719 Controller Busy Time: 0 minutes 00:07:47.719 Power Cycles: 0 00:07:47.719 Power On Hours: 0 hours 00:07:47.719 Unsafe Shutdowns: 0 00:07:47.719 Unrecoverable Media Errors: 0 00:07:47.719 Lifetime Error Log Entries: 0 00:07:47.719 Warning Temperature Time: 0 minutes 00:07:47.719 Critical Temperature Time: 0 minutes 00:07:47.719 00:07:47.719 Number of Queues 00:07:47.719 ================ 00:07:47.719 Number of I/O Submission Queues: 64 00:07:47.719 Number of I/O Completion Queues: 64 00:07:47.719 00:07:47.719 ZNS Specific Controller Data 00:07:47.719 ============================ 00:07:47.719 Zone Append Size Limit: 0 00:07:47.719 00:07:47.719 00:07:47.719 Active Namespaces 00:07:47.719 ================= 00:07:47.719 Namespace ID:1 00:07:47.719 Error Recovery Timeout: Unlimited 00:07:47.719 Command Set Identifier: NVM (00h) 00:07:47.719 Deallocate: Supported 00:07:47.719 Deallocated/Unwritten Error: Supported 00:07:47.719 Deallocated Read Value: All 0x00 00:07:47.719 Deallocate in Write Zeroes: Not Supported 00:07:47.719 Deallocated Guard Field: 0xFFFF 00:07:47.719 Flush: 
Supported 00:07:47.719 Reservation: Not Supported 00:07:47.719 Metadata Transferred as: Separate Metadata Buffer 00:07:47.719 Namespace Sharing Capabilities: Private 00:07:47.719 Size (in LBAs): 1548666 (5GiB) 00:07:47.719 Capacity (in LBAs): 1548666 (5GiB) 00:07:47.719 Utilization (in LBAs): 1548666 (5GiB) 00:07:47.719 Thin Provisioning: Not Supported 00:07:47.719 Per-NS Atomic Units: No 00:07:47.719 Maximum Single Source Range Length: 128 00:07:47.719 Maximum Copy Length: 128 00:07:47.719 Maximum Source Range Count: 128 00:07:47.719 NGUID/EUI64 Never Reused: No 00:07:47.719 Namespace Write Protected: No 00:07:47.719 Number of LBA Formats: 8 00:07:47.719 Current LBA Format: LBA Format #07 00:07:47.719 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:47.719 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:47.719 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:47.719 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:47.719 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:47.719 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:47.719 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:47.719 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:47.719 00:07:47.719 NVM Specific Namespace Data 00:07:47.719 =========================== 00:07:47.720 Logical Block Storage Tag Mask: 0 00:07:47.720 Protection Information Capabilities: 00:07:47.720 16b Guard Protection Information Storage Tag Support: No 00:07:47.720 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:47.720 Storage Tag Check Read Support: No 00:07:47.720 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:47.720 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:47.720 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:47.720 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:47.720 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:47.720 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:47.720 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:47.720 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:47.720 13:42:01 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:47.720 13:42:01 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:07:47.982 ===================================================== 00:07:47.982 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:47.982 ===================================================== 00:07:47.982 Controller Capabilities/Features 00:07:47.982 ================================ 00:07:47.982 Vendor ID: 1b36 00:07:47.982 Subsystem Vendor ID: 1af4 00:07:47.982 Serial Number: 12341 00:07:47.982 Model Number: QEMU NVMe Ctrl 00:07:47.982 Firmware Version: 8.0.0 00:07:47.982 Recommended Arb Burst: 6 00:07:47.982 IEEE OUI Identifier: 00 54 52 00:07:47.982 Multi-path I/O 00:07:47.982 May have multiple subsystem ports: No 00:07:47.982 May have multiple controllers: No 00:07:47.982 Associated with SR-IOV VF: No 00:07:47.982 Max Data Transfer Size: 524288 00:07:47.982 Max Number of Namespaces: 256 00:07:47.982 Max Number of I/O Queues: 64 00:07:47.982 NVMe 
Specification Version (VS): 1.4 00:07:47.982 NVMe Specification Version (Identify): 1.4 00:07:47.982 Maximum Queue Entries: 2048 00:07:47.982 Contiguous Queues Required: Yes 00:07:47.982 Arbitration Mechanisms Supported 00:07:47.982 Weighted Round Robin: Not Supported 00:07:47.982 Vendor Specific: Not Supported 00:07:47.982 Reset Timeout: 7500 ms 00:07:47.982 Doorbell Stride: 4 bytes 00:07:47.982 NVM Subsystem Reset: Not Supported 00:07:47.982 Command Sets Supported 00:07:47.982 NVM Command Set: Supported 00:07:47.982 Boot Partition: Not Supported 00:07:47.982 Memory Page Size Minimum: 4096 bytes 00:07:47.982 Memory Page Size Maximum: 65536 bytes 00:07:47.982 Persistent Memory Region: Not Supported 00:07:47.982 Optional Asynchronous Events Supported 00:07:47.982 Namespace Attribute Notices: Supported 00:07:47.982 Firmware Activation Notices: Not Supported 00:07:47.982 ANA Change Notices: Not Supported 00:07:47.982 PLE Aggregate Log Change Notices: Not Supported 00:07:47.982 LBA Status Info Alert Notices: Not Supported 00:07:47.982 EGE Aggregate Log Change Notices: Not Supported 00:07:47.982 Normal NVM Subsystem Shutdown event: Not Supported 00:07:47.982 Zone Descriptor Change Notices: Not Supported 00:07:47.982 Discovery Log Change Notices: Not Supported 00:07:47.982 Controller Attributes 00:07:47.982 128-bit Host Identifier: Not Supported 00:07:47.982 Non-Operational Permissive Mode: Not Supported 00:07:47.982 NVM Sets: Not Supported 00:07:47.982 Read Recovery Levels: Not Supported 00:07:47.982 Endurance Groups: Not Supported 00:07:47.982 Predictable Latency Mode: Not Supported 00:07:47.982 Traffic Based Keep Alive: Not Supported 00:07:47.982 Namespace Granularity: Not Supported 00:07:47.982 SQ Associations: Not Supported 00:07:47.982 UUID List: Not Supported 00:07:47.982 Multi-Domain Subsystem: Not Supported 00:07:47.982 Fixed Capacity Management: Not Supported 00:07:47.982 Variable Capacity Management: Not Supported 00:07:47.982 Delete Endurance Group: Not Supported 00:07:47.982 Delete NVM Set: Not Supported 00:07:47.982 Extended LBA Formats Supported: Supported 00:07:47.982 Flexible Data Placement Supported: Not Supported 00:07:47.982 00:07:47.982 Controller Memory Buffer Support 00:07:47.982 ================================ 00:07:47.982 Supported: No 00:07:47.982 00:07:47.982 Persistent Memory Region Support 00:07:47.982 ================================ 00:07:47.982 Supported: No 00:07:47.982 00:07:47.982 Admin Command Set Attributes 00:07:47.982 ============================ 00:07:47.982 Security Send/Receive: Not Supported 00:07:47.982 Format NVM: Supported 00:07:47.982 Firmware Activate/Download: Not Supported 00:07:47.982 Namespace Management: Supported 00:07:47.982 Device Self-Test: Not Supported 00:07:47.982 Directives: Supported 00:07:47.982 NVMe-MI: Not Supported 00:07:47.982 Virtualization Management: Not Supported 00:07:47.982 Doorbell Buffer Config: Supported 00:07:47.982 Get LBA Status Capability: Not Supported 00:07:47.982 Command & Feature Lockdown Capability: Not Supported 00:07:47.982 Abort Command Limit: 4 00:07:47.982 Async Event Request Limit: 4 00:07:47.982 Number of Firmware Slots: N/A 00:07:47.982 Firmware Slot 1 Read-Only: N/A 00:07:47.982 Firmware Activation Without Reset: N/A 00:07:47.982 Multiple Update Detection Support: N/A 00:07:47.982 Firmware Update Granularity: No Information Provided 00:07:47.983 Per-Namespace SMART Log: Yes 00:07:47.983 Asymmetric Namespace Access Log Page: Not Supported 00:07:47.983 Subsystem NQN: nqn.2019-08.org.qemu:12341 
00:07:47.983 Command Effects Log Page: Supported 00:07:47.983 Get Log Page Extended Data: Supported 00:07:47.983 Telemetry Log Pages: Not Supported 00:07:47.983 Persistent Event Log Pages: Not Supported 00:07:47.983 Supported Log Pages Log Page: May Support 00:07:47.983 Commands Supported & Effects Log Page: Not Supported 00:07:47.983 Feature Identifiers & Effects Log Page: May Support 00:07:47.983 NVMe-MI Commands & Effects Log Page: May Support 00:07:47.983 Data Area 4 for Telemetry Log: Not Supported 00:07:47.983 Error Log Page Entries Supported: 1 00:07:47.983 Keep Alive: Not Supported 00:07:47.983 00:07:47.983 NVM Command Set Attributes 00:07:47.983 ========================== 00:07:47.983 Submission Queue Entry Size 00:07:47.983 Max: 64 00:07:47.983 Min: 64 00:07:47.983 Completion Queue Entry Size 00:07:47.983 Max: 16 00:07:47.983 Min: 16 00:07:47.983 Number of Namespaces: 256 00:07:47.983 Compare Command: Supported 00:07:47.983 Write Uncorrectable Command: Not Supported 00:07:47.983 Dataset Management Command: Supported 00:07:47.983 Write Zeroes Command: Supported 00:07:47.983 Set Features Save Field: Supported 00:07:47.983 Reservations: Not Supported 00:07:47.983 Timestamp: Supported 00:07:47.983 Copy: Supported 00:07:47.983 Volatile Write Cache: Present 00:07:47.983 Atomic Write Unit (Normal): 1 00:07:47.983 Atomic Write Unit (PFail): 1 00:07:47.983 Atomic Compare & Write Unit: 1 00:07:47.983 Fused Compare & Write: Not Supported 00:07:47.983 Scatter-Gather List 00:07:47.983 SGL Command Set: Supported 00:07:47.983 SGL Keyed: Not Supported 00:07:47.983 SGL Bit Bucket Descriptor: Not Supported 00:07:47.983 SGL Metadata Pointer: Not Supported 00:07:47.983 Oversized SGL: Not Supported 00:07:47.983 SGL Metadata Address: Not Supported 00:07:47.983 SGL Offset: Not Supported 00:07:47.983 Transport SGL Data Block: Not Supported 00:07:47.983 Replay Protected Memory Block: Not Supported 00:07:47.983 00:07:47.983 Firmware Slot Information 00:07:47.983 ========================= 00:07:47.983 Active slot: 1 00:07:47.983 Slot 1 Firmware Revision: 1.0 00:07:47.983 00:07:47.983 00:07:47.983 Commands Supported and Effects 00:07:47.983 ============================== 00:07:47.983 Admin Commands 00:07:47.983 -------------- 00:07:47.983 Delete I/O Submission Queue (00h): Supported 00:07:47.983 Create I/O Submission Queue (01h): Supported 00:07:47.983 Get Log Page (02h): Supported 00:07:47.983 Delete I/O Completion Queue (04h): Supported 00:07:47.983 Create I/O Completion Queue (05h): Supported 00:07:47.983 Identify (06h): Supported 00:07:47.983 Abort (08h): Supported 00:07:47.983 Set Features (09h): Supported 00:07:47.983 Get Features (0Ah): Supported 00:07:47.983 Asynchronous Event Request (0Ch): Supported 00:07:47.983 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:47.983 Directive Send (19h): Supported 00:07:47.983 Directive Receive (1Ah): Supported 00:07:47.983 Virtualization Management (1Ch): Supported 00:07:47.983 Doorbell Buffer Config (7Ch): Supported 00:07:47.983 Format NVM (80h): Supported LBA-Change 00:07:47.983 I/O Commands 00:07:47.983 ------------ 00:07:47.983 Flush (00h): Supported LBA-Change 00:07:47.983 Write (01h): Supported LBA-Change 00:07:47.983 Read (02h): Supported 00:07:47.983 Compare (05h): Supported 00:07:47.983 Write Zeroes (08h): Supported LBA-Change 00:07:47.983 Dataset Management (09h): Supported LBA-Change 00:07:47.983 Unknown (0Ch): Supported 00:07:47.983 Unknown (12h): Supported 00:07:47.983 Copy (19h): Supported LBA-Change 00:07:47.983 Unknown (1Dh): 
Supported LBA-Change 00:07:47.983 00:07:47.983 Error Log 00:07:47.983 ========= 00:07:47.983 00:07:47.983 Arbitration 00:07:47.983 =========== 00:07:47.983 Arbitration Burst: no limit 00:07:47.983 00:07:47.983 Power Management 00:07:47.983 ================ 00:07:47.983 Number of Power States: 1 00:07:47.983 Current Power State: Power State #0 00:07:47.983 Power State #0: 00:07:47.983 Max Power: 25.00 W 00:07:47.983 Non-Operational State: Operational 00:07:47.983 Entry Latency: 16 microseconds 00:07:47.983 Exit Latency: 4 microseconds 00:07:47.983 Relative Read Throughput: 0 00:07:47.983 Relative Read Latency: 0 00:07:47.983 Relative Write Throughput: 0 00:07:47.983 Relative Write Latency: 0 00:07:47.983 Idle Power: Not Reported 00:07:47.983 Active Power: Not Reported 00:07:47.983 Non-Operational Permissive Mode: Not Supported 00:07:47.983 00:07:47.983 Health Information 00:07:47.983 ================== 00:07:47.983 Critical Warnings: 00:07:47.983 Available Spare Space: OK 00:07:47.983 Temperature: OK 00:07:47.983 Device Reliability: OK 00:07:47.983 Read Only: No 00:07:47.983 Volatile Memory Backup: OK 00:07:47.983 Current Temperature: 323 Kelvin (50 Celsius) 00:07:47.983 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:47.983 Available Spare: 0% 00:07:47.983 Available Spare Threshold: 0% 00:07:47.983 Life Percentage Used: 0% 00:07:47.983 Data Units Read: 999 00:07:47.983 Data Units Written: 866 00:07:47.983 Host Read Commands: 48895 00:07:47.983 Host Write Commands: 47693 00:07:47.983 Controller Busy Time: 0 minutes 00:07:47.983 Power Cycles: 0 00:07:47.983 Power On Hours: 0 hours 00:07:47.983 Unsafe Shutdowns: 0 00:07:47.983 Unrecoverable Media Errors: 0 00:07:47.983 Lifetime Error Log Entries: 0 00:07:47.983 Warning Temperature Time: 0 minutes 00:07:47.983 Critical Temperature Time: 0 minutes 00:07:47.983 00:07:47.983 Number of Queues 00:07:47.983 ================ 00:07:47.983 Number of I/O Submission Queues: 64 00:07:47.983 Number of I/O Completion Queues: 64 00:07:47.983 00:07:47.983 ZNS Specific Controller Data 00:07:47.983 ============================ 00:07:47.983 Zone Append Size Limit: 0 00:07:47.983 00:07:47.983 00:07:47.983 Active Namespaces 00:07:47.983 ================= 00:07:47.983 Namespace ID:1 00:07:47.983 Error Recovery Timeout: Unlimited 00:07:47.983 Command Set Identifier: NVM (00h) 00:07:47.983 Deallocate: Supported 00:07:47.983 Deallocated/Unwritten Error: Supported 00:07:47.983 Deallocated Read Value: All 0x00 00:07:47.983 Deallocate in Write Zeroes: Not Supported 00:07:47.983 Deallocated Guard Field: 0xFFFF 00:07:47.983 Flush: Supported 00:07:47.983 Reservation: Not Supported 00:07:47.983 Namespace Sharing Capabilities: Private 00:07:47.983 Size (in LBAs): 1310720 (5GiB) 00:07:47.983 Capacity (in LBAs): 1310720 (5GiB) 00:07:47.983 Utilization (in LBAs): 1310720 (5GiB) 00:07:47.983 Thin Provisioning: Not Supported 00:07:47.983 Per-NS Atomic Units: No 00:07:47.983 Maximum Single Source Range Length: 128 00:07:47.983 Maximum Copy Length: 128 00:07:47.983 Maximum Source Range Count: 128 00:07:47.983 NGUID/EUI64 Never Reused: No 00:07:47.983 Namespace Write Protected: No 00:07:47.983 Number of LBA Formats: 8 00:07:47.983 Current LBA Format: LBA Format #04 00:07:47.983 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:47.983 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:47.983 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:47.983 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:47.983 LBA Format #04: Data Size: 4096 Metadata Size: 0 
00:07:47.984 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:47.984 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:47.984 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:47.984 00:07:47.984 NVM Specific Namespace Data 00:07:47.984 =========================== 00:07:47.984 Logical Block Storage Tag Mask: 0 00:07:47.984 Protection Information Capabilities: 00:07:47.984 16b Guard Protection Information Storage Tag Support: No 00:07:47.984 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:47.984 Storage Tag Check Read Support: No 00:07:47.984 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:47.984 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:47.984 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:47.984 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:47.984 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:47.984 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:47.984 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:47.984 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:47.984 13:42:01 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:47.984 13:42:01 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:07:48.246 ===================================================== 00:07:48.246 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:48.246 ===================================================== 00:07:48.246 Controller Capabilities/Features 00:07:48.246 ================================ 00:07:48.246 Vendor ID: 1b36 00:07:48.246 Subsystem Vendor ID: 1af4 00:07:48.246 Serial Number: 12342 00:07:48.246 Model Number: QEMU NVMe Ctrl 00:07:48.246 Firmware Version: 8.0.0 00:07:48.246 Recommended Arb Burst: 6 00:07:48.246 IEEE OUI Identifier: 00 54 52 00:07:48.246 Multi-path I/O 00:07:48.246 May have multiple subsystem ports: No 00:07:48.246 May have multiple controllers: No 00:07:48.246 Associated with SR-IOV VF: No 00:07:48.246 Max Data Transfer Size: 524288 00:07:48.246 Max Number of Namespaces: 256 00:07:48.246 Max Number of I/O Queues: 64 00:07:48.246 NVMe Specification Version (VS): 1.4 00:07:48.246 NVMe Specification Version (Identify): 1.4 00:07:48.246 Maximum Queue Entries: 2048 00:07:48.246 Contiguous Queues Required: Yes 00:07:48.246 Arbitration Mechanisms Supported 00:07:48.246 Weighted Round Robin: Not Supported 00:07:48.246 Vendor Specific: Not Supported 00:07:48.246 Reset Timeout: 7500 ms 00:07:48.246 Doorbell Stride: 4 bytes 00:07:48.246 NVM Subsystem Reset: Not Supported 00:07:48.246 Command Sets Supported 00:07:48.246 NVM Command Set: Supported 00:07:48.246 Boot Partition: Not Supported 00:07:48.246 Memory Page Size Minimum: 4096 bytes 00:07:48.246 Memory Page Size Maximum: 65536 bytes 00:07:48.246 Persistent Memory Region: Not Supported 00:07:48.246 Optional Asynchronous Events Supported 00:07:48.246 Namespace Attribute Notices: Supported 00:07:48.246 Firmware Activation Notices: Not Supported 00:07:48.246 ANA Change Notices: Not Supported 00:07:48.246 PLE Aggregate Log Change Notices: Not Supported 00:07:48.246 LBA Status Info Alert Notices: 
Not Supported 00:07:48.246 EGE Aggregate Log Change Notices: Not Supported 00:07:48.246 Normal NVM Subsystem Shutdown event: Not Supported 00:07:48.246 Zone Descriptor Change Notices: Not Supported 00:07:48.246 Discovery Log Change Notices: Not Supported 00:07:48.246 Controller Attributes 00:07:48.246 128-bit Host Identifier: Not Supported 00:07:48.246 Non-Operational Permissive Mode: Not Supported 00:07:48.246 NVM Sets: Not Supported 00:07:48.246 Read Recovery Levels: Not Supported 00:07:48.246 Endurance Groups: Not Supported 00:07:48.246 Predictable Latency Mode: Not Supported 00:07:48.246 Traffic Based Keep Alive: Not Supported 00:07:48.246 Namespace Granularity: Not Supported 00:07:48.246 SQ Associations: Not Supported 00:07:48.246 UUID List: Not Supported 00:07:48.246 Multi-Domain Subsystem: Not Supported 00:07:48.246 Fixed Capacity Management: Not Supported 00:07:48.246 Variable Capacity Management: Not Supported 00:07:48.246 Delete Endurance Group: Not Supported 00:07:48.246 Delete NVM Set: Not Supported 00:07:48.246 Extended LBA Formats Supported: Supported 00:07:48.246 Flexible Data Placement Supported: Not Supported 00:07:48.246 00:07:48.246 Controller Memory Buffer Support 00:07:48.246 ================================ 00:07:48.246 Supported: No 00:07:48.246 00:07:48.246 Persistent Memory Region Support 00:07:48.246 ================================ 00:07:48.246 Supported: No 00:07:48.246 00:07:48.246 Admin Command Set Attributes 00:07:48.246 ============================ 00:07:48.246 Security Send/Receive: Not Supported 00:07:48.246 Format NVM: Supported 00:07:48.246 Firmware Activate/Download: Not Supported 00:07:48.246 Namespace Management: Supported 00:07:48.246 Device Self-Test: Not Supported 00:07:48.246 Directives: Supported 00:07:48.246 NVMe-MI: Not Supported 00:07:48.246 Virtualization Management: Not Supported 00:07:48.246 Doorbell Buffer Config: Supported 00:07:48.246 Get LBA Status Capability: Not Supported 00:07:48.246 Command & Feature Lockdown Capability: Not Supported 00:07:48.246 Abort Command Limit: 4 00:07:48.246 Async Event Request Limit: 4 00:07:48.246 Number of Firmware Slots: N/A 00:07:48.246 Firmware Slot 1 Read-Only: N/A 00:07:48.246 Firmware Activation Without Reset: N/A 00:07:48.246 Multiple Update Detection Support: N/A 00:07:48.247 Firmware Update Granularity: No Information Provided 00:07:48.247 Per-Namespace SMART Log: Yes 00:07:48.247 Asymmetric Namespace Access Log Page: Not Supported 00:07:48.247 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:07:48.247 Command Effects Log Page: Supported 00:07:48.247 Get Log Page Extended Data: Supported 00:07:48.247 Telemetry Log Pages: Not Supported 00:07:48.247 Persistent Event Log Pages: Not Supported 00:07:48.247 Supported Log Pages Log Page: May Support 00:07:48.247 Commands Supported & Effects Log Page: Not Supported 00:07:48.247 Feature Identifiers & Effects Log Page: May Support 00:07:48.247 NVMe-MI Commands & Effects Log Page: May Support 00:07:48.247 Data Area 4 for Telemetry Log: Not Supported 00:07:48.247 Error Log Page Entries Supported: 1 00:07:48.247 Keep Alive: Not Supported 00:07:48.247 00:07:48.247 NVM Command Set Attributes 00:07:48.247 ========================== 00:07:48.247 Submission Queue Entry Size 00:07:48.247 Max: 64 00:07:48.247 Min: 64 00:07:48.247 Completion Queue Entry Size 00:07:48.247 Max: 16 00:07:48.247 Min: 16 00:07:48.247 Number of Namespaces: 256 00:07:48.247 Compare Command: Supported 00:07:48.247 Write Uncorrectable Command: Not Supported 00:07:48.247 Dataset Management Command: 
Supported 00:07:48.247 Write Zeroes Command: Supported 00:07:48.247 Set Features Save Field: Supported 00:07:48.247 Reservations: Not Supported 00:07:48.247 Timestamp: Supported 00:07:48.247 Copy: Supported 00:07:48.247 Volatile Write Cache: Present 00:07:48.247 Atomic Write Unit (Normal): 1 00:07:48.247 Atomic Write Unit (PFail): 1 00:07:48.247 Atomic Compare & Write Unit: 1 00:07:48.247 Fused Compare & Write: Not Supported 00:07:48.247 Scatter-Gather List 00:07:48.247 SGL Command Set: Supported 00:07:48.247 SGL Keyed: Not Supported 00:07:48.247 SGL Bit Bucket Descriptor: Not Supported 00:07:48.247 SGL Metadata Pointer: Not Supported 00:07:48.247 Oversized SGL: Not Supported 00:07:48.247 SGL Metadata Address: Not Supported 00:07:48.247 SGL Offset: Not Supported 00:07:48.247 Transport SGL Data Block: Not Supported 00:07:48.247 Replay Protected Memory Block: Not Supported 00:07:48.247 00:07:48.247 Firmware Slot Information 00:07:48.247 ========================= 00:07:48.247 Active slot: 1 00:07:48.247 Slot 1 Firmware Revision: 1.0 00:07:48.247 00:07:48.247 00:07:48.247 Commands Supported and Effects 00:07:48.247 ============================== 00:07:48.247 Admin Commands 00:07:48.247 -------------- 00:07:48.247 Delete I/O Submission Queue (00h): Supported 00:07:48.247 Create I/O Submission Queue (01h): Supported 00:07:48.247 Get Log Page (02h): Supported 00:07:48.247 Delete I/O Completion Queue (04h): Supported 00:07:48.247 Create I/O Completion Queue (05h): Supported 00:07:48.247 Identify (06h): Supported 00:07:48.247 Abort (08h): Supported 00:07:48.247 Set Features (09h): Supported 00:07:48.247 Get Features (0Ah): Supported 00:07:48.247 Asynchronous Event Request (0Ch): Supported 00:07:48.247 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:48.247 Directive Send (19h): Supported 00:07:48.247 Directive Receive (1Ah): Supported 00:07:48.247 Virtualization Management (1Ch): Supported 00:07:48.247 Doorbell Buffer Config (7Ch): Supported 00:07:48.247 Format NVM (80h): Supported LBA-Change 00:07:48.247 I/O Commands 00:07:48.247 ------------ 00:07:48.247 Flush (00h): Supported LBA-Change 00:07:48.247 Write (01h): Supported LBA-Change 00:07:48.247 Read (02h): Supported 00:07:48.247 Compare (05h): Supported 00:07:48.247 Write Zeroes (08h): Supported LBA-Change 00:07:48.247 Dataset Management (09h): Supported LBA-Change 00:07:48.247 Unknown (0Ch): Supported 00:07:48.247 Unknown (12h): Supported 00:07:48.247 Copy (19h): Supported LBA-Change 00:07:48.247 Unknown (1Dh): Supported LBA-Change 00:07:48.247 00:07:48.247 Error Log 00:07:48.247 ========= 00:07:48.247 00:07:48.247 Arbitration 00:07:48.247 =========== 00:07:48.247 Arbitration Burst: no limit 00:07:48.247 00:07:48.247 Power Management 00:07:48.247 ================ 00:07:48.247 Number of Power States: 1 00:07:48.247 Current Power State: Power State #0 00:07:48.247 Power State #0: 00:07:48.247 Max Power: 25.00 W 00:07:48.247 Non-Operational State: Operational 00:07:48.247 Entry Latency: 16 microseconds 00:07:48.247 Exit Latency: 4 microseconds 00:07:48.247 Relative Read Throughput: 0 00:07:48.247 Relative Read Latency: 0 00:07:48.247 Relative Write Throughput: 0 00:07:48.247 Relative Write Latency: 0 00:07:48.247 Idle Power: Not Reported 00:07:48.247 Active Power: Not Reported 00:07:48.247 Non-Operational Permissive Mode: Not Supported 00:07:48.247 00:07:48.247 Health Information 00:07:48.247 ================== 00:07:48.247 Critical Warnings: 00:07:48.247 Available Spare Space: OK 00:07:48.247 Temperature: OK 00:07:48.247 Device 
Reliability: OK 00:07:48.247 Read Only: No 00:07:48.247 Volatile Memory Backup: OK 00:07:48.247 Current Temperature: 323 Kelvin (50 Celsius) 00:07:48.247 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:48.247 Available Spare: 0% 00:07:48.247 Available Spare Threshold: 0% 00:07:48.247 Life Percentage Used: 0% 00:07:48.247 Data Units Read: 2182 00:07:48.247 Data Units Written: 1969 00:07:48.247 Host Read Commands: 104289 00:07:48.247 Host Write Commands: 102558 00:07:48.247 Controller Busy Time: 0 minutes 00:07:48.247 Power Cycles: 0 00:07:48.247 Power On Hours: 0 hours 00:07:48.247 Unsafe Shutdowns: 0 00:07:48.247 Unrecoverable Media Errors: 0 00:07:48.247 Lifetime Error Log Entries: 0 00:07:48.247 Warning Temperature Time: 0 minutes 00:07:48.247 Critical Temperature Time: 0 minutes 00:07:48.247 00:07:48.247 Number of Queues 00:07:48.247 ================ 00:07:48.247 Number of I/O Submission Queues: 64 00:07:48.247 Number of I/O Completion Queues: 64 00:07:48.247 00:07:48.247 ZNS Specific Controller Data 00:07:48.247 ============================ 00:07:48.247 Zone Append Size Limit: 0 00:07:48.247 00:07:48.247 00:07:48.247 Active Namespaces 00:07:48.247 ================= 00:07:48.247 Namespace ID:1 00:07:48.247 Error Recovery Timeout: Unlimited 00:07:48.247 Command Set Identifier: NVM (00h) 00:07:48.247 Deallocate: Supported 00:07:48.247 Deallocated/Unwritten Error: Supported 00:07:48.247 Deallocated Read Value: All 0x00 00:07:48.247 Deallocate in Write Zeroes: Not Supported 00:07:48.247 Deallocated Guard Field: 0xFFFF 00:07:48.247 Flush: Supported 00:07:48.247 Reservation: Not Supported 00:07:48.247 Namespace Sharing Capabilities: Private 00:07:48.247 Size (in LBAs): 1048576 (4GiB) 00:07:48.247 Capacity (in LBAs): 1048576 (4GiB) 00:07:48.247 Utilization (in LBAs): 1048576 (4GiB) 00:07:48.247 Thin Provisioning: Not Supported 00:07:48.247 Per-NS Atomic Units: No 00:07:48.247 Maximum Single Source Range Length: 128 00:07:48.247 Maximum Copy Length: 128 00:07:48.247 Maximum Source Range Count: 128 00:07:48.247 NGUID/EUI64 Never Reused: No 00:07:48.247 Namespace Write Protected: No 00:07:48.247 Number of LBA Formats: 8 00:07:48.247 Current LBA Format: LBA Format #04 00:07:48.247 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:48.247 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:48.247 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:48.247 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:48.247 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:48.247 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:48.247 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:48.247 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:48.247 00:07:48.247 NVM Specific Namespace Data 00:07:48.247 =========================== 00:07:48.247 Logical Block Storage Tag Mask: 0 00:07:48.247 Protection Information Capabilities: 00:07:48.247 16b Guard Protection Information Storage Tag Support: No 00:07:48.247 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:48.247 Storage Tag Check Read Support: No 00:07:48.247 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.247 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.247 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.247 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.247 Extended LBA Format #04: 
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.248 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.248 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.248 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.248 Namespace ID:2 00:07:48.248 Error Recovery Timeout: Unlimited 00:07:48.248 Command Set Identifier: NVM (00h) 00:07:48.248 Deallocate: Supported 00:07:48.248 Deallocated/Unwritten Error: Supported 00:07:48.248 Deallocated Read Value: All 0x00 00:07:48.248 Deallocate in Write Zeroes: Not Supported 00:07:48.248 Deallocated Guard Field: 0xFFFF 00:07:48.248 Flush: Supported 00:07:48.248 Reservation: Not Supported 00:07:48.248 Namespace Sharing Capabilities: Private 00:07:48.248 Size (in LBAs): 1048576 (4GiB) 00:07:48.248 Capacity (in LBAs): 1048576 (4GiB) 00:07:48.248 Utilization (in LBAs): 1048576 (4GiB) 00:07:48.248 Thin Provisioning: Not Supported 00:07:48.248 Per-NS Atomic Units: No 00:07:48.248 Maximum Single Source Range Length: 128 00:07:48.248 Maximum Copy Length: 128 00:07:48.248 Maximum Source Range Count: 128 00:07:48.248 NGUID/EUI64 Never Reused: No 00:07:48.248 Namespace Write Protected: No 00:07:48.248 Number of LBA Formats: 8 00:07:48.248 Current LBA Format: LBA Format #04 00:07:48.248 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:48.248 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:48.248 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:48.248 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:48.248 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:48.248 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:48.248 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:48.248 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:48.248 00:07:48.248 NVM Specific Namespace Data 00:07:48.248 =========================== 00:07:48.248 Logical Block Storage Tag Mask: 0 00:07:48.248 Protection Information Capabilities: 00:07:48.248 16b Guard Protection Information Storage Tag Support: No 00:07:48.248 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:48.248 Storage Tag Check Read Support: No 00:07:48.248 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.248 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.248 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.248 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.248 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.248 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.248 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.248 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.248 Namespace ID:3 00:07:48.248 Error Recovery Timeout: Unlimited 00:07:48.248 Command Set Identifier: NVM (00h) 00:07:48.248 Deallocate: Supported 00:07:48.248 Deallocated/Unwritten Error: Supported 00:07:48.248 Deallocated Read Value: All 0x00 00:07:48.248 Deallocate in Write Zeroes: Not Supported 00:07:48.248 Deallocated Guard Field: 0xFFFF 00:07:48.248 Flush: Supported 00:07:48.248 Reservation: Not Supported 00:07:48.248 
Namespace Sharing Capabilities: Private 00:07:48.248 Size (in LBAs): 1048576 (4GiB) 00:07:48.248 Capacity (in LBAs): 1048576 (4GiB) 00:07:48.248 Utilization (in LBAs): 1048576 (4GiB) 00:07:48.248 Thin Provisioning: Not Supported 00:07:48.248 Per-NS Atomic Units: No 00:07:48.248 Maximum Single Source Range Length: 128 00:07:48.248 Maximum Copy Length: 128 00:07:48.248 Maximum Source Range Count: 128 00:07:48.248 NGUID/EUI64 Never Reused: No 00:07:48.248 Namespace Write Protected: No 00:07:48.248 Number of LBA Formats: 8 00:07:48.248 Current LBA Format: LBA Format #04 00:07:48.248 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:48.248 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:48.248 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:48.248 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:48.248 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:48.248 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:48.248 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:48.248 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:48.248 00:07:48.248 NVM Specific Namespace Data 00:07:48.248 =========================== 00:07:48.248 Logical Block Storage Tag Mask: 0 00:07:48.248 Protection Information Capabilities: 00:07:48.248 16b Guard Protection Information Storage Tag Support: No 00:07:48.248 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:48.248 Storage Tag Check Read Support: No 00:07:48.248 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.248 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.248 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.248 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.248 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.248 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.248 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.248 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.248 13:42:01 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:48.248 13:42:01 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:07:48.510 ===================================================== 00:07:48.510 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:48.510 ===================================================== 00:07:48.510 Controller Capabilities/Features 00:07:48.510 ================================ 00:07:48.510 Vendor ID: 1b36 00:07:48.510 Subsystem Vendor ID: 1af4 00:07:48.510 Serial Number: 12343 00:07:48.510 Model Number: QEMU NVMe Ctrl 00:07:48.510 Firmware Version: 8.0.0 00:07:48.510 Recommended Arb Burst: 6 00:07:48.510 IEEE OUI Identifier: 00 54 52 00:07:48.510 Multi-path I/O 00:07:48.510 May have multiple subsystem ports: No 00:07:48.510 May have multiple controllers: Yes 00:07:48.510 Associated with SR-IOV VF: No 00:07:48.510 Max Data Transfer Size: 524288 00:07:48.510 Max Number of Namespaces: 256 00:07:48.510 Max Number of I/O Queues: 64 00:07:48.510 NVMe Specification Version (VS): 1.4 00:07:48.510 NVMe Specification Version (Identify): 1.4 00:07:48.510 Maximum Queue Entries: 2048 
00:07:48.510 Contiguous Queues Required: Yes 00:07:48.510 Arbitration Mechanisms Supported 00:07:48.510 Weighted Round Robin: Not Supported 00:07:48.510 Vendor Specific: Not Supported 00:07:48.510 Reset Timeout: 7500 ms 00:07:48.510 Doorbell Stride: 4 bytes 00:07:48.510 NVM Subsystem Reset: Not Supported 00:07:48.510 Command Sets Supported 00:07:48.510 NVM Command Set: Supported 00:07:48.510 Boot Partition: Not Supported 00:07:48.510 Memory Page Size Minimum: 4096 bytes 00:07:48.510 Memory Page Size Maximum: 65536 bytes 00:07:48.510 Persistent Memory Region: Not Supported 00:07:48.510 Optional Asynchronous Events Supported 00:07:48.510 Namespace Attribute Notices: Supported 00:07:48.510 Firmware Activation Notices: Not Supported 00:07:48.510 ANA Change Notices: Not Supported 00:07:48.510 PLE Aggregate Log Change Notices: Not Supported 00:07:48.510 LBA Status Info Alert Notices: Not Supported 00:07:48.510 EGE Aggregate Log Change Notices: Not Supported 00:07:48.510 Normal NVM Subsystem Shutdown event: Not Supported 00:07:48.510 Zone Descriptor Change Notices: Not Supported 00:07:48.510 Discovery Log Change Notices: Not Supported 00:07:48.510 Controller Attributes 00:07:48.510 128-bit Host Identifier: Not Supported 00:07:48.510 Non-Operational Permissive Mode: Not Supported 00:07:48.510 NVM Sets: Not Supported 00:07:48.510 Read Recovery Levels: Not Supported 00:07:48.510 Endurance Groups: Supported 00:07:48.510 Predictable Latency Mode: Not Supported 00:07:48.510 Traffic Based Keep Alive: Not Supported 00:07:48.510 Namespace Granularity: Not Supported 00:07:48.510 SQ Associations: Not Supported 00:07:48.510 UUID List: Not Supported 00:07:48.510 Multi-Domain Subsystem: Not Supported 00:07:48.510 Fixed Capacity Management: Not Supported 00:07:48.510 Variable Capacity Management: Not Supported 00:07:48.510 Delete Endurance Group: Not Supported 00:07:48.510 Delete NVM Set: Not Supported 00:07:48.510 Extended LBA Formats Supported: Supported 00:07:48.510 Flexible Data Placement Supported: Supported 00:07:48.510 00:07:48.510 Controller Memory Buffer Support 00:07:48.510 ================================ 00:07:48.510 Supported: No 00:07:48.510 00:07:48.510 Persistent Memory Region Support 00:07:48.510 ================================ 00:07:48.510 Supported: No 00:07:48.510 00:07:48.510 Admin Command Set Attributes 00:07:48.510 ============================ 00:07:48.510 Security Send/Receive: Not Supported 00:07:48.510 Format NVM: Supported 00:07:48.510 Firmware Activate/Download: Not Supported 00:07:48.510 Namespace Management: Supported 00:07:48.510 Device Self-Test: Not Supported 00:07:48.510 Directives: Supported 00:07:48.510 NVMe-MI: Not Supported 00:07:48.510 Virtualization Management: Not Supported 00:07:48.510 Doorbell Buffer Config: Supported 00:07:48.510 Get LBA Status Capability: Not Supported 00:07:48.510 Command & Feature Lockdown Capability: Not Supported 00:07:48.510 Abort Command Limit: 4 00:07:48.510 Async Event Request Limit: 4 00:07:48.510 Number of Firmware Slots: N/A 00:07:48.510 Firmware Slot 1 Read-Only: N/A 00:07:48.510 Firmware Activation Without Reset: N/A 00:07:48.510 Multiple Update Detection Support: N/A 00:07:48.510 Firmware Update Granularity: No Information Provided 00:07:48.510 Per-Namespace SMART Log: Yes 00:07:48.510 Asymmetric Namespace Access Log Page: Not Supported 00:07:48.510 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:07:48.510 Command Effects Log Page: Supported 00:07:48.510 Get Log Page Extended Data: Supported 00:07:48.510 Telemetry Log Pages: Not 
Supported 00:07:48.510 Persistent Event Log Pages: Not Supported 00:07:48.510 Supported Log Pages Log Page: May Support 00:07:48.510 Commands Supported & Effects Log Page: Not Supported 00:07:48.510 Feature Identifiers & Effects Log Page: May Support 00:07:48.510 NVMe-MI Commands & Effects Log Page: May Support 00:07:48.510 Data Area 4 for Telemetry Log: Not Supported 00:07:48.510 Error Log Page Entries Supported: 1 00:07:48.510 Keep Alive: Not Supported 00:07:48.510 00:07:48.510 NVM Command Set Attributes 00:07:48.510 ========================== 00:07:48.510 Submission Queue Entry Size 00:07:48.510 Max: 64 00:07:48.510 Min: 64 00:07:48.510 Completion Queue Entry Size 00:07:48.510 Max: 16 00:07:48.510 Min: 16 00:07:48.510 Number of Namespaces: 256 00:07:48.510 Compare Command: Supported 00:07:48.510 Write Uncorrectable Command: Not Supported 00:07:48.510 Dataset Management Command: Supported 00:07:48.511 Write Zeroes Command: Supported 00:07:48.511 Set Features Save Field: Supported 00:07:48.511 Reservations: Not Supported 00:07:48.511 Timestamp: Supported 00:07:48.511 Copy: Supported 00:07:48.511 Volatile Write Cache: Present 00:07:48.511 Atomic Write Unit (Normal): 1 00:07:48.511 Atomic Write Unit (PFail): 1 00:07:48.511 Atomic Compare & Write Unit: 1 00:07:48.511 Fused Compare & Write: Not Supported 00:07:48.511 Scatter-Gather List 00:07:48.511 SGL Command Set: Supported 00:07:48.511 SGL Keyed: Not Supported 00:07:48.511 SGL Bit Bucket Descriptor: Not Supported 00:07:48.511 SGL Metadata Pointer: Not Supported 00:07:48.511 Oversized SGL: Not Supported 00:07:48.511 SGL Metadata Address: Not Supported 00:07:48.511 SGL Offset: Not Supported 00:07:48.511 Transport SGL Data Block: Not Supported 00:07:48.511 Replay Protected Memory Block: Not Supported 00:07:48.511 00:07:48.511 Firmware Slot Information 00:07:48.511 ========================= 00:07:48.511 Active slot: 1 00:07:48.511 Slot 1 Firmware Revision: 1.0 00:07:48.511 00:07:48.511 00:07:48.511 Commands Supported and Effects 00:07:48.511 ============================== 00:07:48.511 Admin Commands 00:07:48.511 -------------- 00:07:48.511 Delete I/O Submission Queue (00h): Supported 00:07:48.511 Create I/O Submission Queue (01h): Supported 00:07:48.511 Get Log Page (02h): Supported 00:07:48.511 Delete I/O Completion Queue (04h): Supported 00:07:48.511 Create I/O Completion Queue (05h): Supported 00:07:48.511 Identify (06h): Supported 00:07:48.511 Abort (08h): Supported 00:07:48.511 Set Features (09h): Supported 00:07:48.511 Get Features (0Ah): Supported 00:07:48.511 Asynchronous Event Request (0Ch): Supported 00:07:48.511 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:48.511 Directive Send (19h): Supported 00:07:48.511 Directive Receive (1Ah): Supported 00:07:48.511 Virtualization Management (1Ch): Supported 00:07:48.511 Doorbell Buffer Config (7Ch): Supported 00:07:48.511 Format NVM (80h): Supported LBA-Change 00:07:48.511 I/O Commands 00:07:48.511 ------------ 00:07:48.511 Flush (00h): Supported LBA-Change 00:07:48.511 Write (01h): Supported LBA-Change 00:07:48.511 Read (02h): Supported 00:07:48.511 Compare (05h): Supported 00:07:48.511 Write Zeroes (08h): Supported LBA-Change 00:07:48.511 Dataset Management (09h): Supported LBA-Change 00:07:48.511 Unknown (0Ch): Supported 00:07:48.511 Unknown (12h): Supported 00:07:48.511 Copy (19h): Supported LBA-Change 00:07:48.511 Unknown (1Dh): Supported LBA-Change 00:07:48.511 00:07:48.511 Error Log 00:07:48.511 ========= 00:07:48.511 00:07:48.511 Arbitration 00:07:48.511 ===========
00:07:48.511 Arbitration Burst: no limit 00:07:48.511 00:07:48.511 Power Management 00:07:48.511 ================ 00:07:48.511 Number of Power States: 1 00:07:48.511 Current Power State: Power State #0 00:07:48.511 Power State #0: 00:07:48.511 Max Power: 25.00 W 00:07:48.511 Non-Operational State: Operational 00:07:48.511 Entry Latency: 16 microseconds 00:07:48.511 Exit Latency: 4 microseconds 00:07:48.511 Relative Read Throughput: 0 00:07:48.511 Relative Read Latency: 0 00:07:48.511 Relative Write Throughput: 0 00:07:48.511 Relative Write Latency: 0 00:07:48.511 Idle Power: Not Reported 00:07:48.511 Active Power: Not Reported 00:07:48.511 Non-Operational Permissive Mode: Not Supported 00:07:48.511 00:07:48.511 Health Information 00:07:48.511 ================== 00:07:48.511 Critical Warnings: 00:07:48.511 Available Spare Space: OK 00:07:48.511 Temperature: OK 00:07:48.511 Device Reliability: OK 00:07:48.511 Read Only: No 00:07:48.511 Volatile Memory Backup: OK 00:07:48.511 Current Temperature: 323 Kelvin (50 Celsius) 00:07:48.511 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:48.511 Available Spare: 0% 00:07:48.511 Available Spare Threshold: 0% 00:07:48.511 Life Percentage Used: 0% 00:07:48.511 Data Units Read: 918 00:07:48.511 Data Units Written: 847 00:07:48.511 Host Read Commands: 36286 00:07:48.511 Host Write Commands: 35709 00:07:48.511 Controller Busy Time: 0 minutes 00:07:48.511 Power Cycles: 0 00:07:48.511 Power On Hours: 0 hours 00:07:48.511 Unsafe Shutdowns: 0 00:07:48.511 Unrecoverable Media Errors: 0 00:07:48.511 Lifetime Error Log Entries: 0 00:07:48.511 Warning Temperature Time: 0 minutes 00:07:48.511 Critical Temperature Time: 0 minutes 00:07:48.511 00:07:48.511 Number of Queues 00:07:48.511 ================ 00:07:48.511 Number of I/O Submission Queues: 64 00:07:48.511 Number of I/O Completion Queues: 64 00:07:48.511 00:07:48.511 ZNS Specific Controller Data 00:07:48.511 ============================ 00:07:48.511 Zone Append Size Limit: 0 00:07:48.511 00:07:48.511 00:07:48.511 Active Namespaces 00:07:48.511 ================= 00:07:48.511 Namespace ID:1 00:07:48.511 Error Recovery Timeout: Unlimited 00:07:48.511 Command Set Identifier: NVM (00h) 00:07:48.511 Deallocate: Supported 00:07:48.511 Deallocated/Unwritten Error: Supported 00:07:48.511 Deallocated Read Value: All 0x00 00:07:48.511 Deallocate in Write Zeroes: Not Supported 00:07:48.511 Deallocated Guard Field: 0xFFFF 00:07:48.511 Flush: Supported 00:07:48.511 Reservation: Not Supported 00:07:48.511 Namespace Sharing Capabilities: Multiple Controllers 00:07:48.511 Size (in LBAs): 262144 (1GiB) 00:07:48.511 Capacity (in LBAs): 262144 (1GiB) 00:07:48.511 Utilization (in LBAs): 262144 (1GiB) 00:07:48.511 Thin Provisioning: Not Supported 00:07:48.511 Per-NS Atomic Units: No 00:07:48.511 Maximum Single Source Range Length: 128 00:07:48.511 Maximum Copy Length: 128 00:07:48.511 Maximum Source Range Count: 128 00:07:48.511 NGUID/EUI64 Never Reused: No 00:07:48.511 Namespace Write Protected: No 00:07:48.511 Endurance group ID: 1 00:07:48.511 Number of LBA Formats: 8 00:07:48.511 Current LBA Format: LBA Format #04 00:07:48.511 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:48.511 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:48.511 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:48.511 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:48.511 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:48.511 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:48.511 LBA Format #06: Data Size: 4096 
Metadata Size: 16 00:07:48.511 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:48.511 00:07:48.511 Get Feature FDP: 00:07:48.511 ================ 00:07:48.511 Enabled: Yes 00:07:48.511 FDP configuration index: 0 00:07:48.511 00:07:48.511 FDP configurations log page 00:07:48.511 =========================== 00:07:48.511 Number of FDP configurations: 1 00:07:48.511 Version: 0 00:07:48.511 Size: 112 00:07:48.511 FDP Configuration Descriptor: 0 00:07:48.511 Descriptor Size: 96 00:07:48.511 Reclaim Group Identifier format: 2 00:07:48.511 FDP Volatile Write Cache: Not Present 00:07:48.511 FDP Configuration: Valid 00:07:48.511 Vendor Specific Size: 0 00:07:48.511 Number of Reclaim Groups: 2 00:07:48.511 Number of Reclaim Unit Handles: 8 00:07:48.511 Max Placement Identifiers: 128 00:07:48.511 Number of Namespaces Supported: 256 00:07:48.511 Reclaim Unit Nominal Size: 6000000 bytes 00:07:48.511 Estimated Reclaim Unit Time Limit: Not Reported 00:07:48.511 RUH Desc #000: RUH Type: Initially Isolated 00:07:48.511 RUH Desc #001: RUH Type: Initially Isolated 00:07:48.511 RUH Desc #002: RUH Type: Initially Isolated 00:07:48.511 RUH Desc #003: RUH Type: Initially Isolated 00:07:48.511 RUH Desc #004: RUH Type: Initially Isolated 00:07:48.511 RUH Desc #005: RUH Type: Initially Isolated 00:07:48.511 RUH Desc #006: RUH Type: Initially Isolated 00:07:48.511 RUH Desc #007: RUH Type: Initially Isolated 00:07:48.511 00:07:48.511 FDP reclaim unit handle usage log page 00:07:48.511 ====================================== 00:07:48.511 Number of Reclaim Unit Handles: 8 00:07:48.511 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:07:48.511 RUH Usage Desc #001: RUH Attributes: Unused 00:07:48.511 RUH Usage Desc #002: RUH Attributes: Unused 00:07:48.511 RUH Usage Desc #003: RUH Attributes: Unused 00:07:48.511 RUH Usage Desc #004: RUH Attributes: Unused 00:07:48.511 RUH Usage Desc #005: RUH Attributes: Unused 00:07:48.511 RUH Usage Desc #006: RUH Attributes: Unused 00:07:48.511 RUH Usage Desc #007: RUH Attributes: Unused 00:07:48.511 00:07:48.511 FDP statistics log page 00:07:48.511 ======================= 00:07:48.511 Host bytes with metadata written: 521867264 00:07:48.511 Media bytes with metadata written: 521961472 00:07:48.511 Media bytes erased: 0 00:07:48.511 00:07:48.511 FDP events log page 00:07:48.512 =================== 00:07:48.512 Number of FDP events: 0 00:07:48.512 00:07:48.512 NVM Specific Namespace Data 00:07:48.512 =========================== 00:07:48.512 Logical Block Storage Tag Mask: 0 00:07:48.512 Protection Information Capabilities: 00:07:48.512 16b Guard Protection Information Storage Tag Support: No 00:07:48.512 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:48.512 Storage Tag Check Read Support: No 00:07:48.512 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.512 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.512 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.512 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.512 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.512 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.512 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.512 Extended LBA
Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.512 00:07:48.512 real 0m1.128s 00:07:48.512 user 0m0.391s 00:07:48.512 sys 0m0.510s 00:07:48.512 13:42:02 nvme.nvme_identify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:48.512 13:42:02 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:07:48.512 ************************************ 00:07:48.512 END TEST nvme_identify 00:07:48.512 ************************************ 00:07:48.512 13:42:02 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:07:48.512 13:42:02 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:48.512 13:42:02 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:48.512 13:42:02 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:48.512 ************************************ 00:07:48.512 START TEST nvme_perf 00:07:48.512 ************************************ 00:07:48.512 13:42:02 nvme.nvme_perf -- common/autotest_common.sh@1125 -- # nvme_perf 00:07:48.512 13:42:02 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:07:49.897 Initializing NVMe Controllers 00:07:49.897 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:49.897 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:49.897 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:49.897 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:49.897 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:07:49.897 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:07:49.897 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:07:49.897 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:07:49.897 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:07:49.897 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:07:49.897 Initialization complete. Launching workers. 
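The nvme_perf run launched above uses -q 128 (queue depth per namespace), -w read (sequential read workload), -o 12288 (12 KiB I/O size), and -t 1 (one-second run); the -LL option enables the latency tracking that produces the per-device summaries and histograms printed below, and -i 0 -N are carried over verbatim from the logged command. A minimal sketch of an equivalent standalone invocation against a single controller follows; the SPDK path and the PCIe address are assumptions copied from this log, not requirements of the tool.

    # Sketch: replay the same perf workload against one of the controllers
    # attached above. SPDK_DIR and the traddr are taken from this log and
    # should be adjusted for the local environment.
    SPDK_DIR=/home/vagrant/spdk_repo/spdk
    sudo "$SPDK_DIR/build/bin/spdk_nvme_perf" \
        -q 128 -w read -o 12288 -t 1 -LL -i 0 -N \
        -r 'trtype:PCIe traddr:0000:00:11.0'

Omitting the -r filter, as the test does, makes the tool attach every NVMe controller it can claim, which is why six namespaces report results in the table below.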
00:07:49.897 ======================================================== 00:07:49.897 Latency(us) 00:07:49.897 Device Information : IOPS MiB/s Average min max 00:07:49.897 PCIE (0000:00:11.0) NSID 1 from core 0: 16088.25 188.53 7967.34 5761.16 34303.23 00:07:49.897 PCIE (0000:00:13.0) NSID 1 from core 0: 16088.25 188.53 7956.01 5811.96 32898.89 00:07:49.897 PCIE (0000:00:10.0) NSID 1 from core 0: 16088.25 188.53 7943.29 5715.42 31466.05 00:07:49.897 PCIE (0000:00:12.0) NSID 1 from core 0: 16088.25 188.53 7931.83 5811.52 29756.17 00:07:49.897 PCIE (0000:00:12.0) NSID 2 from core 0: 16088.25 188.53 7919.78 5857.41 28822.98 00:07:49.897 PCIE (0000:00:12.0) NSID 3 from core 0: 16152.09 189.28 7876.47 5835.27 22596.19 00:07:49.897 ======================================================== 00:07:49.897 Total : 96593.32 1131.95 7932.42 5715.42 34303.23 00:07:49.897 00:07:49.897 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:07:49.897 ================================================================================= 00:07:49.897 1.00000% : 6024.271us 00:07:49.897 10.00000% : 6326.745us 00:07:49.897 25.00000% : 6704.837us 00:07:49.897 50.00000% : 8065.969us 00:07:49.897 75.00000% : 8620.505us 00:07:49.897 90.00000% : 9074.215us 00:07:49.897 95.00000% : 9729.575us 00:07:49.897 98.00000% : 11292.357us 00:07:49.897 99.00000% : 12199.778us 00:07:49.897 99.50000% : 28432.542us 00:07:49.897 99.90000% : 34078.720us 00:07:49.897 99.99000% : 34280.369us 00:07:49.897 99.99900% : 34482.018us 00:07:49.897 99.99990% : 34482.018us 00:07:49.897 99.99999% : 34482.018us 00:07:49.897 00:07:49.897 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:07:49.897 ================================================================================= 00:07:49.897 1.00000% : 6024.271us 00:07:49.897 10.00000% : 6301.538us 00:07:49.897 25.00000% : 6704.837us 00:07:49.897 50.00000% : 8065.969us 00:07:49.897 75.00000% : 8570.092us 00:07:49.897 90.00000% : 9074.215us 00:07:49.897 95.00000% : 9779.988us 00:07:49.897 98.00000% : 11342.769us 00:07:49.897 99.00000% : 12351.015us 00:07:49.898 99.50000% : 27020.997us 00:07:49.898 99.90000% : 32667.175us 00:07:49.898 99.99000% : 33070.474us 00:07:49.898 99.99900% : 33070.474us 00:07:49.898 99.99990% : 33070.474us 00:07:49.898 99.99999% : 33070.474us 00:07:49.898 00:07:49.898 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:07:49.898 ================================================================================= 00:07:49.898 1.00000% : 5948.652us 00:07:49.898 10.00000% : 6276.332us 00:07:49.898 25.00000% : 6704.837us 00:07:49.898 50.00000% : 8015.557us 00:07:49.898 75.00000% : 8620.505us 00:07:49.898 90.00000% : 9124.628us 00:07:49.898 95.00000% : 9779.988us 00:07:49.898 98.00000% : 11191.532us 00:07:49.898 99.00000% : 12451.840us 00:07:49.898 99.50000% : 25407.803us 00:07:49.898 99.90000% : 31255.631us 00:07:49.898 99.99000% : 31457.280us 00:07:49.898 99.99900% : 31658.929us 00:07:49.898 99.99990% : 31658.929us 00:07:49.898 99.99999% : 31658.929us 00:07:49.898 00:07:49.898 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:07:49.898 ================================================================================= 00:07:49.898 1.00000% : 5999.065us 00:07:49.898 10.00000% : 6301.538us 00:07:49.898 25.00000% : 6654.425us 00:07:49.898 50.00000% : 8065.969us 00:07:49.898 75.00000% : 8620.505us 00:07:49.898 90.00000% : 9124.628us 00:07:49.898 95.00000% : 9779.988us 00:07:49.898 98.00000% : 11141.120us 00:07:49.898 99.00000% : 
12250.191us 00:07:49.898 99.50000% : 23693.785us 00:07:49.898 99.90000% : 29440.788us 00:07:49.898 99.99000% : 29844.086us 00:07:49.898 99.99900% : 29844.086us 00:07:49.898 99.99990% : 29844.086us 00:07:49.898 99.99999% : 29844.086us 00:07:49.898 00:07:49.898 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:07:49.898 ================================================================================= 00:07:49.898 1.00000% : 6024.271us 00:07:49.898 10.00000% : 6301.538us 00:07:49.898 25.00000% : 6654.425us 00:07:49.898 50.00000% : 8065.969us 00:07:49.898 75.00000% : 8570.092us 00:07:49.898 90.00000% : 9074.215us 00:07:49.898 95.00000% : 9779.988us 00:07:49.898 98.00000% : 11191.532us 00:07:49.898 99.00000% : 12401.428us 00:07:49.898 99.50000% : 22988.012us 00:07:49.898 99.90000% : 28634.191us 00:07:49.898 99.99000% : 28835.840us 00:07:49.898 99.99900% : 28835.840us 00:07:49.898 99.99990% : 28835.840us 00:07:49.898 99.99999% : 28835.840us 00:07:49.898 00:07:49.898 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:07:49.898 ================================================================================= 00:07:49.898 1.00000% : 6024.271us 00:07:49.898 10.00000% : 6301.538us 00:07:49.898 25.00000% : 6704.837us 00:07:49.898 50.00000% : 8065.969us 00:07:49.898 75.00000% : 8620.505us 00:07:49.898 90.00000% : 9074.215us 00:07:49.898 95.00000% : 9679.163us 00:07:49.898 98.00000% : 11393.182us 00:07:49.898 99.00000% : 12300.603us 00:07:49.898 99.50000% : 16636.062us 00:07:49.898 99.90000% : 22282.240us 00:07:49.898 99.99000% : 22584.714us 00:07:49.898 99.99900% : 22685.538us 00:07:49.898 99.99990% : 22685.538us 00:07:49.898 99.99999% : 22685.538us 00:07:49.898 00:07:49.898 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:07:49.898 ============================================================================== 00:07:49.898 Range in us Cumulative IO count 00:07:49.898 5747.003 - 5772.209: 0.0062% ( 1) 00:07:49.898 5772.209 - 5797.415: 0.0248% ( 3) 00:07:49.898 5797.415 - 5822.622: 0.0434% ( 3) 00:07:49.898 5822.622 - 5847.828: 0.0806% ( 6) 00:07:49.898 5847.828 - 5873.034: 0.1488% ( 11) 00:07:49.898 5873.034 - 5898.240: 0.2108% ( 10) 00:07:49.898 5898.240 - 5923.446: 0.3038% ( 15) 00:07:49.898 5923.446 - 5948.652: 0.4588% ( 25) 00:07:49.898 5948.652 - 5973.858: 0.6448% ( 30) 00:07:49.898 5973.858 - 5999.065: 0.8929% ( 40) 00:07:49.898 5999.065 - 6024.271: 1.2463% ( 57) 00:07:49.898 6024.271 - 6049.477: 1.7857% ( 87) 00:07:49.898 6049.477 - 6074.683: 2.4244% ( 103) 00:07:49.898 6074.683 - 6099.889: 3.0196% ( 96) 00:07:49.898 6099.889 - 6125.095: 3.7202% ( 113) 00:07:49.898 6125.095 - 6150.302: 4.5325% ( 131) 00:07:49.898 6150.302 - 6175.508: 5.3075% ( 125) 00:07:49.898 6175.508 - 6200.714: 6.1880% ( 142) 00:07:49.898 6200.714 - 6225.920: 7.0250% ( 135) 00:07:49.898 6225.920 - 6251.126: 7.9799% ( 154) 00:07:49.898 6251.126 - 6276.332: 8.9720% ( 160) 00:07:49.898 6276.332 - 6301.538: 9.9764% ( 162) 00:07:49.898 6301.538 - 6326.745: 11.0181% ( 168) 00:07:49.898 6326.745 - 6351.951: 12.0102% ( 160) 00:07:49.898 6351.951 - 6377.157: 13.1138% ( 178) 00:07:49.898 6377.157 - 6402.363: 14.1493% ( 167) 00:07:49.898 6402.363 - 6427.569: 15.1476% ( 161) 00:07:49.898 6427.569 - 6452.775: 16.1458% ( 161) 00:07:49.898 6452.775 - 6503.188: 18.3036% ( 348) 00:07:49.898 6503.188 - 6553.600: 20.4365% ( 344) 00:07:49.898 6553.600 - 6604.012: 22.6066% ( 350) 00:07:49.898 6604.012 - 6654.425: 24.7024% ( 338) 00:07:49.898 6654.425 - 6704.837: 26.8725% ( 350) 00:07:49.898 
6704.837 - 6755.249: 28.9125% ( 329) 00:07:49.898 6755.249 - 6805.662: 30.9090% ( 322) 00:07:49.898 6805.662 - 6856.074: 32.6203% ( 276) 00:07:49.898 6856.074 - 6906.486: 33.9596% ( 216) 00:07:49.898 6906.486 - 6956.898: 35.0880% ( 182) 00:07:49.898 6956.898 - 7007.311: 35.9003% ( 131) 00:07:49.898 7007.311 - 7057.723: 36.5327% ( 102) 00:07:49.898 7057.723 - 7108.135: 36.9978% ( 75) 00:07:49.898 7108.135 - 7158.548: 37.3760% ( 61) 00:07:49.898 7158.548 - 7208.960: 37.6984% ( 52) 00:07:49.898 7208.960 - 7259.372: 38.0084% ( 50) 00:07:49.898 7259.372 - 7309.785: 38.3495% ( 55) 00:07:49.898 7309.785 - 7360.197: 38.7029% ( 57) 00:07:49.898 7360.197 - 7410.609: 39.0005% ( 48) 00:07:49.898 7410.609 - 7461.022: 39.3477% ( 56) 00:07:49.898 7461.022 - 7511.434: 39.7693% ( 68) 00:07:49.898 7511.434 - 7561.846: 40.2034% ( 70) 00:07:49.898 7561.846 - 7612.258: 40.6746% ( 76) 00:07:49.898 7612.258 - 7662.671: 41.2326% ( 90) 00:07:49.898 7662.671 - 7713.083: 41.8775% ( 104) 00:07:49.898 7713.083 - 7763.495: 42.8385% ( 155) 00:07:49.898 7763.495 - 7813.908: 43.8988% ( 171) 00:07:49.898 7813.908 - 7864.320: 45.2877% ( 224) 00:07:49.898 7864.320 - 7914.732: 46.6270% ( 216) 00:07:49.898 7914.732 - 7965.145: 48.1213% ( 241) 00:07:49.898 7965.145 - 8015.557: 49.9504% ( 295) 00:07:49.898 8015.557 - 8065.969: 51.8601% ( 308) 00:07:49.898 8065.969 - 8116.382: 53.8876% ( 327) 00:07:49.898 8116.382 - 8166.794: 55.9648% ( 335) 00:07:49.898 8166.794 - 8217.206: 58.1721% ( 356) 00:07:49.898 8217.206 - 8267.618: 60.4787% ( 372) 00:07:49.898 8267.618 - 8318.031: 62.6736% ( 354) 00:07:49.898 8318.031 - 8368.443: 65.1228% ( 395) 00:07:49.898 8368.443 - 8418.855: 67.5843% ( 397) 00:07:49.898 8418.855 - 8469.268: 70.1141% ( 408) 00:07:49.898 8469.268 - 8519.680: 72.6190% ( 404) 00:07:49.898 8519.680 - 8570.092: 74.9194% ( 371) 00:07:49.898 8570.092 - 8620.505: 77.0709% ( 347) 00:07:49.898 8620.505 - 8670.917: 79.0737% ( 323) 00:07:49.898 8670.917 - 8721.329: 80.8594% ( 288) 00:07:49.898 8721.329 - 8771.742: 82.6327% ( 286) 00:07:49.898 8771.742 - 8822.154: 84.2014% ( 253) 00:07:49.898 8822.154 - 8872.566: 85.6337% ( 231) 00:07:49.898 8872.566 - 8922.978: 86.8924% ( 203) 00:07:49.898 8922.978 - 8973.391: 88.0456% ( 186) 00:07:49.898 8973.391 - 9023.803: 89.0687% ( 165) 00:07:49.898 9023.803 - 9074.215: 90.0050% ( 151) 00:07:49.898 9074.215 - 9124.628: 90.8482% ( 136) 00:07:49.898 9124.628 - 9175.040: 91.5365% ( 111) 00:07:49.898 9175.040 - 9225.452: 92.1503% ( 99) 00:07:49.898 9225.452 - 9275.865: 92.7207% ( 92) 00:07:49.898 9275.865 - 9326.277: 93.1672% ( 72) 00:07:49.898 9326.277 - 9376.689: 93.5268% ( 58) 00:07:49.898 9376.689 - 9427.102: 93.8306% ( 49) 00:07:49.898 9427.102 - 9477.514: 94.1220% ( 47) 00:07:49.898 9477.514 - 9527.926: 94.3390% ( 35) 00:07:49.898 9527.926 - 9578.338: 94.5250% ( 30) 00:07:49.898 9578.338 - 9628.751: 94.7173% ( 31) 00:07:49.898 9628.751 - 9679.163: 94.9033% ( 30) 00:07:49.898 9679.163 - 9729.575: 95.0769% ( 28) 00:07:49.898 9729.575 - 9779.988: 95.2629% ( 30) 00:07:49.898 9779.988 - 9830.400: 95.3993% ( 22) 00:07:49.898 9830.400 - 9880.812: 95.5667% ( 27) 00:07:49.898 9880.812 - 9931.225: 95.7155% ( 24) 00:07:49.898 9931.225 - 9981.637: 95.8643% ( 24) 00:07:49.898 9981.637 - 10032.049: 95.9945% ( 21) 00:07:49.898 10032.049 - 10082.462: 96.0938% ( 16) 00:07:49.898 10082.462 - 10132.874: 96.2178% ( 20) 00:07:49.898 10132.874 - 10183.286: 96.3294% ( 18) 00:07:49.898 10183.286 - 10233.698: 96.4472% ( 19) 00:07:49.898 10233.698 - 10284.111: 96.5216% ( 12) 00:07:49.898 10284.111 - 10334.523: 
96.5836% ( 10) 00:07:49.898 10334.523 - 10384.935: 96.6766% ( 15) 00:07:49.898 10384.935 - 10435.348: 96.7634% ( 14) 00:07:49.898 10435.348 - 10485.760: 96.8254% ( 10) 00:07:49.898 10485.760 - 10536.172: 96.8998% ( 12) 00:07:49.898 10536.172 - 10586.585: 96.9742% ( 12) 00:07:49.898 10586.585 - 10636.997: 97.0486% ( 12) 00:07:49.898 10636.997 - 10687.409: 97.1106% ( 10) 00:07:49.898 10687.409 - 10737.822: 97.1974% ( 14) 00:07:49.898 10737.822 - 10788.234: 97.2842% ( 14) 00:07:49.898 10788.234 - 10838.646: 97.3462% ( 10) 00:07:49.898 10838.646 - 10889.058: 97.4082% ( 10) 00:07:49.898 10889.058 - 10939.471: 97.4640% ( 9) 00:07:49.898 10939.471 - 10989.883: 97.5384% ( 12) 00:07:49.898 10989.883 - 11040.295: 97.6252% ( 14) 00:07:49.898 11040.295 - 11090.708: 97.7493% ( 20) 00:07:49.898 11090.708 - 11141.120: 97.8113% ( 10) 00:07:49.898 11141.120 - 11191.532: 97.8981% ( 14) 00:07:49.898 11191.532 - 11241.945: 97.9477% ( 8) 00:07:49.898 11241.945 - 11292.357: 98.0035% ( 9) 00:07:49.898 11292.357 - 11342.769: 98.0407% ( 6) 00:07:49.898 11342.769 - 11393.182: 98.0841% ( 7) 00:07:49.898 11393.182 - 11443.594: 98.1213% ( 6) 00:07:49.898 11443.594 - 11494.006: 98.1709% ( 8) 00:07:49.898 11494.006 - 11544.418: 98.2391% ( 11) 00:07:49.898 11544.418 - 11594.831: 98.2887% ( 8) 00:07:49.898 11594.831 - 11645.243: 98.3631% ( 12) 00:07:49.898 11645.243 - 11695.655: 98.4189% ( 9) 00:07:49.898 11695.655 - 11746.068: 98.4809% ( 10) 00:07:49.898 11746.068 - 11796.480: 98.5429% ( 10) 00:07:49.898 11796.480 - 11846.892: 98.6173% ( 12) 00:07:49.898 11846.892 - 11897.305: 98.6793% ( 10) 00:07:49.898 11897.305 - 11947.717: 98.7289% ( 8) 00:07:49.898 11947.717 - 11998.129: 98.7909% ( 10) 00:07:49.899 11998.129 - 12048.542: 98.8467% ( 9) 00:07:49.899 12048.542 - 12098.954: 98.9025% ( 9) 00:07:49.899 12098.954 - 12149.366: 98.9645% ( 10) 00:07:49.899 12149.366 - 12199.778: 99.0141% ( 8) 00:07:49.899 12199.778 - 12250.191: 99.0575% ( 7) 00:07:49.899 12250.191 - 12300.603: 99.0947% ( 6) 00:07:49.899 12300.603 - 12351.015: 99.1257% ( 5) 00:07:49.899 12351.015 - 12401.428: 99.1505% ( 4) 00:07:49.899 12401.428 - 12451.840: 99.1753% ( 4) 00:07:49.899 12451.840 - 12502.252: 99.1877% ( 2) 00:07:49.899 12502.252 - 12552.665: 99.1939% ( 1) 00:07:49.899 12552.665 - 12603.077: 99.2001% ( 1) 00:07:49.899 12603.077 - 12653.489: 99.2063% ( 1) 00:07:49.899 27020.997 - 27222.646: 99.2250% ( 3) 00:07:49.899 27222.646 - 27424.295: 99.2746% ( 8) 00:07:49.899 27424.295 - 27625.945: 99.3304% ( 9) 00:07:49.899 27625.945 - 27827.594: 99.3862% ( 9) 00:07:49.899 27827.594 - 28029.243: 99.4358% ( 8) 00:07:49.899 28029.243 - 28230.892: 99.4916% ( 9) 00:07:49.899 28230.892 - 28432.542: 99.5412% ( 8) 00:07:49.899 28432.542 - 28634.191: 99.5970% ( 9) 00:07:49.899 28634.191 - 28835.840: 99.6032% ( 1) 00:07:49.899 32667.175 - 32868.825: 99.6218% ( 3) 00:07:49.899 32868.825 - 33070.474: 99.6714% ( 8) 00:07:49.899 33070.474 - 33272.123: 99.7272% ( 9) 00:07:49.899 33272.123 - 33473.772: 99.7830% ( 9) 00:07:49.899 33473.772 - 33675.422: 99.8388% ( 9) 00:07:49.899 33675.422 - 33877.071: 99.8884% ( 8) 00:07:49.899 33877.071 - 34078.720: 99.9442% ( 9) 00:07:49.899 34078.720 - 34280.369: 99.9938% ( 8) 00:07:49.899 34280.369 - 34482.018: 100.0000% ( 1) 00:07:49.899 00:07:49.899 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:07:49.899 ============================================================================== 00:07:49.899 Range in us Cumulative IO count 00:07:49.899 5797.415 - 5822.622: 0.0186% ( 3) 00:07:49.899 5822.622 - 5847.828: 0.1488% 
( 21) 00:07:49.899 5847.828 - 5873.034: 0.1860% ( 6) 00:07:49.899 5873.034 - 5898.240: 0.2418% ( 9) 00:07:49.899 5898.240 - 5923.446: 0.3410% ( 16) 00:07:49.899 5923.446 - 5948.652: 0.4030% ( 10) 00:07:49.899 5948.652 - 5973.858: 0.6200% ( 35) 00:07:49.899 5973.858 - 5999.065: 0.8929% ( 44) 00:07:49.899 5999.065 - 6024.271: 1.3579% ( 75) 00:07:49.899 6024.271 - 6049.477: 1.9159% ( 90) 00:07:49.899 6049.477 - 6074.683: 2.4802% ( 91) 00:07:49.899 6074.683 - 6099.889: 3.1312% ( 105) 00:07:49.899 6099.889 - 6125.095: 3.8008% ( 108) 00:07:49.899 6125.095 - 6150.302: 4.5883% ( 127) 00:07:49.899 6150.302 - 6175.508: 5.3385% ( 121) 00:07:49.899 6175.508 - 6200.714: 6.1198% ( 126) 00:07:49.899 6200.714 - 6225.920: 6.9940% ( 141) 00:07:49.899 6225.920 - 6251.126: 7.9923% ( 161) 00:07:49.899 6251.126 - 6276.332: 9.0216% ( 166) 00:07:49.899 6276.332 - 6301.538: 10.0260% ( 162) 00:07:49.899 6301.538 - 6326.745: 11.0677% ( 168) 00:07:49.899 6326.745 - 6351.951: 12.0226% ( 154) 00:07:49.899 6351.951 - 6377.157: 13.0394% ( 164) 00:07:49.899 6377.157 - 6402.363: 14.0625% ( 165) 00:07:49.899 6402.363 - 6427.569: 15.0546% ( 160) 00:07:49.899 6427.569 - 6452.775: 16.0962% ( 168) 00:07:49.899 6452.775 - 6503.188: 18.3098% ( 357) 00:07:49.899 6503.188 - 6553.600: 20.5481% ( 361) 00:07:49.899 6553.600 - 6604.012: 22.7059% ( 348) 00:07:49.899 6604.012 - 6654.425: 24.8388% ( 344) 00:07:49.899 6654.425 - 6704.837: 26.8725% ( 328) 00:07:49.899 6704.837 - 6755.249: 28.9125% ( 329) 00:07:49.899 6755.249 - 6805.662: 30.8160% ( 307) 00:07:49.899 6805.662 - 6856.074: 32.4591% ( 265) 00:07:49.899 6856.074 - 6906.486: 33.7736% ( 212) 00:07:49.899 6906.486 - 6956.898: 34.8710% ( 177) 00:07:49.899 6956.898 - 7007.311: 35.7143% ( 136) 00:07:49.899 7007.311 - 7057.723: 36.3591% ( 104) 00:07:49.899 7057.723 - 7108.135: 36.8490% ( 79) 00:07:49.899 7108.135 - 7158.548: 37.2396% ( 63) 00:07:49.899 7158.548 - 7208.960: 37.5868% ( 56) 00:07:49.899 7208.960 - 7259.372: 37.9402% ( 57) 00:07:49.899 7259.372 - 7309.785: 38.2564% ( 51) 00:07:49.899 7309.785 - 7360.197: 38.6099% ( 57) 00:07:49.899 7360.197 - 7410.609: 38.9261% ( 51) 00:07:49.899 7410.609 - 7461.022: 39.2299% ( 49) 00:07:49.899 7461.022 - 7511.434: 39.7383% ( 82) 00:07:49.899 7511.434 - 7561.846: 40.1166% ( 61) 00:07:49.899 7561.846 - 7612.258: 40.5382% ( 68) 00:07:49.899 7612.258 - 7662.671: 41.1768% ( 103) 00:07:49.899 7662.671 - 7713.083: 41.9271% ( 121) 00:07:49.899 7713.083 - 7763.495: 42.7641% ( 135) 00:07:49.899 7763.495 - 7813.908: 44.0104% ( 201) 00:07:49.899 7813.908 - 7864.320: 45.2691% ( 203) 00:07:49.899 7864.320 - 7914.732: 46.5526% ( 207) 00:07:49.899 7914.732 - 7965.145: 47.9353% ( 223) 00:07:49.899 7965.145 - 8015.557: 49.5846% ( 266) 00:07:49.899 8015.557 - 8065.969: 51.3951% ( 292) 00:07:49.899 8065.969 - 8116.382: 53.5590% ( 349) 00:07:49.899 8116.382 - 8166.794: 55.7416% ( 352) 00:07:49.899 8166.794 - 8217.206: 58.0667% ( 375) 00:07:49.899 8217.206 - 8267.618: 60.4291% ( 381) 00:07:49.899 8267.618 - 8318.031: 62.8596% ( 392) 00:07:49.899 8318.031 - 8368.443: 65.2592% ( 387) 00:07:49.899 8368.443 - 8418.855: 67.7455% ( 401) 00:07:49.899 8418.855 - 8469.268: 70.3373% ( 418) 00:07:49.899 8469.268 - 8519.680: 72.8051% ( 398) 00:07:49.899 8519.680 - 8570.092: 75.1116% ( 372) 00:07:49.899 8570.092 - 8620.505: 77.2507% ( 345) 00:07:49.899 8620.505 - 8670.917: 79.3403% ( 337) 00:07:49.899 8670.917 - 8721.329: 81.2190% ( 303) 00:07:49.899 8721.329 - 8771.742: 82.9861% ( 285) 00:07:49.899 8771.742 - 8822.154: 84.6540% ( 269) 00:07:49.899 8822.154 - 8872.566: 
85.9437% ( 208) 00:07:49.899 8872.566 - 8922.978: 87.0908% ( 185) 00:07:49.899 8922.978 - 8973.391: 88.3123% ( 197) 00:07:49.899 8973.391 - 9023.803: 89.3539% ( 168) 00:07:49.899 9023.803 - 9074.215: 90.1414% ( 127) 00:07:49.899 9074.215 - 9124.628: 90.8482% ( 114) 00:07:49.899 9124.628 - 9175.040: 91.4683% ( 100) 00:07:49.899 9175.040 - 9225.452: 92.0883% ( 100) 00:07:49.899 9225.452 - 9275.865: 92.5533% ( 75) 00:07:49.899 9275.865 - 9326.277: 92.9315% ( 61) 00:07:49.899 9326.277 - 9376.689: 93.2850% ( 57) 00:07:49.899 9376.689 - 9427.102: 93.5950% ( 50) 00:07:49.899 9427.102 - 9477.514: 93.9112% ( 51) 00:07:49.899 9477.514 - 9527.926: 94.2150% ( 49) 00:07:49.899 9527.926 - 9578.338: 94.4072% ( 31) 00:07:49.899 9578.338 - 9628.751: 94.5871% ( 29) 00:07:49.899 9628.751 - 9679.163: 94.7979% ( 34) 00:07:49.899 9679.163 - 9729.575: 94.9839% ( 30) 00:07:49.899 9729.575 - 9779.988: 95.1575% ( 28) 00:07:49.899 9779.988 - 9830.400: 95.3249% ( 27) 00:07:49.899 9830.400 - 9880.812: 95.5233% ( 32) 00:07:49.899 9880.812 - 9931.225: 95.6969% ( 28) 00:07:49.899 9931.225 - 9981.637: 95.8395% ( 23) 00:07:49.899 9981.637 - 10032.049: 96.0069% ( 27) 00:07:49.899 10032.049 - 10082.462: 96.1496% ( 23) 00:07:49.899 10082.462 - 10132.874: 96.2860% ( 22) 00:07:49.899 10132.874 - 10183.286: 96.3914% ( 17) 00:07:49.899 10183.286 - 10233.698: 96.4658% ( 12) 00:07:49.899 10233.698 - 10284.111: 96.5526% ( 14) 00:07:49.899 10284.111 - 10334.523: 96.6270% ( 12) 00:07:49.899 10334.523 - 10384.935: 96.7262% ( 16) 00:07:49.899 10384.935 - 10435.348: 96.8440% ( 19) 00:07:49.899 10435.348 - 10485.760: 96.9742% ( 21) 00:07:49.899 10485.760 - 10536.172: 97.0672% ( 15) 00:07:49.899 10536.172 - 10586.585: 97.1478% ( 13) 00:07:49.899 10586.585 - 10636.997: 97.2346% ( 14) 00:07:49.899 10636.997 - 10687.409: 97.3090% ( 12) 00:07:49.899 10687.409 - 10737.822: 97.4206% ( 18) 00:07:49.899 10737.822 - 10788.234: 97.4950% ( 12) 00:07:49.899 10788.234 - 10838.646: 97.5508% ( 9) 00:07:49.899 10838.646 - 10889.058: 97.5942% ( 7) 00:07:49.899 10889.058 - 10939.471: 97.6438% ( 8) 00:07:49.899 10939.471 - 10989.883: 97.6997% ( 9) 00:07:49.899 10989.883 - 11040.295: 97.7617% ( 10) 00:07:49.899 11040.295 - 11090.708: 97.8237% ( 10) 00:07:49.899 11090.708 - 11141.120: 97.8671% ( 7) 00:07:49.899 11141.120 - 11191.532: 97.9043% ( 6) 00:07:49.899 11191.532 - 11241.945: 97.9415% ( 6) 00:07:49.899 11241.945 - 11292.357: 97.9911% ( 8) 00:07:49.899 11292.357 - 11342.769: 98.0531% ( 10) 00:07:49.899 11342.769 - 11393.182: 98.1213% ( 11) 00:07:49.899 11393.182 - 11443.594: 98.2019% ( 13) 00:07:49.899 11443.594 - 11494.006: 98.2825% ( 13) 00:07:49.899 11494.006 - 11544.418: 98.3507% ( 11) 00:07:49.899 11544.418 - 11594.831: 98.4127% ( 10) 00:07:49.899 11594.831 - 11645.243: 98.4437% ( 5) 00:07:49.899 11645.243 - 11695.655: 98.5119% ( 11) 00:07:49.899 11695.655 - 11746.068: 98.5801% ( 11) 00:07:49.899 11746.068 - 11796.480: 98.6359% ( 9) 00:07:49.899 11796.480 - 11846.892: 98.6793% ( 7) 00:07:49.899 11846.892 - 11897.305: 98.7289% ( 8) 00:07:49.899 11897.305 - 11947.717: 98.7785% ( 8) 00:07:49.899 11947.717 - 11998.129: 98.8033% ( 4) 00:07:49.899 11998.129 - 12048.542: 98.8281% ( 4) 00:07:49.899 12048.542 - 12098.954: 98.8777% ( 8) 00:07:49.899 12098.954 - 12149.366: 98.9149% ( 6) 00:07:49.899 12149.366 - 12199.778: 98.9397% ( 4) 00:07:49.899 12199.778 - 12250.191: 98.9707% ( 5) 00:07:49.899 12250.191 - 12300.603: 98.9893% ( 3) 00:07:49.899 12300.603 - 12351.015: 99.0203% ( 5) 00:07:49.899 12351.015 - 12401.428: 99.0389% ( 3) 00:07:49.899 12401.428 - 
12451.840: 99.0699% ( 5) 00:07:49.899 12451.840 - 12502.252: 99.0885% ( 3) 00:07:49.899 12502.252 - 12552.665: 99.1195% ( 5) 00:07:49.899 12552.665 - 12603.077: 99.1443% ( 4) 00:07:49.899 12603.077 - 12653.489: 99.1629% ( 3) 00:07:49.899 12653.489 - 12703.902: 99.1753% ( 2) 00:07:49.899 12703.902 - 12754.314: 99.1877% ( 2) 00:07:49.899 12754.314 - 12804.726: 99.2001% ( 2) 00:07:49.899 12804.726 - 12855.138: 99.2063% ( 1) 00:07:49.899 25710.277 - 25811.102: 99.2125% ( 1) 00:07:49.899 25811.102 - 26012.751: 99.2622% ( 8) 00:07:49.899 26012.751 - 26214.400: 99.3118% ( 8) 00:07:49.899 26214.400 - 26416.049: 99.3676% ( 9) 00:07:49.899 26416.049 - 26617.698: 99.4172% ( 8) 00:07:49.899 26617.698 - 26819.348: 99.4668% ( 8) 00:07:49.899 26819.348 - 27020.997: 99.5164% ( 8) 00:07:49.899 27020.997 - 27222.646: 99.5660% ( 8) 00:07:49.899 27222.646 - 27424.295: 99.6032% ( 6) 00:07:49.899 31255.631 - 31457.280: 99.6218% ( 3) 00:07:49.899 31457.280 - 31658.929: 99.6714% ( 8) 00:07:49.899 31658.929 - 31860.578: 99.7210% ( 8) 00:07:49.899 31860.578 - 32062.228: 99.7768% ( 9) 00:07:49.899 32062.228 - 32263.877: 99.8326% ( 9) 00:07:49.900 32263.877 - 32465.526: 99.8822% ( 8) 00:07:49.900 32465.526 - 32667.175: 99.9318% ( 8) 00:07:49.900 32667.175 - 32868.825: 99.9876% ( 9) 00:07:49.900 32868.825 - 33070.474: 100.0000% ( 2) 00:07:49.900 00:07:49.900 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:07:49.900 ============================================================================== 00:07:49.900 Range in us Cumulative IO count 00:07:49.900 5696.591 - 5721.797: 0.0062% ( 1) 00:07:49.900 5721.797 - 5747.003: 0.0434% ( 6) 00:07:49.900 5747.003 - 5772.209: 0.0620% ( 3) 00:07:49.900 5772.209 - 5797.415: 0.1178% ( 9) 00:07:49.900 5797.415 - 5822.622: 0.2480% ( 21) 00:07:49.900 5822.622 - 5847.828: 0.3472% ( 16) 00:07:49.900 5847.828 - 5873.034: 0.4898% ( 23) 00:07:49.900 5873.034 - 5898.240: 0.6882% ( 32) 00:07:49.900 5898.240 - 5923.446: 0.9487% ( 42) 00:07:49.900 5923.446 - 5948.652: 1.2463% ( 48) 00:07:49.900 5948.652 - 5973.858: 1.6617% ( 67) 00:07:49.900 5973.858 - 5999.065: 2.1701% ( 82) 00:07:49.900 5999.065 - 6024.271: 2.8336% ( 107) 00:07:49.900 6024.271 - 6049.477: 3.3730% ( 87) 00:07:49.900 6049.477 - 6074.683: 4.0365% ( 107) 00:07:49.900 6074.683 - 6099.889: 4.7805% ( 120) 00:07:49.900 6099.889 - 6125.095: 5.4688% ( 111) 00:07:49.900 6125.095 - 6150.302: 6.2748% ( 130) 00:07:49.900 6150.302 - 6175.508: 6.9630% ( 111) 00:07:49.900 6175.508 - 6200.714: 7.8559% ( 144) 00:07:49.900 6200.714 - 6225.920: 8.6806% ( 133) 00:07:49.900 6225.920 - 6251.126: 9.5052% ( 133) 00:07:49.900 6251.126 - 6276.332: 10.3671% ( 139) 00:07:49.900 6276.332 - 6301.538: 11.1855% ( 132) 00:07:49.900 6301.538 - 6326.745: 12.1280% ( 152) 00:07:49.900 6326.745 - 6351.951: 13.0146% ( 143) 00:07:49.900 6351.951 - 6377.157: 13.8703% ( 138) 00:07:49.900 6377.157 - 6402.363: 14.8065% ( 151) 00:07:49.900 6402.363 - 6427.569: 15.6250% ( 132) 00:07:49.900 6427.569 - 6452.775: 16.5055% ( 142) 00:07:49.900 6452.775 - 6503.188: 18.4028% ( 306) 00:07:49.900 6503.188 - 6553.600: 20.1947% ( 289) 00:07:49.900 6553.600 - 6604.012: 22.0610% ( 301) 00:07:49.900 6604.012 - 6654.425: 23.9149% ( 299) 00:07:49.900 6654.425 - 6704.837: 25.7750% ( 300) 00:07:49.900 6704.837 - 6755.249: 27.5856% ( 292) 00:07:49.900 6755.249 - 6805.662: 29.4023% ( 293) 00:07:49.900 6805.662 - 6856.074: 31.1880% ( 288) 00:07:49.900 6856.074 - 6906.486: 32.8125% ( 262) 00:07:49.900 6906.486 - 6956.898: 34.1704% ( 219) 00:07:49.900 6956.898 - 7007.311: 35.2989% 
( 182) 00:07:49.900 7007.311 - 7057.723: 36.1731% ( 141) 00:07:49.900 7057.723 - 7108.135: 36.8242% ( 105) 00:07:49.900 7108.135 - 7158.548: 37.3822% ( 90) 00:07:49.900 7158.548 - 7208.960: 37.8162% ( 70) 00:07:49.900 7208.960 - 7259.372: 38.2006% ( 62) 00:07:49.900 7259.372 - 7309.785: 38.5913% ( 63) 00:07:49.900 7309.785 - 7360.197: 38.9199% ( 53) 00:07:49.900 7360.197 - 7410.609: 39.2113% ( 47) 00:07:49.900 7410.609 - 7461.022: 39.5647% ( 57) 00:07:49.900 7461.022 - 7511.434: 40.0422% ( 77) 00:07:49.900 7511.434 - 7561.846: 40.4638% ( 68) 00:07:49.900 7561.846 - 7612.258: 40.9598% ( 80) 00:07:49.900 7612.258 - 7662.671: 41.6729% ( 115) 00:07:49.900 7662.671 - 7713.083: 42.5285% ( 138) 00:07:49.900 7713.083 - 7763.495: 43.4710% ( 152) 00:07:49.900 7763.495 - 7813.908: 44.6863% ( 196) 00:07:49.900 7813.908 - 7864.320: 45.9015% ( 196) 00:07:49.900 7864.320 - 7914.732: 47.4454% ( 249) 00:07:49.900 7914.732 - 7965.145: 48.9459% ( 242) 00:07:49.900 7965.145 - 8015.557: 50.4898% ( 249) 00:07:49.900 8015.557 - 8065.969: 52.2879% ( 290) 00:07:49.900 8065.969 - 8116.382: 54.3713% ( 336) 00:07:49.900 8116.382 - 8166.794: 56.2748% ( 307) 00:07:49.900 8166.794 - 8217.206: 58.4635% ( 353) 00:07:49.900 8217.206 - 8267.618: 60.6709% ( 356) 00:07:49.900 8267.618 - 8318.031: 62.9402% ( 366) 00:07:49.900 8318.031 - 8368.443: 65.2158% ( 367) 00:07:49.900 8368.443 - 8418.855: 67.4851% ( 366) 00:07:49.900 8418.855 - 8469.268: 69.7793% ( 370) 00:07:49.900 8469.268 - 8519.680: 72.0114% ( 360) 00:07:49.900 8519.680 - 8570.092: 74.2312% ( 358) 00:07:49.900 8570.092 - 8620.505: 76.3827% ( 347) 00:07:49.900 8620.505 - 8670.917: 78.4908% ( 340) 00:07:49.900 8670.917 - 8721.329: 80.2827% ( 289) 00:07:49.900 8721.329 - 8771.742: 82.0685% ( 288) 00:07:49.900 8771.742 - 8822.154: 83.6558% ( 256) 00:07:49.900 8822.154 - 8872.566: 85.1749% ( 245) 00:07:49.900 8872.566 - 8922.978: 86.4645% ( 208) 00:07:49.900 8922.978 - 8973.391: 87.5558% ( 176) 00:07:49.900 8973.391 - 9023.803: 88.6223% ( 172) 00:07:49.900 9023.803 - 9074.215: 89.4345% ( 131) 00:07:49.900 9074.215 - 9124.628: 90.1538% ( 116) 00:07:49.900 9124.628 - 9175.040: 90.8420% ( 111) 00:07:49.900 9175.040 - 9225.452: 91.4000% ( 90) 00:07:49.900 9225.452 - 9275.865: 91.8899% ( 79) 00:07:49.900 9275.865 - 9326.277: 92.4293% ( 87) 00:07:49.900 9326.277 - 9376.689: 92.9315% ( 81) 00:07:49.900 9376.689 - 9427.102: 93.2416% ( 50) 00:07:49.900 9427.102 - 9477.514: 93.5764% ( 54) 00:07:49.900 9477.514 - 9527.926: 93.8306% ( 41) 00:07:49.900 9527.926 - 9578.338: 94.1096% ( 45) 00:07:49.900 9578.338 - 9628.751: 94.3266% ( 35) 00:07:49.900 9628.751 - 9679.163: 94.5623% ( 38) 00:07:49.900 9679.163 - 9729.575: 94.8103% ( 40) 00:07:49.900 9729.575 - 9779.988: 95.0955% ( 46) 00:07:49.900 9779.988 - 9830.400: 95.2877% ( 31) 00:07:49.900 9830.400 - 9880.812: 95.4799% ( 31) 00:07:49.900 9880.812 - 9931.225: 95.6473% ( 27) 00:07:49.900 9931.225 - 9981.637: 95.8085% ( 26) 00:07:49.900 9981.637 - 10032.049: 95.9449% ( 22) 00:07:49.900 10032.049 - 10082.462: 96.0565% ( 18) 00:07:49.900 10082.462 - 10132.874: 96.1992% ( 23) 00:07:49.900 10132.874 - 10183.286: 96.3232% ( 20) 00:07:49.900 10183.286 - 10233.698: 96.4224% ( 16) 00:07:49.900 10233.698 - 10284.111: 96.5588% ( 22) 00:07:49.900 10284.111 - 10334.523: 96.6704% ( 18) 00:07:49.900 10334.523 - 10384.935: 96.7758% ( 17) 00:07:49.900 10384.935 - 10435.348: 96.8750% ( 16) 00:07:49.900 10435.348 - 10485.760: 96.9618% ( 14) 00:07:49.900 10485.760 - 10536.172: 97.0610% ( 16) 00:07:49.900 10536.172 - 10586.585: 97.1292% ( 11) 00:07:49.900 
10586.585 - 10636.997: 97.2284% ( 16) 00:07:49.900 10636.997 - 10687.409: 97.3028% ( 12) 00:07:49.900 10687.409 - 10737.822: 97.3834% ( 13) 00:07:49.900 10737.822 - 10788.234: 97.4392% ( 9) 00:07:49.900 10788.234 - 10838.646: 97.5074% ( 11) 00:07:49.900 10838.646 - 10889.058: 97.5694% ( 10) 00:07:49.900 10889.058 - 10939.471: 97.6562% ( 14) 00:07:49.900 10939.471 - 10989.883: 97.7245% ( 11) 00:07:49.900 10989.883 - 11040.295: 97.7927% ( 11) 00:07:49.900 11040.295 - 11090.708: 97.8733% ( 13) 00:07:49.900 11090.708 - 11141.120: 97.9353% ( 10) 00:07:49.900 11141.120 - 11191.532: 98.0159% ( 13) 00:07:49.900 11191.532 - 11241.945: 98.0655% ( 8) 00:07:49.900 11241.945 - 11292.357: 98.1337% ( 11) 00:07:49.900 11292.357 - 11342.769: 98.1895% ( 9) 00:07:49.900 11342.769 - 11393.182: 98.2639% ( 12) 00:07:49.900 11393.182 - 11443.594: 98.3197% ( 9) 00:07:49.900 11443.594 - 11494.006: 98.3817% ( 10) 00:07:49.900 11494.006 - 11544.418: 98.4189% ( 6) 00:07:49.900 11544.418 - 11594.831: 98.4685% ( 8) 00:07:49.900 11594.831 - 11645.243: 98.5305% ( 10) 00:07:49.900 11645.243 - 11695.655: 98.5615% ( 5) 00:07:49.900 11695.655 - 11746.068: 98.6173% ( 9) 00:07:49.900 11746.068 - 11796.480: 98.6483% ( 5) 00:07:49.900 11796.480 - 11846.892: 98.6979% ( 8) 00:07:49.900 11846.892 - 11897.305: 98.7413% ( 7) 00:07:49.900 11897.305 - 11947.717: 98.7785% ( 6) 00:07:49.900 11947.717 - 11998.129: 98.8033% ( 4) 00:07:49.900 11998.129 - 12048.542: 98.8343% ( 5) 00:07:49.900 12048.542 - 12098.954: 98.8467% ( 2) 00:07:49.900 12098.954 - 12149.366: 98.8839% ( 6) 00:07:49.900 12149.366 - 12199.778: 98.9087% ( 4) 00:07:49.900 12199.778 - 12250.191: 98.9273% ( 3) 00:07:49.900 12250.191 - 12300.603: 98.9397% ( 2) 00:07:49.900 12300.603 - 12351.015: 98.9769% ( 6) 00:07:49.900 12351.015 - 12401.428: 98.9893% ( 2) 00:07:49.900 12401.428 - 12451.840: 99.0203% ( 5) 00:07:49.900 12451.840 - 12502.252: 99.0327% ( 2) 00:07:49.900 12502.252 - 12552.665: 99.0451% ( 2) 00:07:49.900 12552.665 - 12603.077: 99.0513% ( 1) 00:07:49.900 12603.077 - 12653.489: 99.0699% ( 3) 00:07:49.900 12653.489 - 12703.902: 99.0761% ( 1) 00:07:49.900 12703.902 - 12754.314: 99.0947% ( 3) 00:07:49.900 12754.314 - 12804.726: 99.1009% ( 1) 00:07:49.900 12804.726 - 12855.138: 99.1133% ( 2) 00:07:49.900 12855.138 - 12905.551: 99.1195% ( 1) 00:07:49.900 12905.551 - 13006.375: 99.1381% ( 3) 00:07:49.900 13006.375 - 13107.200: 99.1567% ( 3) 00:07:49.900 13107.200 - 13208.025: 99.1815% ( 4) 00:07:49.900 13208.025 - 13308.849: 99.2063% ( 4) 00:07:49.900 24097.083 - 24197.908: 99.2312% ( 4) 00:07:49.900 24197.908 - 24298.732: 99.2560% ( 4) 00:07:49.900 24298.732 - 24399.557: 99.2808% ( 4) 00:07:49.900 24399.557 - 24500.382: 99.3056% ( 4) 00:07:49.900 24500.382 - 24601.206: 99.3242% ( 3) 00:07:49.900 24601.206 - 24702.031: 99.3490% ( 4) 00:07:49.900 24702.031 - 24802.855: 99.3800% ( 5) 00:07:49.900 24802.855 - 24903.680: 99.3924% ( 2) 00:07:49.900 24903.680 - 25004.505: 99.4172% ( 4) 00:07:49.900 25004.505 - 25105.329: 99.4420% ( 4) 00:07:49.900 25105.329 - 25206.154: 99.4668% ( 4) 00:07:49.900 25206.154 - 25306.978: 99.4916% ( 4) 00:07:49.900 25306.978 - 25407.803: 99.5164% ( 4) 00:07:49.900 25407.803 - 25508.628: 99.5350% ( 3) 00:07:49.900 25508.628 - 25609.452: 99.5660% ( 5) 00:07:49.900 25609.452 - 25710.277: 99.5908% ( 4) 00:07:49.900 25710.277 - 25811.102: 99.6032% ( 2) 00:07:49.900 29642.437 - 29844.086: 99.6156% ( 2) 00:07:49.900 29844.086 - 30045.735: 99.6590% ( 7) 00:07:49.900 30045.735 - 30247.385: 99.6962% ( 6) 00:07:49.900 30247.385 - 30449.034: 99.7520% ( 9) 
00:07:49.900 30449.034 - 30650.683: 99.7954% ( 7) 00:07:49.900 30650.683 - 30852.332: 99.8512% ( 9) 00:07:49.900 30852.332 - 31053.982: 99.8946% ( 7) 00:07:49.900 31053.982 - 31255.631: 99.9442% ( 8) 00:07:49.900 31255.631 - 31457.280: 99.9938% ( 8) 00:07:49.900 31457.280 - 31658.929: 100.0000% ( 1) 00:07:49.900 00:07:49.900 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:07:49.900 ============================================================================== 00:07:49.900 Range in us Cumulative IO count 00:07:49.901 5797.415 - 5822.622: 0.0248% ( 4) 00:07:49.901 5822.622 - 5847.828: 0.0558% ( 5) 00:07:49.901 5847.828 - 5873.034: 0.0744% ( 3) 00:07:49.901 5873.034 - 5898.240: 0.1364% ( 10) 00:07:49.901 5898.240 - 5923.446: 0.3224% ( 30) 00:07:49.901 5923.446 - 5948.652: 0.5518% ( 37) 00:07:49.901 5948.652 - 5973.858: 0.7564% ( 33) 00:07:49.901 5973.858 - 5999.065: 1.0665% ( 50) 00:07:49.901 5999.065 - 6024.271: 1.3207% ( 41) 00:07:49.901 6024.271 - 6049.477: 1.8415% ( 84) 00:07:49.901 6049.477 - 6074.683: 2.4802% ( 103) 00:07:49.901 6074.683 - 6099.889: 3.1126% ( 102) 00:07:49.901 6099.889 - 6125.095: 3.8628% ( 121) 00:07:49.901 6125.095 - 6150.302: 4.7061% ( 136) 00:07:49.901 6150.302 - 6175.508: 5.4998% ( 128) 00:07:49.901 6175.508 - 6200.714: 6.3430% ( 136) 00:07:49.901 6200.714 - 6225.920: 7.2855% ( 152) 00:07:49.901 6225.920 - 6251.126: 8.1721% ( 143) 00:07:49.901 6251.126 - 6276.332: 9.0898% ( 148) 00:07:49.901 6276.332 - 6301.538: 10.0756% ( 159) 00:07:49.901 6301.538 - 6326.745: 11.1483% ( 173) 00:07:49.901 6326.745 - 6351.951: 12.1156% ( 156) 00:07:49.901 6351.951 - 6377.157: 13.1572% ( 168) 00:07:49.901 6377.157 - 6402.363: 14.2671% ( 179) 00:07:49.901 6402.363 - 6427.569: 15.3398% ( 173) 00:07:49.901 6427.569 - 6452.775: 16.3876% ( 169) 00:07:49.901 6452.775 - 6503.188: 18.5020% ( 341) 00:07:49.901 6503.188 - 6553.600: 20.7713% ( 366) 00:07:49.901 6553.600 - 6604.012: 22.9849% ( 357) 00:07:49.901 6604.012 - 6654.425: 25.1178% ( 344) 00:07:49.901 6654.425 - 6704.837: 27.1763% ( 332) 00:07:49.901 6704.837 - 6755.249: 29.1605% ( 320) 00:07:49.901 6755.249 - 6805.662: 31.1322% ( 318) 00:07:49.901 6805.662 - 6856.074: 32.8497% ( 277) 00:07:49.901 6856.074 - 6906.486: 34.2324% ( 223) 00:07:49.901 6906.486 - 6956.898: 35.2989% ( 172) 00:07:49.901 6956.898 - 7007.311: 36.1173% ( 132) 00:07:49.901 7007.311 - 7057.723: 36.7994% ( 110) 00:07:49.901 7057.723 - 7108.135: 37.3450% ( 88) 00:07:49.901 7108.135 - 7158.548: 37.8038% ( 74) 00:07:49.901 7158.548 - 7208.960: 38.1882% ( 62) 00:07:49.901 7208.960 - 7259.372: 38.4859% ( 48) 00:07:49.901 7259.372 - 7309.785: 38.7649% ( 45) 00:07:49.901 7309.785 - 7360.197: 39.1121% ( 56) 00:07:49.901 7360.197 - 7410.609: 39.3973% ( 46) 00:07:49.901 7410.609 - 7461.022: 39.6949% ( 48) 00:07:49.901 7461.022 - 7511.434: 40.0980% ( 65) 00:07:49.901 7511.434 - 7561.846: 40.4700% ( 60) 00:07:49.901 7561.846 - 7612.258: 40.9040% ( 70) 00:07:49.901 7612.258 - 7662.671: 41.3566% ( 73) 00:07:49.901 7662.671 - 7713.083: 41.7907% ( 70) 00:07:49.901 7713.083 - 7763.495: 42.5595% ( 124) 00:07:49.901 7763.495 - 7813.908: 43.5454% ( 159) 00:07:49.901 7813.908 - 7864.320: 44.6305% ( 175) 00:07:49.901 7864.320 - 7914.732: 45.9263% ( 209) 00:07:49.901 7914.732 - 7965.145: 47.3462% ( 229) 00:07:49.901 7965.145 - 8015.557: 49.0265% ( 271) 00:07:49.901 8015.557 - 8065.969: 51.0417% ( 325) 00:07:49.901 8065.969 - 8116.382: 53.0444% ( 323) 00:07:49.901 8116.382 - 8166.794: 55.2145% ( 350) 00:07:49.901 8166.794 - 8217.206: 57.5831% ( 382) 00:07:49.901 8217.206 - 
8267.618: 59.8524% ( 366) 00:07:49.901 8267.618 - 8318.031: 62.2768% ( 391) 00:07:49.901 8318.031 - 8368.443: 64.7445% ( 398) 00:07:49.901 8368.443 - 8418.855: 67.2743% ( 408) 00:07:49.901 8418.855 - 8469.268: 70.0025% ( 440) 00:07:49.901 8469.268 - 8519.680: 72.5260% ( 407) 00:07:49.901 8519.680 - 8570.092: 74.8822% ( 380) 00:07:49.901 8570.092 - 8620.505: 77.2197% ( 377) 00:07:49.901 8620.505 - 8670.917: 79.3093% ( 337) 00:07:49.901 8670.917 - 8721.329: 81.0888% ( 287) 00:07:49.901 8721.329 - 8771.742: 82.8187% ( 279) 00:07:49.901 8771.742 - 8822.154: 84.2448% ( 230) 00:07:49.901 8822.154 - 8872.566: 85.5841% ( 216) 00:07:49.901 8872.566 - 8922.978: 86.7932% ( 195) 00:07:49.901 8922.978 - 8973.391: 87.9216% ( 182) 00:07:49.901 8973.391 - 9023.803: 88.8951% ( 157) 00:07:49.901 9023.803 - 9074.215: 89.7569% ( 139) 00:07:49.901 9074.215 - 9124.628: 90.5072% ( 121) 00:07:49.901 9124.628 - 9175.040: 91.1272% ( 100) 00:07:49.901 9175.040 - 9225.452: 91.6481% ( 84) 00:07:49.901 9225.452 - 9275.865: 92.1379% ( 79) 00:07:49.901 9275.865 - 9326.277: 92.5347% ( 64) 00:07:49.901 9326.277 - 9376.689: 92.9129% ( 61) 00:07:49.901 9376.689 - 9427.102: 93.2292% ( 51) 00:07:49.901 9427.102 - 9477.514: 93.4958% ( 43) 00:07:49.901 9477.514 - 9527.926: 93.7686% ( 44) 00:07:49.901 9527.926 - 9578.338: 94.0910% ( 52) 00:07:49.901 9578.338 - 9628.751: 94.3514% ( 42) 00:07:49.901 9628.751 - 9679.163: 94.5809% ( 37) 00:07:49.901 9679.163 - 9729.575: 94.8041% ( 36) 00:07:49.901 9729.575 - 9779.988: 95.0025% ( 32) 00:07:49.901 9779.988 - 9830.400: 95.1699% ( 27) 00:07:49.901 9830.400 - 9880.812: 95.3435% ( 28) 00:07:49.901 9880.812 - 9931.225: 95.5419% ( 32) 00:07:49.901 9931.225 - 9981.637: 95.7155% ( 28) 00:07:49.901 9981.637 - 10032.049: 95.8891% ( 28) 00:07:49.901 10032.049 - 10082.462: 96.0503% ( 26) 00:07:49.901 10082.462 - 10132.874: 96.2054% ( 25) 00:07:49.901 10132.874 - 10183.286: 96.3170% ( 18) 00:07:49.901 10183.286 - 10233.698: 96.4472% ( 21) 00:07:49.901 10233.698 - 10284.111: 96.5464% ( 16) 00:07:49.901 10284.111 - 10334.523: 96.6456% ( 16) 00:07:49.901 10334.523 - 10384.935: 96.7696% ( 20) 00:07:49.901 10384.935 - 10435.348: 96.8626% ( 15) 00:07:49.901 10435.348 - 10485.760: 96.9556% ( 15) 00:07:49.901 10485.760 - 10536.172: 97.0548% ( 16) 00:07:49.901 10536.172 - 10586.585: 97.1540% ( 16) 00:07:49.901 10586.585 - 10636.997: 97.2470% ( 15) 00:07:49.901 10636.997 - 10687.409: 97.3400% ( 15) 00:07:49.901 10687.409 - 10737.822: 97.4144% ( 12) 00:07:49.901 10737.822 - 10788.234: 97.4950% ( 13) 00:07:49.901 10788.234 - 10838.646: 97.5756% ( 13) 00:07:49.901 10838.646 - 10889.058: 97.6562% ( 13) 00:07:49.901 10889.058 - 10939.471: 97.7369% ( 13) 00:07:49.901 10939.471 - 10989.883: 97.8113% ( 12) 00:07:49.901 10989.883 - 11040.295: 97.8733% ( 10) 00:07:49.901 11040.295 - 11090.708: 97.9415% ( 11) 00:07:49.901 11090.708 - 11141.120: 98.0035% ( 10) 00:07:49.901 11141.120 - 11191.532: 98.0655% ( 10) 00:07:49.901 11191.532 - 11241.945: 98.1771% ( 18) 00:07:49.901 11241.945 - 11292.357: 98.2267% ( 8) 00:07:49.901 11292.357 - 11342.769: 98.2701% ( 7) 00:07:49.901 11342.769 - 11393.182: 98.2949% ( 4) 00:07:49.901 11393.182 - 11443.594: 98.3383% ( 7) 00:07:49.901 11443.594 - 11494.006: 98.3879% ( 8) 00:07:49.901 11494.006 - 11544.418: 98.4313% ( 7) 00:07:49.901 11544.418 - 11594.831: 98.5119% ( 13) 00:07:49.901 11594.831 - 11645.243: 98.5491% ( 6) 00:07:49.901 11645.243 - 11695.655: 98.6049% ( 9) 00:07:49.901 11695.655 - 11746.068: 98.6545% ( 8) 00:07:49.901 11746.068 - 11796.480: 98.6855% ( 5) 00:07:49.901 
11796.480 - 11846.892: 98.7351% ( 8) 00:07:49.901 11846.892 - 11897.305: 98.7785% ( 7) 00:07:49.901 11897.305 - 11947.717: 98.8095% ( 5) 00:07:49.901 11947.717 - 11998.129: 98.8467% ( 6) 00:07:49.901 11998.129 - 12048.542: 98.8715% ( 4) 00:07:49.901 12048.542 - 12098.954: 98.9025% ( 5) 00:07:49.901 12098.954 - 12149.366: 98.9397% ( 6) 00:07:49.901 12149.366 - 12199.778: 98.9707% ( 5) 00:07:49.901 12199.778 - 12250.191: 99.0017% ( 5) 00:07:49.901 12250.191 - 12300.603: 99.0203% ( 3) 00:07:49.901 12300.603 - 12351.015: 99.0327% ( 2) 00:07:49.901 12351.015 - 12401.428: 99.0451% ( 2) 00:07:49.901 12401.428 - 12451.840: 99.0575% ( 2) 00:07:49.901 12451.840 - 12502.252: 99.0699% ( 2) 00:07:49.901 12502.252 - 12552.665: 99.0823% ( 2) 00:07:49.901 12552.665 - 12603.077: 99.0947% ( 2) 00:07:49.901 12603.077 - 12653.489: 99.1071% ( 2) 00:07:49.901 12653.489 - 12703.902: 99.1195% ( 2) 00:07:49.901 12703.902 - 12754.314: 99.1257% ( 1) 00:07:49.901 12804.726 - 12855.138: 99.1381% ( 2) 00:07:49.901 12855.138 - 12905.551: 99.1505% ( 2) 00:07:49.901 12905.551 - 13006.375: 99.1753% ( 4) 00:07:49.901 13006.375 - 13107.200: 99.2001% ( 4) 00:07:49.901 13107.200 - 13208.025: 99.2063% ( 1) 00:07:49.901 22383.065 - 22483.889: 99.2188% ( 2) 00:07:49.901 22483.889 - 22584.714: 99.2436% ( 4) 00:07:49.901 22584.714 - 22685.538: 99.2684% ( 4) 00:07:49.901 22685.538 - 22786.363: 99.2932% ( 4) 00:07:49.901 22786.363 - 22887.188: 99.3180% ( 4) 00:07:49.901 22887.188 - 22988.012: 99.3428% ( 4) 00:07:49.901 22988.012 - 23088.837: 99.3738% ( 5) 00:07:49.901 23088.837 - 23189.662: 99.3986% ( 4) 00:07:49.901 23189.662 - 23290.486: 99.4234% ( 4) 00:07:49.901 23290.486 - 23391.311: 99.4482% ( 4) 00:07:49.901 23391.311 - 23492.135: 99.4730% ( 4) 00:07:49.901 23492.135 - 23592.960: 99.4978% ( 4) 00:07:49.901 23592.960 - 23693.785: 99.5226% ( 4) 00:07:49.901 23693.785 - 23794.609: 99.5536% ( 5) 00:07:49.901 23794.609 - 23895.434: 99.5784% ( 4) 00:07:49.901 23895.434 - 23996.258: 99.6032% ( 4) 00:07:49.901 28029.243 - 28230.892: 99.6094% ( 1) 00:07:49.901 28230.892 - 28432.542: 99.6590% ( 8) 00:07:49.901 28432.542 - 28634.191: 99.7086% ( 8) 00:07:49.901 28634.191 - 28835.840: 99.7582% ( 8) 00:07:49.901 28835.840 - 29037.489: 99.8140% ( 9) 00:07:49.901 29037.489 - 29239.138: 99.8636% ( 8) 00:07:49.901 29239.138 - 29440.788: 99.9194% ( 9) 00:07:49.901 29440.788 - 29642.437: 99.9690% ( 8) 00:07:49.901 29642.437 - 29844.086: 100.0000% ( 5) 00:07:49.901 00:07:49.901 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:07:49.901 ============================================================================== 00:07:49.901 Range in us Cumulative IO count 00:07:49.901 5847.828 - 5873.034: 0.0186% ( 3) 00:07:49.901 5873.034 - 5898.240: 0.1736% ( 25) 00:07:49.901 5898.240 - 5923.446: 0.3596% ( 30) 00:07:49.901 5923.446 - 5948.652: 0.5270% ( 27) 00:07:49.901 5948.652 - 5973.858: 0.6944% ( 27) 00:07:49.901 5973.858 - 5999.065: 0.8991% ( 33) 00:07:49.901 5999.065 - 6024.271: 1.1595% ( 42) 00:07:49.901 6024.271 - 6049.477: 1.7857% ( 101) 00:07:49.901 6049.477 - 6074.683: 2.4430% ( 106) 00:07:49.901 6074.683 - 6099.889: 3.1870% ( 120) 00:07:49.901 6099.889 - 6125.095: 3.8442% ( 106) 00:07:49.901 6125.095 - 6150.302: 4.6131% ( 124) 00:07:49.901 6150.302 - 6175.508: 5.5122% ( 145) 00:07:49.901 6175.508 - 6200.714: 6.3492% ( 135) 00:07:49.901 6200.714 - 6225.920: 7.3413% ( 160) 00:07:49.901 6225.920 - 6251.126: 8.2837% ( 152) 00:07:49.901 6251.126 - 6276.332: 9.2510% ( 156) 00:07:49.902 6276.332 - 6301.538: 10.3919% ( 184) 00:07:49.902 
[histogram buckets 6301.538us - 28835.840us: cumulative IO count 11.4397% ( 169) -> 100.0000% ( 9)]
00:07:49.902
00:07:49.902 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0:
00:07:49.902 ==============================================================================
00:07:49.902        Range in us     Cumulative    IO count
[histogram buckets 5822.622us - 22685.538us: cumulative IO count 0.0185% ( 3) -> 100.0000% ( 1)]
00:07:49.903
00:07:49.903 13:42:03 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0
00:07:50.851 Initializing NVMe Controllers
00:07:50.851 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:07:50.851 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:07:50.851 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:07:50.851 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:07:50.851 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0
00:07:50.851 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0
00:07:50.851 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0
00:07:50.851 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0
00:07:50.851 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0
00:07:50.851 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0
00:07:50.851 Initialization complete. Launching workers.
00:07:50.851 ========================================================
00:07:50.851                                                                     Latency(us)
00:07:50.851 Device Information                      :       IOPS      MiB/s    Average        min        max
00:07:50.851 PCIE (0000:00:11.0) NSID 1 from core  0:   14616.11     171.28    8770.35    6466.43   32990.08
00:07:50.851 PCIE (0000:00:13.0) NSID 1 from core  0:   14616.11     171.28    8757.82    6633.95   31530.97
00:07:50.851 PCIE (0000:00:10.0) NSID 1 from core  0:   14616.11     171.28    8743.75    6632.28   30032.41
00:07:50.851 PCIE (0000:00:12.0) NSID 1 from core  0:   14616.11     171.28    8730.09    6691.45   28060.88
00:07:50.851 PCIE (0000:00:12.0) NSID 2 from core  0:   14616.11     171.28    8716.80    6529.82   27108.67
00:07:50.851 PCIE (0000:00:12.0) NSID 3 from core  0:   14679.94     172.03    8665.70    6451.29   21121.20
00:07:50.851 ========================================================
00:07:50.851 Total                                   :   87760.50    1028.44    8730.70    6451.29   32990.08
00:07:50.851
00:07:50.851 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0:
00:07:50.851 =================================================================================
00:07:50.851   1.00000% :  7208.960us
00:07:50.851  10.00000% :  7713.083us
00:07:50.851  25.00000% :  7965.145us
00:07:50.851  50.00000% :  8267.618us
00:07:50.851  75.00000% :  8721.329us
00:07:50.851  90.00000% : 10132.874us
00:07:50.851  95.00000% : 12098.954us
00:07:50.851  98.00000% : 14014.622us
00:07:50.851  99.00000% : 15325.342us
00:07:50.851  99.50000% : 27424.295us
00:07:50.851  99.90000% : 32667.175us
00:07:50.851  99.99000% : 33070.474us
00:07:50.851  99.99900% : 33070.474us
00:07:50.851  99.99990% : 33070.474us
00:07:50.851  99.99999% : 33070.474us
00:07:50.851
00:07:50.851 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0:
00:07:50.851 =================================================================================
00:07:50.851   1.00000% :  7259.372us
00:07:50.851  10.00000% :  7713.083us
00:07:50.851  25.00000% :  7965.145us
00:07:50.851  50.00000% :  8267.618us
00:07:50.851  75.00000% :  8721.329us
00:07:50.851  90.00000% : 10132.874us
00:07:50.851  95.00000% : 12048.542us
00:07:50.851  98.00000% : 14014.622us
00:07:50.851  99.00000% : 15627.815us
00:07:50.851  99.50000% : 26012.751us
00:07:50.851  99.90000% : 31255.631us
00:07:50.851  99.99000% : 31658.929us
00:07:50.851  99.99900% : 31658.929us
00:07:50.851  99.99990% : 31658.929us
00:07:50.851  99.99999% : 31658.929us
00:07:50.851
00:07:50.851 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0:
00:07:50.851 =================================================================================
00:07:50.851   1.00000% :  7208.960us
00:07:50.851  10.00000% :  7612.258us
00:07:50.851  25.00000% :  7914.732us
00:07:50.851  50.00000% :  8267.618us
00:07:50.851  75.00000% :  8771.742us
00:07:50.851  90.00000% : 10233.698us
00:07:50.851  95.00000% : 11695.655us
00:07:50.851  98.00000% : 14216.271us
00:07:50.851  99.00000% : 15627.815us
00:07:50.851  99.50000% : 24197.908us
00:07:50.851  99.90000% : 29844.086us
00:07:50.851  99.99000% : 30045.735us
00:07:50.851  99.99900% : 30045.735us
00:07:50.852  99.99990% : 30045.735us
00:07:50.852  99.99999% : 30045.735us
00:07:50.852
00:07:50.852 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0:
00:07:50.852 =================================================================================
00:07:50.852   1.00000% :  7309.785us
00:07:50.852  10.00000% :  7713.083us
00:07:50.852  25.00000% :  7965.145us
00:07:50.852  50.00000% :  8267.618us
00:07:50.852  75.00000% :  8721.329us
00:07:50.852  90.00000% : 10384.935us
00:07:50.852  95.00000% : 11443.594us
00:07:50.852  98.00000% : 14317.095us
00:07:50.852  99.00000% : 15930.289us
00:07:50.852  99.50000% : 22584.714us
00:07:50.852  99.90000% : 27827.594us
00:07:50.852  99.99000% : 28230.892us
00:07:50.852  99.99900% : 28230.892us
00:07:50.852  99.99990% : 28230.892us
00:07:50.852  99.99999% : 28230.892us
00:07:50.852
00:07:50.852 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0:
00:07:50.852 =================================================================================
00:07:50.852   1.00000% :  7208.960us
00:07:50.852  10.00000% :  7713.083us
00:07:50.852  25.00000% :  7965.145us
00:07:50.852  50.00000% :  8267.618us
00:07:50.852  75.00000% :  8721.329us
00:07:50.852  90.00000% : 10334.523us
00:07:50.852  95.00000% : 11645.243us
00:07:50.852  98.00000% : 14014.622us
00:07:50.852  99.00000% : 15123.692us
00:07:50.852  99.50000% : 21475.643us
00:07:50.852  99.90000% : 26819.348us
00:07:50.852  99.99000% : 27222.646us
00:07:50.852  99.99900% : 27222.646us
00:07:50.852  99.99990% : 27222.646us
00:07:50.852  99.99999% : 27222.646us
00:07:50.852
00:07:50.852 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0:
00:07:50.852 =================================================================================
00:07:50.852   1.00000% :  7208.960us
00:07:50.852  10.00000% :  7713.083us
00:07:50.852  25.00000% :  7965.145us
00:07:50.852  50.00000% :  8267.618us
00:07:50.852  75.00000% :  8670.917us
00:07:50.852  90.00000% : 10233.698us
00:07:50.852  95.00000% : 11846.892us
00:07:50.852  98.00000% : 13913.797us
00:07:50.852  99.00000% : 14821.218us
00:07:50.852  99.50000% : 15526.991us
00:07:50.852  99.90000% : 20870.695us
00:07:50.852  99.99000% : 21173.169us
00:07:50.852  99.99900% : 21173.169us
00:07:50.852  99.99990% : 21173.169us
00:07:50.852  99.99999% : 21173.169us
00:07:50.852
00:07:50.852 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0:
00:07:50.852 ==============================================================================
00:07:50.852        Range in us     Cumulative    IO count
[histogram buckets 6452.775us - 33070.474us: cumulative IO count 0.0068% ( 1) -> 100.0000% ( 5)]
00:07:50.853
00:07:50.853 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0:
00:07:50.853 ==============================================================================
00:07:50.853        Range in us     Cumulative    IO count
[histogram buckets 6604.012us - 31658.929us: cumulative IO count 0.0068% ( 1) -> 100.0000% ( 4)]
00:07:50.854
00:07:50.854 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0:
00:07:50.854 ==============================================================================
00:07:50.854        Range in us     Cumulative    IO count
[histogram buckets 6604.012us - 30045.735us: cumulative IO count 0.0205% ( 3) -> 100.0000% ( 7)]
00:07:50.854
00:07:50.855 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0:
00:07:50.855 ==============================================================================
00:07:50.855        Range in us     Cumulative    IO count
[histogram buckets 6654.425us - 28230.892us: cumulative IO count 0.0136% ( 2) -> 100.0000% ( 2)]
00:07:50.855
00:07:50.855 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0:
00:07:50.856 ==============================================================================
00:07:50.856        Range in us     Cumulative    IO count
[histogram buckets from 6503.188us: cumulative IO count rising from 0.0068% ( 1)]
00:07:50.856 14216.271 - 14317.095: 98.4989% (
14) 00:07:50.856 14317.095 - 14417.920: 98.5671% ( 10) 00:07:50.856 14417.920 - 14518.745: 98.6490% ( 12) 00:07:50.856 14518.745 - 14619.569: 98.7172% ( 10) 00:07:50.856 14619.569 - 14720.394: 98.7991% ( 12) 00:07:50.856 14720.394 - 14821.218: 98.8674% ( 10) 00:07:50.856 14821.218 - 14922.043: 98.9424% ( 11) 00:07:50.856 14922.043 - 15022.868: 98.9834% ( 6) 00:07:50.856 15022.868 - 15123.692: 99.0448% ( 9) 00:07:50.856 15123.692 - 15224.517: 99.0925% ( 7) 00:07:50.856 15224.517 - 15325.342: 99.1130% ( 3) 00:07:50.856 15325.342 - 15426.166: 99.1266% ( 2) 00:07:50.856 20064.098 - 20164.923: 99.1471% ( 3) 00:07:50.856 20164.923 - 20265.748: 99.1744% ( 4) 00:07:50.856 20265.748 - 20366.572: 99.2017% ( 4) 00:07:50.856 20366.572 - 20467.397: 99.2290% ( 4) 00:07:50.856 20467.397 - 20568.222: 99.2631% ( 5) 00:07:50.856 20568.222 - 20669.046: 99.2904% ( 4) 00:07:50.856 20669.046 - 20769.871: 99.3177% ( 4) 00:07:50.856 20769.871 - 20870.695: 99.3382% ( 3) 00:07:50.856 20870.695 - 20971.520: 99.3654% ( 4) 00:07:50.856 20971.520 - 21072.345: 99.3996% ( 5) 00:07:50.856 21072.345 - 21173.169: 99.4269% ( 4) 00:07:50.856 21173.169 - 21273.994: 99.4541% ( 4) 00:07:50.856 21273.994 - 21374.818: 99.4883% ( 5) 00:07:50.856 21374.818 - 21475.643: 99.5087% ( 3) 00:07:50.856 21475.643 - 21576.468: 99.5428% ( 5) 00:07:50.856 21576.468 - 21677.292: 99.5633% ( 3) 00:07:50.856 25508.628 - 25609.452: 99.5770% ( 2) 00:07:50.856 25609.452 - 25710.277: 99.5974% ( 3) 00:07:50.856 25710.277 - 25811.102: 99.6316% ( 5) 00:07:50.856 25811.102 - 26012.751: 99.6793% ( 7) 00:07:50.856 26012.751 - 26214.400: 99.7407% ( 9) 00:07:50.856 26214.400 - 26416.049: 99.7953% ( 8) 00:07:50.856 26416.049 - 26617.698: 99.8499% ( 8) 00:07:50.856 26617.698 - 26819.348: 99.9113% ( 9) 00:07:50.856 26819.348 - 27020.997: 99.9727% ( 9) 00:07:50.856 27020.997 - 27222.646: 100.0000% ( 4) 00:07:50.856 00:07:50.856 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:07:50.856 ============================================================================== 00:07:50.856 Range in us Cumulative IO count 00:07:50.856 6427.569 - 6452.775: 0.0068% ( 1) 00:07:50.856 6452.775 - 6503.188: 0.0340% ( 4) 00:07:50.856 6503.188 - 6553.600: 0.0611% ( 4) 00:07:50.856 6553.600 - 6604.012: 0.0883% ( 4) 00:07:50.856 6604.012 - 6654.425: 0.1155% ( 4) 00:07:50.856 6654.425 - 6704.837: 0.1427% ( 4) 00:07:50.856 6704.837 - 6755.249: 0.2378% ( 14) 00:07:50.856 6755.249 - 6805.662: 0.2785% ( 6) 00:07:50.856 6805.662 - 6856.074: 0.3125% ( 5) 00:07:50.856 6856.074 - 6906.486: 0.3397% ( 4) 00:07:50.856 6906.486 - 6956.898: 0.3668% ( 4) 00:07:50.856 6956.898 - 7007.311: 0.4144% ( 7) 00:07:50.856 7007.311 - 7057.723: 0.5299% ( 17) 00:07:50.856 7057.723 - 7108.135: 0.6522% ( 18) 00:07:50.856 7108.135 - 7158.548: 0.9375% ( 42) 00:07:50.856 7158.548 - 7208.960: 1.1209% ( 27) 00:07:50.856 7208.960 - 7259.372: 1.4334% ( 46) 00:07:50.856 7259.372 - 7309.785: 1.8410% ( 60) 00:07:50.857 7309.785 - 7360.197: 2.2962% ( 67) 00:07:50.857 7360.197 - 7410.609: 2.9823% ( 101) 00:07:50.857 7410.609 - 7461.022: 3.7364% ( 111) 00:07:50.857 7461.022 - 7511.434: 4.7215% ( 145) 00:07:50.857 7511.434 - 7561.846: 5.9171% ( 176) 00:07:50.857 7561.846 - 7612.258: 7.4932% ( 232) 00:07:50.857 7612.258 - 7662.671: 9.1644% ( 246) 00:07:50.857 7662.671 - 7713.083: 11.1073% ( 286) 00:07:50.857 7713.083 - 7763.495: 13.5190% ( 355) 00:07:50.857 7763.495 - 7813.908: 16.0394% ( 371) 00:07:50.857 7813.908 - 7864.320: 18.9130% ( 423) 00:07:50.857 7864.320 - 7914.732: 22.4592% ( 522) 00:07:50.857 7914.732 - 
7965.145: 26.2772% ( 562) 00:07:50.857 7965.145 - 8015.557: 30.4823% ( 619) 00:07:50.857 8015.557 - 8065.969: 35.1087% ( 681) 00:07:50.857 8065.969 - 8116.382: 39.6875% ( 674) 00:07:50.857 8116.382 - 8166.794: 44.0965% ( 649) 00:07:50.857 8166.794 - 8217.206: 48.3832% ( 631) 00:07:50.857 8217.206 - 8267.618: 52.2283% ( 566) 00:07:50.857 8267.618 - 8318.031: 55.8832% ( 538) 00:07:50.857 8318.031 - 8368.443: 59.6739% ( 558) 00:07:50.857 8368.443 - 8418.855: 63.2201% ( 522) 00:07:50.857 8418.855 - 8469.268: 66.1209% ( 427) 00:07:50.857 8469.268 - 8519.680: 68.7704% ( 390) 00:07:50.857 8519.680 - 8570.092: 70.8899% ( 312) 00:07:50.857 8570.092 - 8620.505: 73.0707% ( 321) 00:07:50.857 8620.505 - 8670.917: 75.3397% ( 334) 00:07:50.857 8670.917 - 8721.329: 77.1128% ( 261) 00:07:50.857 8721.329 - 8771.742: 78.5802% ( 216) 00:07:50.857 8771.742 - 8822.154: 80.1359% ( 229) 00:07:50.857 8822.154 - 8872.566: 81.1685% ( 152) 00:07:50.857 8872.566 - 8922.978: 82.1535% ( 145) 00:07:50.857 8922.978 - 8973.391: 82.9280% ( 114) 00:07:50.857 8973.391 - 9023.803: 83.6481% ( 106) 00:07:50.857 9023.803 - 9074.215: 84.1372% ( 72) 00:07:50.857 9074.215 - 9124.628: 84.6739% ( 79) 00:07:50.857 9124.628 - 9175.040: 85.1970% ( 77) 00:07:50.857 9175.040 - 9225.452: 85.5435% ( 51) 00:07:50.857 9225.452 - 9275.865: 85.8288% ( 42) 00:07:50.857 9275.865 - 9326.277: 86.1413% ( 46) 00:07:50.857 9326.277 - 9376.689: 86.3791% ( 35) 00:07:50.857 9376.689 - 9427.102: 86.6033% ( 33) 00:07:50.857 9427.102 - 9477.514: 86.7663% ( 24) 00:07:50.857 9477.514 - 9527.926: 86.9701% ( 30) 00:07:50.857 9527.926 - 9578.338: 87.0652% ( 14) 00:07:50.857 9578.338 - 9628.751: 87.2554% ( 28) 00:07:50.857 9628.751 - 9679.163: 87.4389% ( 27) 00:07:50.857 9679.163 - 9729.575: 87.6562% ( 32) 00:07:50.857 9729.575 - 9779.988: 87.9008% ( 36) 00:07:50.857 9779.988 - 9830.400: 88.1726% ( 40) 00:07:50.857 9830.400 - 9880.812: 88.5122% ( 50) 00:07:50.857 9880.812 - 9931.225: 88.7840% ( 40) 00:07:50.857 9931.225 - 9981.637: 89.1168% ( 49) 00:07:50.857 9981.637 - 10032.049: 89.3071% ( 28) 00:07:50.857 10032.049 - 10082.462: 89.5041% ( 29) 00:07:50.857 10082.462 - 10132.874: 89.6535% ( 22) 00:07:50.857 10132.874 - 10183.286: 89.8098% ( 23) 00:07:50.857 10183.286 - 10233.698: 90.0611% ( 37) 00:07:50.857 10233.698 - 10284.111: 90.2921% ( 34) 00:07:50.857 10284.111 - 10334.523: 90.4959% ( 30) 00:07:50.857 10334.523 - 10384.935: 90.6997% ( 30) 00:07:50.857 10384.935 - 10435.348: 90.9103% ( 31) 00:07:50.857 10435.348 - 10485.760: 91.1073% ( 29) 00:07:50.857 10485.760 - 10536.172: 91.2772% ( 25) 00:07:50.857 10536.172 - 10586.585: 91.4810% ( 30) 00:07:50.857 10586.585 - 10636.997: 91.7052% ( 33) 00:07:50.857 10636.997 - 10687.409: 92.0177% ( 46) 00:07:50.857 10687.409 - 10737.822: 92.2215% ( 30) 00:07:50.857 10737.822 - 10788.234: 92.4592% ( 35) 00:07:50.857 10788.234 - 10838.646: 92.6562% ( 29) 00:07:50.857 10838.646 - 10889.058: 92.8261% ( 25) 00:07:50.857 10889.058 - 10939.471: 92.9823% ( 23) 00:07:50.857 10939.471 - 10989.883: 93.0910% ( 16) 00:07:50.857 10989.883 - 11040.295: 93.2405% ( 22) 00:07:50.857 11040.295 - 11090.708: 93.3696% ( 19) 00:07:50.857 11090.708 - 11141.120: 93.5054% ( 20) 00:07:50.857 11141.120 - 11191.532: 93.6753% ( 25) 00:07:50.857 11191.532 - 11241.945: 93.7636% ( 13) 00:07:50.857 11241.945 - 11292.357: 93.8451% ( 12) 00:07:50.857 11292.357 - 11342.769: 93.9266% ( 12) 00:07:50.857 11342.769 - 11393.182: 94.0082% ( 12) 00:07:50.857 11393.182 - 11443.594: 94.1101% ( 15) 00:07:50.857 11443.594 - 11494.006: 94.2188% ( 16) 00:07:50.857 
11494.006 - 11544.418: 94.3682% ( 22) 00:07:50.857 11544.418 - 11594.831: 94.4497% ( 12) 00:07:50.857 11594.831 - 11645.243: 94.5992% ( 22) 00:07:50.857 11645.243 - 11695.655: 94.7622% ( 24) 00:07:50.857 11695.655 - 11746.068: 94.8709% ( 16) 00:07:50.857 11746.068 - 11796.480: 94.9660% ( 14) 00:07:50.857 11796.480 - 11846.892: 95.0611% ( 14) 00:07:50.857 11846.892 - 11897.305: 95.1427% ( 12) 00:07:50.857 11897.305 - 11947.717: 95.2582% ( 17) 00:07:50.857 11947.717 - 11998.129: 95.3736% ( 17) 00:07:50.857 11998.129 - 12048.542: 95.4688% ( 14) 00:07:50.857 12048.542 - 12098.954: 95.5503% ( 12) 00:07:50.857 12098.954 - 12149.366: 95.6386% ( 13) 00:07:50.857 12149.366 - 12199.778: 95.7269% ( 13) 00:07:50.857 12199.778 - 12250.191: 95.8084% ( 12) 00:07:50.857 12250.191 - 12300.603: 95.8832% ( 11) 00:07:50.857 12300.603 - 12351.015: 95.9307% ( 7) 00:07:50.857 12351.015 - 12401.428: 95.9715% ( 6) 00:07:50.857 12401.428 - 12451.840: 95.9918% ( 3) 00:07:50.857 12451.840 - 12502.252: 96.0326% ( 6) 00:07:50.857 12502.252 - 12552.665: 96.0666% ( 5) 00:07:50.857 12552.665 - 12603.077: 96.1141% ( 7) 00:07:50.857 12603.077 - 12653.489: 96.1685% ( 8) 00:07:50.857 12653.489 - 12703.902: 96.2636% ( 14) 00:07:50.857 12703.902 - 12754.314: 96.3519% ( 13) 00:07:50.857 12754.314 - 12804.726: 96.4062% ( 8) 00:07:50.857 12804.726 - 12855.138: 96.4674% ( 9) 00:07:50.857 12855.138 - 12905.551: 96.5149% ( 7) 00:07:50.857 12905.551 - 13006.375: 96.6236% ( 16) 00:07:50.857 13006.375 - 13107.200: 96.7052% ( 12) 00:07:50.857 13107.200 - 13208.025: 96.7935% ( 13) 00:07:50.857 13208.025 - 13308.849: 96.8614% ( 10) 00:07:50.857 13308.849 - 13409.674: 96.9633% ( 15) 00:07:50.857 13409.674 - 13510.498: 97.2283% ( 39) 00:07:50.857 13510.498 - 13611.323: 97.5068% ( 41) 00:07:50.857 13611.323 - 13712.148: 97.7174% ( 31) 00:07:50.857 13712.148 - 13812.972: 97.8736% ( 23) 00:07:50.857 13812.972 - 13913.797: 98.0027% ( 19) 00:07:50.857 13913.797 - 14014.622: 98.1658% ( 24) 00:07:50.857 14014.622 - 14115.446: 98.2745% ( 16) 00:07:50.857 14115.446 - 14216.271: 98.3967% ( 18) 00:07:50.857 14216.271 - 14317.095: 98.5326% ( 20) 00:07:50.857 14317.095 - 14417.920: 98.6685% ( 20) 00:07:50.857 14417.920 - 14518.745: 98.7976% ( 19) 00:07:50.857 14518.745 - 14619.569: 98.9062% ( 16) 00:07:50.857 14619.569 - 14720.394: 98.9946% ( 13) 00:07:50.857 14720.394 - 14821.218: 99.0829% ( 13) 00:07:50.857 14821.218 - 14922.043: 99.1712% ( 13) 00:07:50.857 14922.043 - 15022.868: 99.2459% ( 11) 00:07:50.857 15022.868 - 15123.692: 99.3410% ( 14) 00:07:50.857 15123.692 - 15224.517: 99.4022% ( 9) 00:07:50.857 15224.517 - 15325.342: 99.4633% ( 9) 00:07:50.857 15325.342 - 15426.166: 99.4973% ( 5) 00:07:50.857 15426.166 - 15526.991: 99.5245% ( 4) 00:07:50.857 15526.991 - 15627.815: 99.5516% ( 4) 00:07:50.857 15627.815 - 15728.640: 99.5652% ( 2) 00:07:50.857 19559.975 - 19660.800: 99.5856% ( 3) 00:07:50.857 19660.800 - 19761.625: 99.6128% ( 4) 00:07:50.857 19761.625 - 19862.449: 99.6399% ( 4) 00:07:50.857 19862.449 - 19963.274: 99.6739% ( 5) 00:07:50.857 19963.274 - 20064.098: 99.7011% ( 4) 00:07:50.857 20064.098 - 20164.923: 99.7283% ( 4) 00:07:50.857 20164.923 - 20265.748: 99.7554% ( 4) 00:07:50.857 20265.748 - 20366.572: 99.7826% ( 4) 00:07:50.857 20366.572 - 20467.397: 99.8098% ( 4) 00:07:50.857 20467.397 - 20568.222: 99.8370% ( 4) 00:07:50.857 20568.222 - 20669.046: 99.8709% ( 5) 00:07:50.857 20669.046 - 20769.871: 99.8981% ( 4) 00:07:50.857 20769.871 - 20870.695: 99.9253% ( 4) 00:07:50.857 20870.695 - 20971.520: 99.9524% ( 4) 00:07:50.857 20971.520 - 
00:07:50.857 20971.520 - 21072.345: 99.9796% ( 4)
00:07:50.857 21072.345 - 21173.169: 100.0000% ( 3)
00:07:50.857 
00:07:50.857 13:42:04 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']'
00:07:50.857 
00:07:50.857 real 0m2.462s
00:07:50.857 user 0m2.200s
00:07:50.857 sys 0m0.173s
00:07:50.857 13:42:04 nvme.nvme_perf -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:50.857 13:42:04 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x
00:07:50.857 ************************************
00:07:50.857 END TEST nvme_perf
00:07:50.857 ************************************
00:07:50.857 13:42:04 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:07:50.857 13:42:04 nvme -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']'
00:07:50.857 13:42:04 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:50.857 13:42:04 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:50.857 ************************************
00:07:50.857 START TEST nvme_hello_world
00:07:50.857 ************************************
00:07:50.857 13:42:04 nvme.nvme_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:07:51.119 Initializing NVMe Controllers
00:07:51.119 Attached to 0000:00:11.0
00:07:51.119 Namespace ID: 1 size: 5GB
00:07:51.119 Attached to 0000:00:13.0
00:07:51.119 Namespace ID: 1 size: 1GB
00:07:51.119 Attached to 0000:00:10.0
00:07:51.119 Namespace ID: 1 size: 6GB
00:07:51.119 Attached to 0000:00:12.0
00:07:51.119 Namespace ID: 1 size: 4GB
00:07:51.119 Namespace ID: 2 size: 4GB
00:07:51.119 Namespace ID: 3 size: 4GB
00:07:51.119 Initialization complete.
00:07:51.119 INFO: using host memory buffer for IO
00:07:51.119 Hello world!
00:07:51.119 INFO: using host memory buffer for IO
00:07:51.119 Hello world!
00:07:51.119 INFO: using host memory buffer for IO
00:07:51.119 Hello world!
00:07:51.119 INFO: using host memory buffer for IO
00:07:51.119 Hello world!
00:07:51.119 INFO: using host memory buffer for IO
00:07:51.119 Hello world!
00:07:51.119 INFO: using host memory buffer for IO
00:07:51.119 Hello world!
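The hello_world run above attaches to each controller, reports every active namespace, then does one write/read round trip per namespace. A minimal sketch of the probe/attach pattern behind that output follows; spdk_nvme_probe(), spdk_nvme_ctrlr_get_first_active_ns(), and the other calls are public SPDK NVMe driver APIs, but the surrounding structure is illustrative rather than the shipped example's exact source.

    #include <stdio.h>
    #include "spdk/env.h"
    #include "spdk/nvme.h"

    static bool
    probe_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
             struct spdk_nvme_ctrlr_opts *opts)
    {
            return true; /* attach to every controller the probe finds */
    }

    static void
    attach_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
              struct spdk_nvme_ctrlr *ctrlr,
              const struct spdk_nvme_ctrlr_opts *opts)
    {
            uint32_t nsid;

            printf("Attached to %s\n", trid->traddr);
            for (nsid = spdk_nvme_ctrlr_get_first_active_ns(ctrlr); nsid != 0;
                 nsid = spdk_nvme_ctrlr_get_next_active_ns(ctrlr, nsid)) {
                    struct spdk_nvme_ns *ns = spdk_nvme_ctrlr_get_ns(ctrlr, nsid);
                    printf("Namespace ID: %u size: %juGB\n", nsid,
                           (uintmax_t)(spdk_nvme_ns_get_size(ns) / 1000000000));
            }
    }

    int
    main(void)
    {
            struct spdk_env_opts opts;

            spdk_env_opts_init(&opts);
            opts.name = "hello_world_sketch"; /* hypothetical app name */
            if (spdk_env_init(&opts) < 0) {
                    return 1;
            }
            /* NULL transport ID: enumerate all local PCIe NVMe controllers */
            return spdk_nvme_probe(NULL, NULL, probe_cb, attach_cb, NULL) != 0;
    }

probe_cb decides per controller whether to attach; returning true everywhere is what makes all four PCIe functions show up in the log.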
00:07:51.119 ************************************
00:07:51.119 END TEST nvme_hello_world
00:07:51.119 ************************************
00:07:51.119 
00:07:51.119 real 0m0.204s
00:07:51.119 user 0m0.075s
00:07:51.119 sys 0m0.084s
00:07:51.119 13:42:04 nvme.nvme_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:51.119 13:42:04 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x
00:07:51.119 13:42:04 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
00:07:51.119 13:42:04 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:07:51.119 13:42:04 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:51.119 13:42:04 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:51.119 ************************************
00:07:51.119 START TEST nvme_sgl
00:07:51.119 ************************************
00:07:51.119 13:42:04 nvme.nvme_sgl -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
00:07:51.381 0000:00:11.0: build_io_request_0 Invalid IO length parameter
00:07:51.381 0000:00:11.0: build_io_request_1 Invalid IO length parameter
00:07:51.381 0000:00:11.0: build_io_request_3 Invalid IO length parameter
00:07:51.381 0000:00:11.0: build_io_request_8 Invalid IO length parameter
00:07:51.381 0000:00:11.0: build_io_request_9 Invalid IO length parameter
00:07:51.381 0000:00:11.0: build_io_request_11 Invalid IO length parameter
00:07:51.381 0000:00:13.0: build_io_request_0 Invalid IO length parameter
00:07:51.381 0000:00:13.0: build_io_request_1 Invalid IO length parameter
00:07:51.381 0000:00:13.0: build_io_request_2 Invalid IO length parameter
00:07:51.381 0000:00:13.0: build_io_request_3 Invalid IO length parameter
00:07:51.381 0000:00:13.0: build_io_request_4 Invalid IO length parameter
00:07:51.381 0000:00:13.0: build_io_request_5 Invalid IO length parameter
00:07:51.381 0000:00:13.0: build_io_request_6 Invalid IO length parameter
00:07:51.381 0000:00:13.0: build_io_request_7 Invalid IO length parameter
00:07:51.381 0000:00:13.0: build_io_request_8 Invalid IO length parameter
00:07:51.381 0000:00:13.0: build_io_request_9 Invalid IO length parameter
00:07:51.381 0000:00:13.0: build_io_request_10 Invalid IO length parameter
00:07:51.381 0000:00:13.0: build_io_request_11 Invalid IO length parameter
00:07:51.381 0000:00:10.0: build_io_request_0 Invalid IO length parameter
00:07:51.381 0000:00:10.0: build_io_request_1 Invalid IO length parameter
00:07:51.381 0000:00:10.0: build_io_request_3 Invalid IO length parameter
00:07:51.381 0000:00:10.0: build_io_request_8 Invalid IO length parameter
00:07:51.381 0000:00:10.0: build_io_request_9 Invalid IO length parameter
00:07:51.381 0000:00:10.0: build_io_request_11 Invalid IO length parameter
00:07:51.381 0000:00:12.0: build_io_request_0 Invalid IO length parameter
00:07:51.381 0000:00:12.0: build_io_request_1 Invalid IO length parameter
00:07:51.381 0000:00:12.0: build_io_request_2 Invalid IO length parameter
00:07:51.381 0000:00:12.0: build_io_request_3 Invalid IO length parameter
00:07:51.381 0000:00:12.0: build_io_request_4 Invalid IO length parameter
00:07:51.381 0000:00:12.0: build_io_request_5 Invalid IO length parameter
00:07:51.381 0000:00:12.0: build_io_request_6 Invalid IO length parameter
00:07:51.381 0000:00:12.0: build_io_request_7 Invalid IO length parameter
00:07:51.381 0000:00:12.0: build_io_request_8 Invalid IO length parameter
00:07:51.381 0000:00:12.0: build_io_request_9 Invalid IO length parameter
00:07:51.381 0000:00:12.0: build_io_request_10 Invalid IO length parameter
00:07:51.381 0000:00:12.0: build_io_request_11 Invalid IO length parameter
00:07:51.381 NVMe Readv/Writev Request test
00:07:51.381 Attached to 0000:00:11.0
00:07:51.381 Attached to 0000:00:13.0
00:07:51.381 Attached to 0000:00:10.0
00:07:51.381 Attached to 0000:00:12.0
00:07:51.381 0000:00:11.0: build_io_request_2 test passed
00:07:51.381 0000:00:11.0: build_io_request_4 test passed
00:07:51.381 0000:00:11.0: build_io_request_5 test passed
00:07:51.381 0000:00:11.0: build_io_request_6 test passed
00:07:51.381 0000:00:11.0: build_io_request_7 test passed
00:07:51.381 0000:00:11.0: build_io_request_10 test passed
00:07:51.381 0000:00:10.0: build_io_request_2 test passed
00:07:51.381 0000:00:10.0: build_io_request_4 test passed
00:07:51.381 0000:00:10.0: build_io_request_5 test passed
00:07:51.381 0000:00:10.0: build_io_request_6 test passed
00:07:51.381 0000:00:10.0: build_io_request_7 test passed
00:07:51.381 0000:00:10.0: build_io_request_10 test passed
00:07:51.381 Cleaning up...
00:07:51.381 
00:07:51.381 real 0m0.278s
00:07:51.381 user 0m0.142s
00:07:51.381 sys 0m0.088s
00:07:51.381 13:42:05 nvme.nvme_sgl -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:51.381 13:42:05 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x
00:07:51.381 ************************************
00:07:51.381 END TEST nvme_sgl
00:07:51.381 ************************************
00:07:51.381 13:42:05 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
00:07:51.381 13:42:05 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:07:51.381 13:42:05 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:51.381 13:42:05 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:51.381 ************************************
00:07:51.381 START TEST nvme_e2edp
00:07:51.381 ************************************
00:07:51.381 13:42:05 nvme.nvme_e2edp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
00:07:51.641 NVMe Write/Read with End-to-End data protection test
00:07:51.641 Attached to 0000:00:11.0
00:07:51.641 Attached to 0000:00:13.0
00:07:51.641 Attached to 0000:00:10.0
00:07:51.641 Attached to 0000:00:12.0
00:07:51.642 Cleaning up...
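The nvme_sgl pass/fail pairs above come from building vectored (scatter-gather) read/write requests of assorted total lengths; any request whose payload is not a positive whole number of sectors is rejected up front, which is what the "Invalid IO length parameter" lines assert. A sketch of a vectored read using the public readv API with SGE callbacks; struct sgl_ctx and the single-segment callback bodies are illustrative, not the sgl test's source.

    #include <errno.h>
    #include "spdk/nvme.h"

    struct sgl_ctx {
            void     *buf;     /* one contiguous payload, for simplicity */
            uint32_t  len;     /* total transfer size in bytes */
            uint32_t  offset;
    };

    static void
    reset_sgl(void *arg, uint32_t offset)
    {
            ((struct sgl_ctx *)arg)->offset = offset;
    }

    static int
    next_sge(void *arg, void **address, uint32_t *length)
    {
            struct sgl_ctx *ctx = arg;

            *address = (uint8_t *)ctx->buf + ctx->offset;
            *length = ctx->len - ctx->offset;
            ctx->offset = ctx->len;
            return 0;
    }

    static int
    submit_vectored_read(struct spdk_nvme_ns *ns, struct spdk_nvme_qpair *qpair,
                         struct sgl_ctx *ctx, spdk_nvme_cmd_cb cb)
    {
            uint32_t sector = spdk_nvme_ns_get_sector_size(ns);

            /* a total length that is not a whole number of sectors is the
             * condition that makes build_io_request_N fail above */
            if (ctx->len == 0 || ctx->len % sector != 0) {
                    return -EINVAL;
            }
            return spdk_nvme_ns_cmd_readv(ns, qpair, 0 /* start LBA */,
                                          ctx->len / sector, cb, ctx,
                                          0 /* io_flags */, reset_sgl, next_sge);
    }

The driver walks the payload through reset_sgl/next_sge instead of taking a flat buffer, which is why odd-length request shapes can be validated before anything reaches the device.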
00:07:51.642 
00:07:51.642 real 0m0.192s
00:07:51.642 user 0m0.061s
00:07:51.642 sys 0m0.085s
00:07:51.642 13:42:05 nvme.nvme_e2edp -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:51.642 ************************************
00:07:51.642 END TEST nvme_e2edp
00:07:51.642 ************************************
00:07:51.642 13:42:05 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x
00:07:51.642 13:42:05 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve
00:07:51.642 13:42:05 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:07:51.642 13:42:05 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:51.642 13:42:05 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:51.642 ************************************
00:07:51.642 START TEST nvme_reserve
00:07:51.642 ************************************
00:07:51.642 13:42:05 nvme.nvme_reserve -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve
00:07:51.903 =====================================================
00:07:51.903 NVMe Controller at PCI bus 0, device 17, function 0
00:07:51.903 =====================================================
00:07:51.903 Reservations: Not Supported
00:07:51.903 =====================================================
00:07:51.903 NVMe Controller at PCI bus 0, device 19, function 0
00:07:51.903 =====================================================
00:07:51.903 Reservations: Not Supported
00:07:51.903 =====================================================
00:07:51.903 NVMe Controller at PCI bus 0, device 16, function 0
00:07:51.903 =====================================================
00:07:51.903 Reservations: Not Supported
00:07:51.903 =====================================================
00:07:51.903 NVMe Controller at PCI bus 0, device 18, function 0
00:07:51.903 =====================================================
00:07:51.903 Reservations: Not Supported
00:07:51.903 Reservation test passed
00:07:51.903 
00:07:51.903 real 0m0.204s
00:07:51.903 user 0m0.071s
00:07:51.903 sys 0m0.090s
00:07:51.903 ************************************
00:07:51.903 END TEST nvme_reserve
00:07:51.903 ************************************
00:07:51.903 13:42:05 nvme.nvme_reserve -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:51.903 13:42:05 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x
00:07:51.903 13:42:05 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection
00:07:51.903 13:42:05 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:07:51.903 13:42:05 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:51.903 13:42:05 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:51.903 ************************************
00:07:51.903 START TEST nvme_err_injection
00:07:51.903 ************************************
00:07:51.903 13:42:05 nvme.nvme_err_injection -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection
00:07:52.165 NVMe Error Injection test
00:07:52.165 Attached to 0000:00:11.0
00:07:52.165 Attached to 0000:00:13.0
00:07:52.165 Attached to 0000:00:10.0
00:07:52.165 Attached to 0000:00:12.0
00:07:52.165 0000:00:11.0: get features failed as expected
00:07:52.165 0000:00:13.0: get features failed as expected
00:07:52.165 0000:00:10.0: get features failed as expected
00:07:52.165 0000:00:12.0: get features failed as expected
00:07:52.165 0000:00:11.0: get features successfully as expected
00:07:52.165 0000:00:13.0: get features successfully as expected
00:07:52.165 0000:00:10.0: get features successfully as expected
00:07:52.165 0000:00:12.0: get features successfully as expected
00:07:52.165 0000:00:12.0: read failed as expected
00:07:52.165 0000:00:10.0: read failed as expected
00:07:52.165 0000:00:13.0: read failed as expected
00:07:52.165 0000:00:11.0: read failed as expected
00:07:52.165 0000:00:11.0: read successfully as expected
00:07:52.165 0000:00:13.0: read successfully as expected
00:07:52.165 0000:00:10.0: read successfully as expected
00:07:52.165 0000:00:12.0: read successfully as expected
00:07:52.165 Cleaning up...
00:07:52.165 
00:07:52.165 real 0m0.213s
00:07:52.165 user 0m0.077s
00:07:52.165 sys 0m0.088s
00:07:52.165 13:42:05 nvme.nvme_err_injection -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:52.165 13:42:05 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x
00:07:52.165 ************************************
00:07:52.165 END TEST nvme_err_injection
00:07:52.165 ************************************
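The failed/succeeded pairs in the nvme_err_injection output above correspond to arming an injected command error, issuing the command and expecting failure, then clearing the injection and reissuing. A sketch using SPDK's error-injection helper; the Get Features opcode and Invalid Field status chosen here are an assumption about what the test selects, and spdk_nvme_qpair_remove_cmd_error_injection() is the matching undo.

    #include "spdk/nvme.h"

    /* Arm exactly one injected failure for Get Features on the admin
     * queue; passing qpair == NULL targets the admin qpair. */
    static int
    arm_get_features_failure(struct spdk_nvme_ctrlr *ctrlr)
    {
            return spdk_nvme_qpair_add_cmd_error_injection(ctrlr, NULL,
                            SPDK_NVME_OPC_GET_FEATURES,
                            false /* do_not_submit */,
                            0     /* timeout_in_us */,
                            1     /* err_count */,
                            SPDK_NVME_SCT_GENERIC,
                            SPDK_NVME_SC_INVALID_FIELD);
    }

Because err_count is 1, the very next Get Features fails with the injected status and the one after it succeeds, matching the failed/successfully pairs in the log.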
00:07:52.165 13:42:05 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0
00:07:52.165 13:42:05 nvme -- common/autotest_common.sh@1101 -- # '[' 9 -le 1 ']'
00:07:52.165 13:42:05 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:52.165 13:42:05 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:52.165 ************************************
00:07:52.165 START TEST nvme_overhead
00:07:52.165 ************************************
00:07:52.165 13:42:05 nvme.nvme_overhead -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0
00:07:53.553 Initializing NVMe Controllers
00:07:53.553 Attached to 0000:00:11.0
00:07:53.553 Attached to 0000:00:13.0
00:07:53.553 Attached to 0000:00:10.0
00:07:53.553 Attached to 0000:00:12.0
00:07:53.553 Initialization complete. Launching workers.
00:07:53.553 submit (in ns)   avg, min, max = 11440.2, 9670.0, 89652.3
00:07:53.553 complete (in ns) avg, min, max = 7536.8, 7164.6, 48996.2
00:07:53.553 
00:07:53.553 Submit histogram
00:07:53.553 ================
00:07:53.553        Range in us     Cumulative Count
00:07:53.554 [bucket rows condensed for readability: cumulative count rises from 0.0114% at 9.649 - 9.698 us, with the bulk of submissions between 10.831 and 11.422 us (47.0468% by 11.126 - 11.175 us, 82.5998% by 11.372 - 11.422 us), reaching 99.9886% at 68.529 - 68.923 us and 100.0000% at 89.403 - 89.797 us]
00:07:53.554 
00:07:53.554 Complete histogram
00:07:53.554 ==================
00:07:53.554        Range in us     Cumulative Count
00:07:53.554 [bucket rows condensed for readability: cumulative count rises from 0.0342% at 7.138 - 7.188 us, with the bulk of completions between 7.237 and 7.68 us (85.0969% by 7.483 - 7.532 us), reaching 100.0000% at 48.837 - 49.034 us]
00:07:53.555 
00:07:53.555 ************************************
00:07:53.555 END TEST nvme_overhead
00:07:53.555 ************************************
00:07:53.555 
00:07:53.555 real 0m1.211s
00:07:53.555 user 0m1.062s
00:07:53.555 sys 0m0.099s
00:07:53.555 13:42:07 nvme.nvme_overhead -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:53.555 13:42:07 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x
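The submit/complete averages in the nvme_overhead output above are per-IO software costs: how long the submission call itself takes versus how long completion polling takes. A sketch of how such numbers can be gathered with SPDK's TSC helpers; this is an illustration of the measurement idea, not the overhead tool's actual source, and io_complete_cb is a name introduced here.

    #include <stdio.h>
    #include "spdk/env.h"
    #include "spdk/nvme.h"

    static void
    io_complete_cb(void *arg, const struct spdk_nvme_cpl *cpl)
    {
            *(volatile bool *)arg = true;
    }

    /* Time one read: the submit cost is the call itself, the complete
     * cost is the polling loop. Averaged over many IOs this produces
     * figures like the avg/min/max lines above. */
    static void
    time_one_read(struct spdk_nvme_ns *ns, struct spdk_nvme_qpair *qpair, void *buf)
    {
            volatile bool done = false;
            uint64_t hz = spdk_get_ticks_hz();
            uint64_t t0, submit_ns, complete_ns;

            t0 = spdk_get_ticks();
            spdk_nvme_ns_cmd_read(ns, qpair, buf, 0 /* start LBA */,
                                  1 /* LBA count */, io_complete_cb,
                                  (void *)&done, 0 /* io_flags */);
            submit_ns = (spdk_get_ticks() - t0) * 1000000000ULL / hz;

            t0 = spdk_get_ticks();
            while (!done) {
                    spdk_nvme_qpair_process_completions(qpair, 0);
            }
            complete_ns = (spdk_get_ticks() - t0) * 1000000000ULL / hz;

            printf("submit: %ju ns, complete: %ju ns\n",
                   (uintmax_t)submit_ns, (uintmax_t)complete_ns);
    }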
00:07:53.555 13:42:07 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0
00:07:53.555 13:42:07 nvme -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']'
00:07:53.555 13:42:07 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:53.555 13:42:07 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:53.555 ************************************
00:07:53.555 START TEST nvme_arbitration
00:07:53.555 ************************************
00:07:53.555 13:42:07 nvme.nvme_arbitration -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0
00:07:56.852 Initializing NVMe Controllers
00:07:56.852 Attached to 0000:00:11.0
00:07:56.852 Attached to 0000:00:13.0
00:07:56.852 Attached to 0000:00:10.0
00:07:56.852 Attached to 0000:00:12.0
00:07:56.852 Associating QEMU NVMe Ctrl (12341 ) with lcore 0
00:07:56.852 Associating QEMU NVMe Ctrl (12343 ) with lcore 1
00:07:56.852 Associating QEMU NVMe Ctrl (12340 ) with lcore 2
00:07:56.852 Associating QEMU NVMe Ctrl (12342 ) with lcore 3
00:07:56.852 Associating QEMU NVMe Ctrl (12342 ) with lcore 0
00:07:56.852 Associating QEMU NVMe Ctrl (12342 ) with lcore 1
00:07:56.852 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration:
00:07:56.852 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0
00:07:56.852 Initialization complete. Launching workers.
00:07:56.852 Starting thread on core 1 with urgent priority queue
00:07:56.852 Starting thread on core 2 with urgent priority queue
00:07:56.852 Starting thread on core 3 with urgent priority queue
00:07:56.852 Starting thread on core 0 with urgent priority queue
00:07:56.852 QEMU NVMe Ctrl (12341 ) core 0: 896.00 IO/s 111.61 secs/100000 ios
00:07:56.852 QEMU NVMe Ctrl (12342 ) core 0: 896.00 IO/s 111.61 secs/100000 ios
00:07:56.852 QEMU NVMe Ctrl (12343 ) core 1: 874.67 IO/s 114.33 secs/100000 ios
00:07:56.852 QEMU NVMe Ctrl (12342 ) core 1: 874.67 IO/s 114.33 secs/100000 ios
00:07:56.852 QEMU NVMe Ctrl (12340 ) core 2: 917.33 IO/s 109.01 secs/100000 ios
00:07:56.852 QEMU NVMe Ctrl (12342 ) core 3: 938.67 IO/s 106.53 secs/100000 ios
00:07:56.852 ========================================================
00:07:56.852 
00:07:56.852 ************************************
00:07:56.852 END TEST nvme_arbitration
00:07:56.852 ************************************
00:07:56.852 
00:07:56.852 real 0m3.316s
00:07:56.852 user 0m9.217s
00:07:56.852 sys 0m0.130s
00:07:56.852 13:42:10 nvme.nvme_arbitration -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:56.852 13:42:10 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x
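The arbitration run above starts one worker per core on an "urgent priority queue", which only has an effect when the controller is using weighted round robin arbitration. Allocating an I/O qpair in a non-default priority class is a two-step pattern in the SPDK API; this sketch shows that pattern only, not the full example.

    #include "spdk/nvme.h"

    static struct spdk_nvme_qpair *
    alloc_urgent_qpair(struct spdk_nvme_ctrlr *ctrlr)
    {
            struct spdk_nvme_io_qpair_opts opts;

            spdk_nvme_ctrlr_get_default_io_qpair_opts(ctrlr, &opts, sizeof(opts));
            /* priority classes order URGENT > HIGH > MEDIUM > LOW; qprio is
             * only meaningful when WRR arbitration is active on the ctrlr */
            opts.qprio = SPDK_NVME_QPRIO_URGENT;
            return spdk_nvme_ctrlr_alloc_io_qpair(ctrlr, &opts, sizeof(opts));
    }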
00:07:56.852 13:42:10 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0
00:07:56.852 13:42:10 nvme -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']'
00:07:56.852 13:42:10 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:56.852 13:42:10 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:56.852 ************************************
00:07:56.852 START TEST nvme_single_aen
00:07:56.852 ************************************
00:07:56.852 13:42:10 nvme.nvme_single_aen -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0
00:07:57.111 Asynchronous Event Request test
00:07:57.111 Attached to 0000:00:11.0
00:07:57.111 Attached to 0000:00:13.0
00:07:57.111 Attached to 0000:00:10.0
00:07:57.111 Attached to 0000:00:12.0
00:07:57.111 Reset controller to setup AER completions for this process
00:07:57.111 Registering asynchronous event callbacks...
00:07:57.111 Getting orig temperature thresholds of all controllers
00:07:57.111 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:07:57.111 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:07:57.111 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:07:57.111 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:07:57.111 Setting all controllers temperature threshold low to trigger AER
00:07:57.111 Waiting for all controllers temperature threshold to be set lower
00:07:57.111 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:07:57.111 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0
00:07:57.111 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:07:57.111 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0
00:07:57.111 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:07:57.111 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0
00:07:57.111 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:07:57.111 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0
00:07:57.111 Waiting for all controllers to trigger AER and reset threshold
00:07:57.111 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius)
00:07:57.111 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius)
00:07:57.111 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius)
00:07:57.111 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius)
00:07:57.111 Cleaning up...
00:07:57.111 ************************************
00:07:57.111 END TEST nvme_single_aen
00:07:57.111 ************************************
00:07:57.111 
00:07:57.111 real 0m0.207s
00:07:57.111 user 0m0.068s
00:07:57.111 sys 0m0.095s
00:07:57.111 13:42:10 nvme.nvme_single_aen -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:57.111 13:42:10 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x
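The nvme_single_aen sequence above reads each controller's temperature threshold, lowers it below the reported 323 K current temperature to force an AER, then restores it from the callback. A sketch of the trigger side; the 200 K value and the callback names are illustrative, while the two API calls are the real SPDK ones.

    #include <stdio.h>
    #include "spdk/nvme.h"

    static void
    set_feature_done(void *arg, const struct spdk_nvme_cpl *cpl)
    {
            /* threshold updated; the AER should follow shortly */
    }

    static void
    aer_cb(void *arg, const struct spdk_nvme_cpl *cpl)
    {
            /* fires when the controller reports temperature over threshold;
             * the log's "aer_cb for log page 2" lines come from a callback
             * like this one */
            printf("aer_cb: cdw0=0x%x\n", cpl->cdw0);
    }

    static int
    trigger_temperature_aer(struct spdk_nvme_ctrlr *ctrlr)
    {
            spdk_nvme_ctrlr_register_aer_callback(ctrlr, aer_cb, NULL);
            /* cdw11 carries the threshold in Kelvin for this feature;
             * 200 K is far below the 323 K reading, so the AER must fire */
            return spdk_nvme_ctrlr_cmd_set_feature(ctrlr,
                            SPDK_NVME_FEAT_TEMPERATURE_THRESHOLD,
                            200, 0, NULL, 0, set_feature_done, NULL);
    }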
00:07:57.111 13:42:10 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers
00:07:57.111 13:42:10 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:07:57.111 13:42:10 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:57.111 13:42:10 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:57.111 ************************************
00:07:57.111 START TEST nvme_doorbell_aers
00:07:57.111 ************************************
00:07:57.111 13:42:10 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1125 -- # nvme_doorbell_aers
00:07:57.111 13:42:10 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=()
00:07:57.111 13:42:10 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf
00:07:57.111 13:42:10 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs))
00:07:57.111 13:42:10 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs
00:07:57.111 13:42:10 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1496 -- # bdfs=()
00:07:57.111 13:42:10 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1496 -- # local bdfs
00:07:57.111 13:42:10 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:07:57.111 13:42:10 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:07:57.111 13:42:10 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr'
00:07:57.111 13:42:10 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # (( 4 == 0 ))
00:07:57.111 13:42:10 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0
00:07:57.111 13:42:10 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}"
00:07:57.111 13:42:10 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0'
00:07:57.369 [2024-10-15 13:42:11.022675] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63211) is not found. Dropping the request.
00:08:07.344 Executing: test_write_invalid_db
00:08:07.344 Waiting for AER completion...
00:08:07.344 Failure: test_write_invalid_db
00:08:07.344 
00:08:07.344 Executing: test_invalid_db_write_overflow_sq
00:08:07.344 Waiting for AER completion...
00:08:07.344 Failure: test_invalid_db_write_overflow_sq
00:08:07.344 
00:08:07.344 Executing: test_invalid_db_write_overflow_cq
00:08:07.344 Waiting for AER completion...
00:08:07.344 Failure: test_invalid_db_write_overflow_cq
00:08:07.344 
00:08:07.344 13:42:20 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}"
00:08:07.344 13:42:20 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0'
00:08:07.345 [2024-10-15 13:42:21.088885] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63211) is not found. Dropping the request.
00:08:17.386 Executing: test_write_invalid_db
00:08:17.386 Waiting for AER completion...
00:08:17.386 Failure: test_write_invalid_db
00:08:17.386 
00:08:17.386 Executing: test_invalid_db_write_overflow_sq
00:08:17.386 Waiting for AER completion...
00:08:17.386 Failure: test_invalid_db_write_overflow_sq
00:08:17.386 
00:08:17.386 Executing: test_invalid_db_write_overflow_cq
00:08:17.386 Waiting for AER completion...
00:08:17.386 Failure: test_invalid_db_write_overflow_cq
00:08:17.386 
00:08:17.386 13:42:30 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}"
00:08:17.386 13:42:30 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0'
00:08:17.387 [2024-10-15 13:42:31.092463] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63211) is not found. Dropping the request.
00:08:27.385 Executing: test_write_invalid_db
00:08:27.385 Waiting for AER completion...
00:08:27.385 Failure: test_write_invalid_db
00:08:27.385 
00:08:27.385 Executing: test_invalid_db_write_overflow_sq
00:08:27.385 Waiting for AER completion...
00:08:27.385 Failure: test_invalid_db_write_overflow_sq
00:08:27.385 
00:08:27.385 Executing: test_invalid_db_write_overflow_cq
00:08:27.385 Waiting for AER completion...
00:08:27.385 Failure: test_invalid_db_write_overflow_cq
00:08:27.385 
00:08:27.385 13:42:40 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}"
00:08:27.385 13:42:40 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0'
00:08:27.386 [2024-10-15 13:42:41.125117] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63211) is not found. Dropping the request.
00:08:37.374 Executing: test_write_invalid_db
00:08:37.374 Waiting for AER completion...
00:08:37.374 Failure: test_write_invalid_db
00:08:37.374 
00:08:37.374 Executing: test_invalid_db_write_overflow_sq
00:08:37.374 Waiting for AER completion...
00:08:37.374 Failure: test_invalid_db_write_overflow_sq
00:08:37.374 
00:08:37.374 Executing: test_invalid_db_write_overflow_cq
00:08:37.374 Waiting for AER completion...
00:08:37.375 Failure: test_invalid_db_write_overflow_cq
00:08:37.375 
00:08:37.375 
00:08:37.375 real 0m40.195s
00:08:37.375 user 0m34.049s
00:08:37.375 sys 0m5.755s
00:08:37.375 13:42:50 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1126 -- # xtrace_disable
00:08:37.375 13:42:50 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x
00:08:37.375 ************************************
00:08:37.375 END TEST nvme_doorbell_aers
00:08:37.375 ************************************
00:08:37.375 13:42:50 nvme -- nvme/nvme.sh@97 -- # uname
00:08:37.375 13:42:50 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']'
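Each doorbell_aers iteration above provokes invalid doorbell writes and then waits for the controller to raise an error-reporting AER; the "Waiting for AER completion..." lines are that wait, bounded by the 10 s timeout wrapper. The harness's doorbell-poking internals are not visible in the log, but the waiting side reduces to a poll loop over the admin queue, sketched here with public SPDK calls and an illustrative flag name.

    #include "spdk/nvme.h"

    static volatile bool g_aer_seen;

    static void
    error_aer_cb(void *arg, const struct spdk_nvme_cpl *cpl)
    {
            g_aer_seen = true;
    }

    static void
    wait_for_error_aer(struct spdk_nvme_ctrlr *ctrlr)
    {
            spdk_nvme_ctrlr_register_aer_callback(ctrlr, error_aer_cb, NULL);
            while (!g_aer_seen) {
                    /* AERs complete on the admin queue, so keep polling it;
                     * the external `timeout` command bounds this loop */
                    spdk_nvme_ctrlr_process_admin_completions(ctrlr);
            }
    }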
00:08:37.375 13:42:50 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0
00:08:37.375 13:42:50 nvme -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']'
00:08:37.375 13:42:50 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable
00:08:37.375 13:42:50 nvme -- common/autotest_common.sh@10 -- # set +x
00:08:37.375 ************************************
00:08:37.375 START TEST nvme_multi_aen
00:08:37.375 ************************************
00:08:37.375 13:42:51 nvme.nvme_multi_aen -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0
00:08:37.637 [2024-10-15 13:42:51.174008] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63211) is not found. Dropping the request.
00:08:37.637 [2024-10-15 13:42:51.174241] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63211) is not found. Dropping the request.
00:08:37.637 [2024-10-15 13:42:51.174313] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63211) is not found. Dropping the request.
00:08:37.637 [2024-10-15 13:42:51.175800] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63211) is not found. Dropping the request.
00:08:37.637 [2024-10-15 13:42:51.175928] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63211) is not found. Dropping the request.
00:08:37.637 [2024-10-15 13:42:51.175945] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63211) is not found. Dropping the request.
00:08:37.637 [2024-10-15 13:42:51.176985] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63211) is not found. Dropping the request.
00:08:37.637 [2024-10-15 13:42:51.177012] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63211) is not found. Dropping the request.
00:08:37.637 [2024-10-15 13:42:51.177021] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63211) is not found. Dropping the request.
00:08:37.637 [2024-10-15 13:42:51.178077] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63211) is not found. Dropping the request.
00:08:37.637 [2024-10-15 13:42:51.178176] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63211) is not found. Dropping the request.
00:08:37.637 [2024-10-15 13:42:51.178248] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63211) is not found. Dropping the request.
00:08:37.637 Child process pid: 63732
00:08:37.637 [Child] Asynchronous Event Request test
00:08:37.637 [Child] Attached to 0000:00:11.0
00:08:37.637 [Child] Attached to 0000:00:13.0
00:08:37.637 [Child] Attached to 0000:00:10.0
00:08:37.637 [Child] Attached to 0000:00:12.0
00:08:37.637 [Child] Registering asynchronous event callbacks...
00:08:37.637 [Child] Getting orig temperature thresholds of all controllers
00:08:37.637 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:08:37.637 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:08:37.637 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:08:37.637 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:08:37.637 [Child] Waiting for all controllers to trigger AER and reset threshold
00:08:37.637 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:08:37.637 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:08:37.637 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:08:37.637 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:08:37.637 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius)
00:08:37.637 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius)
00:08:37.637 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius)
00:08:37.637 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius)
00:08:37.637 [Child] Cleaning up...
00:08:37.898 Asynchronous Event Request test
00:08:37.898 Attached to 0000:00:11.0
00:08:37.898 Attached to 0000:00:13.0
00:08:37.898 Attached to 0000:00:10.0
00:08:37.898 Attached to 0000:00:12.0
00:08:37.898 Reset controller to setup AER completions for this process
00:08:37.898 Registering asynchronous event callbacks...
00:08:37.898 Getting orig temperature thresholds of all controllers
00:08:37.898 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:08:37.898 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:08:37.898 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:08:37.898 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:08:37.898 Setting all controllers temperature threshold low to trigger AER
00:08:37.898 Waiting for all controllers temperature threshold to be set lower
00:08:37.898 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:08:37.898 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0
00:08:37.898 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:08:37.898 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0
00:08:37.898 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:08:37.898 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0
00:08:37.898 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:08:37.898 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0
00:08:37.898 Waiting for all controllers to trigger AER and reset threshold
00:08:37.898 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius)
00:08:37.898 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius)
00:08:37.898 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius)
00:08:37.898 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius)
00:08:37.898 Cleaning up...
00:08:37.898 
00:08:37.898 real 0m0.433s
00:08:37.898 user 0m0.124s
00:08:37.898 sys 0m0.202s
00:08:37.898 13:42:51 nvme.nvme_multi_aen -- common/autotest_common.sh@1126 -- # xtrace_disable
00:08:37.898 13:42:51 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x
00:08:37.898 ************************************
00:08:37.898 END TEST nvme_multi_aen
00:08:37.898 ************************************
00:08:37.898 13:42:51 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000
00:08:37.898 13:42:51 nvme -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']'
00:08:37.898 13:42:51 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable
00:08:37.898 13:42:51 nvme -- common/autotest_common.sh@10 -- # set +x
00:08:37.898 ************************************
00:08:37.898 START TEST nvme_startup
00:08:37.898 ************************************
00:08:37.898 13:42:51 nvme.nvme_startup -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000
00:08:37.898 Initializing NVMe Controllers
00:08:37.898 Attached to 0000:00:11.0
00:08:37.898 Attached to 0000:00:13.0
00:08:37.898 Attached to 0000:00:10.0
00:08:37.898 Attached to 0000:00:12.0
00:08:37.898 Initialization complete.
00:08:37.898 Time used:134265.141 (us).
00:08:37.898 ************************************ 00:08:37.898 END TEST nvme_startup 00:08:37.898 ************************************ 00:08:37.898 00:08:37.898 real 0m0.196s 00:08:37.898 user 0m0.057s 00:08:37.898 sys 0m0.093s 00:08:37.898 13:42:51 nvme.nvme_startup -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:37.898 13:42:51 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:08:38.160 13:42:51 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:08:38.160 13:42:51 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:38.160 13:42:51 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:38.160 13:42:51 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:38.160 ************************************ 00:08:38.160 START TEST nvme_multi_secondary 00:08:38.160 ************************************ 00:08:38.160 13:42:51 nvme.nvme_multi_secondary -- common/autotest_common.sh@1125 -- # nvme_multi_secondary 00:08:38.160 13:42:51 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=63788 00:08:38.160 13:42:51 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:08:38.160 13:42:51 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=63789 00:08:38.160 13:42:51 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:08:38.160 13:42:51 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:08:41.464 Initializing NVMe Controllers 00:08:41.464 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:41.464 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:41.464 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:41.464 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:41.464 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:08:41.464 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:08:41.464 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:08:41.464 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:08:41.464 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:08:41.464 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:08:41.464 Initialization complete. Launching workers. 
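nvme_multi_secondary runs three spdk_nvme_perf instances against the same four controllers: they share controller state through DPDK shared memory via the common -i 0 shm id (the first instance becomes the DPDK primary, the others attach as secondaries), and disjoint -c masks pin each process to its own core. Restating the pattern from the trace above, with paths and flags taken verbatim from this log:

  # Primary/secondary layout used by the test above.
  PERF=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf
  "$PERF" -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 &   # longest-running instance, core 0
  pid0=$!
  "$PERF" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 &   # secondary on core 1
  "$PERF" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 &   # secondary on core 2
  wait "$pid0"                                        # matches the 'wait 63788' below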
00:08:41.464 ======================================================== 00:08:41.464 Latency(us) 00:08:41.464 Device Information : IOPS MiB/s Average min max 00:08:41.464 PCIE (0000:00:11.0) NSID 1 from core 1: 4892.16 19.11 3270.07 1680.10 7580.29 00:08:41.464 PCIE (0000:00:13.0) NSID 1 from core 1: 4892.16 19.11 3270.24 1719.24 7571.89 00:08:41.464 PCIE (0000:00:10.0) NSID 1 from core 1: 4892.16 19.11 3269.20 1826.91 6928.47 00:08:41.464 PCIE (0000:00:12.0) NSID 1 from core 1: 4892.16 19.11 3270.46 1846.34 6722.17 00:08:41.464 PCIE (0000:00:12.0) NSID 2 from core 1: 4892.16 19.11 3270.74 1804.15 6924.00 00:08:41.464 PCIE (0000:00:12.0) NSID 3 from core 1: 4892.16 19.11 3270.88 1650.67 7605.76 00:08:41.464 ======================================================== 00:08:41.464 Total : 29352.99 114.66 3270.26 1650.67 7605.76 00:08:41.464 00:08:41.464 Initializing NVMe Controllers 00:08:41.464 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:41.464 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:41.464 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:41.464 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:41.464 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:08:41.464 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:08:41.464 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:08:41.464 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:08:41.464 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:08:41.464 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:08:41.464 Initialization complete. Launching workers. 00:08:41.464 ======================================================== 00:08:41.464 Latency(us) 00:08:41.464 Device Information : IOPS MiB/s Average min max 00:08:41.464 PCIE (0000:00:11.0) NSID 1 from core 2: 2031.51 7.94 7875.44 2159.60 14728.35 00:08:41.464 PCIE (0000:00:13.0) NSID 1 from core 2: 2031.51 7.94 7875.70 2208.14 14473.91 00:08:41.464 PCIE (0000:00:10.0) NSID 1 from core 2: 2031.51 7.94 7874.89 2081.17 16845.34 00:08:41.464 PCIE (0000:00:12.0) NSID 1 from core 2: 2031.51 7.94 7875.66 2015.61 16375.97 00:08:41.464 PCIE (0000:00:12.0) NSID 2 from core 2: 2031.51 7.94 7875.57 1906.64 14608.90 00:08:41.464 PCIE (0000:00:12.0) NSID 3 from core 2: 2031.51 7.94 7875.83 1633.72 14352.31 00:08:41.464 ======================================================== 00:08:41.464 Total : 12189.06 47.61 7875.52 1633.72 16845.34 00:08:41.464 00:08:41.464 13:42:55 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 63788 00:08:43.381 Initializing NVMe Controllers 00:08:43.381 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:43.381 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:43.381 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:43.381 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:43.381 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:08:43.381 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:08:43.381 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:08:43.381 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:08:43.381 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:08:43.381 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:08:43.381 Initialization complete. Launching workers. 
00:08:43.381 ======================================================== 00:08:43.381 Latency(us) 00:08:43.381 Device Information : IOPS MiB/s Average min max 00:08:43.381 PCIE (0000:00:11.0) NSID 1 from core 0: 7377.91 28.82 2168.22 814.51 8416.63 00:08:43.381 PCIE (0000:00:13.0) NSID 1 from core 0: 7377.91 28.82 2168.18 825.52 8754.02 00:08:43.381 PCIE (0000:00:10.0) NSID 1 from core 0: 7377.91 28.82 2167.21 837.18 9169.37 00:08:43.381 PCIE (0000:00:12.0) NSID 1 from core 0: 7377.91 28.82 2168.11 850.80 8622.19 00:08:43.381 PCIE (0000:00:12.0) NSID 2 from core 0: 7377.91 28.82 2168.08 728.03 8371.26 00:08:43.381 PCIE (0000:00:12.0) NSID 3 from core 0: 7377.91 28.82 2168.06 705.26 8262.40 00:08:43.381 ======================================================== 00:08:43.381 Total : 44267.45 172.92 2167.98 705.26 9169.37 00:08:43.381 00:08:43.381 13:42:56 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 63789 00:08:43.381 13:42:56 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=63858 00:08:43.381 13:42:56 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:08:43.381 13:42:56 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=63859 00:08:43.381 13:42:56 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:08:43.381 13:42:56 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:08:46.691 Initializing NVMe Controllers 00:08:46.691 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:46.691 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:46.691 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:46.691 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:46.691 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:08:46.691 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:08:46.691 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:08:46.691 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:08:46.691 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:08:46.691 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:08:46.691 Initialization complete. Launching workers. 
00:08:46.692 ======================================================== 00:08:46.692 Latency(us) 00:08:46.692 Device Information : IOPS MiB/s Average min max 00:08:46.692 PCIE (0000:00:11.0) NSID 1 from core 1: 5097.86 19.91 3138.10 1055.00 6428.24 00:08:46.692 PCIE (0000:00:13.0) NSID 1 from core 1: 5097.86 19.91 3138.17 1015.30 6622.00 00:08:46.692 PCIE (0000:00:10.0) NSID 1 from core 1: 5097.86 19.91 3137.20 962.35 7372.96 00:08:46.692 PCIE (0000:00:12.0) NSID 1 from core 1: 5097.86 19.91 3138.26 948.15 7656.78 00:08:46.692 PCIE (0000:00:12.0) NSID 2 from core 1: 5097.86 19.91 3138.61 1071.64 7136.08 00:08:46.692 PCIE (0000:00:12.0) NSID 3 from core 1: 5097.86 19.91 3138.60 1063.73 6583.96 00:08:46.692 ======================================================== 00:08:46.692 Total : 30587.17 119.48 3138.16 948.15 7656.78 00:08:46.692 00:08:46.692 Initializing NVMe Controllers 00:08:46.692 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:46.692 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:46.692 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:46.692 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:46.692 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:08:46.692 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:08:46.692 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:08:46.692 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:08:46.692 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:08:46.692 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:08:46.692 Initialization complete. Launching workers. 00:08:46.692 ======================================================== 00:08:46.692 Latency(us) 00:08:46.692 Device Information : IOPS MiB/s Average min max 00:08:46.692 PCIE (0000:00:11.0) NSID 1 from core 0: 5065.92 19.79 3157.88 1084.98 6579.80 00:08:46.692 PCIE (0000:00:13.0) NSID 1 from core 0: 5065.92 19.79 3158.15 1054.80 7204.54 00:08:46.692 PCIE (0000:00:10.0) NSID 1 from core 0: 5065.92 19.79 3157.06 1008.39 7096.63 00:08:46.692 PCIE (0000:00:12.0) NSID 1 from core 0: 5065.92 19.79 3158.12 973.21 6968.81 00:08:46.692 PCIE (0000:00:12.0) NSID 2 from core 0: 5065.92 19.79 3158.40 1053.49 7763.41 00:08:46.692 PCIE (0000:00:12.0) NSID 3 from core 0: 5065.92 19.79 3158.36 961.56 7364.06 00:08:46.692 ======================================================== 00:08:46.692 Total : 30395.52 118.73 3157.99 961.56 7763.41 00:08:46.692 00:08:48.612 Initializing NVMe Controllers 00:08:48.612 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:48.612 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:48.612 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:48.612 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:48.612 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:08:48.612 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:08:48.612 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:08:48.612 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:08:48.612 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:08:48.612 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:08:48.612 Initialization complete. Launching workers. 
00:08:48.612 ======================================================== 00:08:48.612 Latency(us) 00:08:48.612 Device Information : IOPS MiB/s Average min max 00:08:48.612 PCIE (0000:00:11.0) NSID 1 from core 2: 3222.22 12.59 4965.18 1079.11 17222.37 00:08:48.612 PCIE (0000:00:13.0) NSID 1 from core 2: 3222.22 12.59 4965.14 1068.12 17803.09 00:08:48.612 PCIE (0000:00:10.0) NSID 1 from core 2: 3222.22 12.59 4963.00 1001.54 14107.22 00:08:48.612 PCIE (0000:00:12.0) NSID 1 from core 2: 3222.22 12.59 4965.07 907.66 15688.72 00:08:48.612 PCIE (0000:00:12.0) NSID 2 from core 2: 3222.22 12.59 4965.04 1059.72 16544.24 00:08:48.612 PCIE (0000:00:12.0) NSID 3 from core 2: 3222.22 12.59 4968.77 1081.76 13753.89 00:08:48.612 ======================================================== 00:08:48.612 Total : 19333.29 75.52 4965.37 907.66 17803.09 00:08:48.612 00:08:48.612 ************************************ 00:08:48.612 END TEST nvme_multi_secondary 00:08:48.612 ************************************ 00:08:48.612 13:43:02 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 63858 00:08:48.612 13:43:02 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 63859 00:08:48.612 00:08:48.612 real 0m10.686s 00:08:48.612 user 0m18.382s 00:08:48.612 sys 0m0.674s 00:08:48.612 13:43:02 nvme.nvme_multi_secondary -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:48.612 13:43:02 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:08:48.874 13:43:02 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:08:48.874 13:43:02 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:08:48.874 13:43:02 nvme -- common/autotest_common.sh@1089 -- # [[ -e /proc/62821 ]] 00:08:48.874 13:43:02 nvme -- common/autotest_common.sh@1090 -- # kill 62821 00:08:48.874 13:43:02 nvme -- common/autotest_common.sh@1091 -- # wait 62821 00:08:48.874 [2024-10-15 13:43:02.434911] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63731) is not found. Dropping the request. 00:08:48.874 [2024-10-15 13:43:02.435020] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63731) is not found. Dropping the request. 00:08:48.874 [2024-10-15 13:43:02.435064] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63731) is not found. Dropping the request. 00:08:48.874 [2024-10-15 13:43:02.435093] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63731) is not found. Dropping the request. 00:08:48.874 [2024-10-15 13:43:02.438198] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63731) is not found. Dropping the request. 00:08:48.874 [2024-10-15 13:43:02.438367] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63731) is not found. Dropping the request. 00:08:48.874 [2024-10-15 13:43:02.438383] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63731) is not found. Dropping the request. 00:08:48.874 [2024-10-15 13:43:02.438395] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63731) is not found. Dropping the request. 00:08:48.874 [2024-10-15 13:43:02.440658] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63731) is not found. Dropping the request. 
00:08:48.874 [2024-10-15 13:43:02.440708] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63731) is not found. Dropping the request. 00:08:48.874 [2024-10-15 13:43:02.440720] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63731) is not found. Dropping the request. 00:08:48.874 [2024-10-15 13:43:02.440732] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63731) is not found. Dropping the request. 00:08:48.874 [2024-10-15 13:43:02.443058] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63731) is not found. Dropping the request. 00:08:48.874 [2024-10-15 13:43:02.443113] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63731) is not found. Dropping the request. 00:08:48.874 [2024-10-15 13:43:02.443125] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63731) is not found. Dropping the request. 00:08:48.874 [2024-10-15 13:43:02.443136] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63731) is not found. Dropping the request. 00:08:48.874 13:43:02 nvme -- common/autotest_common.sh@1093 -- # rm -f /var/run/spdk_stub0 00:08:48.874 13:43:02 nvme -- common/autotest_common.sh@1097 -- # echo 2 00:08:48.874 13:43:02 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:08:48.874 13:43:02 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:48.874 13:43:02 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:48.874 13:43:02 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:48.874 ************************************ 00:08:48.874 START TEST bdev_nvme_reset_stuck_adm_cmd 00:08:48.874 ************************************ 00:08:48.874 13:43:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:08:48.874 * Looking for test storage... 
00:08:48.874 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:08:48.874 13:43:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:48.874 13:43:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1691 -- # lcov --version 00:08:48.874 13:43:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:49.137 13:43:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:49.137 13:43:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:49.137 13:43:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:49.137 13:43:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:49.137 13:43:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # IFS=.-: 00:08:49.137 13:43:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # read -ra ver1 00:08:49.137 13:43:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # IFS=.-: 00:08:49.137 13:43:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # read -ra ver2 00:08:49.137 13:43:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@338 -- # local 'op=<' 00:08:49.137 13:43:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@340 -- # ver1_l=2 00:08:49.137 13:43:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@341 -- # ver2_l=1 00:08:49.137 13:43:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:49.137 13:43:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@344 -- # case "$op" in 00:08:49.137 13:43:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@345 -- # : 1 00:08:49.137 13:43:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:49.137 13:43:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:49.137 13:43:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # decimal 1 00:08:49.137 13:43:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=1 00:08:49.137 13:43:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:49.137 13:43:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 1 00:08:49.137 13:43:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # ver1[v]=1 00:08:49.137 13:43:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # decimal 2 00:08:49.137 13:43:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=2 00:08:49.137 13:43:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:49.137 13:43:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 2 00:08:49.137 13:43:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # ver2[v]=2 00:08:49.137 13:43:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:49.137 13:43:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:49.137 13:43:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # return 0 00:08:49.137 13:43:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:49.137 13:43:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:49.137 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:49.137 --rc genhtml_branch_coverage=1 00:08:49.137 --rc genhtml_function_coverage=1 00:08:49.137 --rc genhtml_legend=1 00:08:49.137 --rc geninfo_all_blocks=1 00:08:49.137 --rc geninfo_unexecuted_blocks=1 00:08:49.137 00:08:49.137 ' 00:08:49.137 13:43:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:49.137 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:49.137 --rc genhtml_branch_coverage=1 00:08:49.137 --rc genhtml_function_coverage=1 00:08:49.137 --rc genhtml_legend=1 00:08:49.137 --rc geninfo_all_blocks=1 00:08:49.137 --rc geninfo_unexecuted_blocks=1 00:08:49.137 00:08:49.137 ' 00:08:49.137 13:43:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:49.137 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:49.137 --rc genhtml_branch_coverage=1 00:08:49.137 --rc genhtml_function_coverage=1 00:08:49.137 --rc genhtml_legend=1 00:08:49.137 --rc geninfo_all_blocks=1 00:08:49.137 --rc geninfo_unexecuted_blocks=1 00:08:49.137 00:08:49.137 ' 00:08:49.137 13:43:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:49.137 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:49.137 --rc genhtml_branch_coverage=1 00:08:49.137 --rc genhtml_function_coverage=1 00:08:49.137 --rc genhtml_legend=1 00:08:49.137 --rc geninfo_all_blocks=1 00:08:49.137 --rc geninfo_unexecuted_blocks=1 00:08:49.137 00:08:49.137 ' 00:08:49.137 13:43:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:08:49.137 13:43:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:08:49.137 13:43:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:08:49.137 
13:43:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:08:49.137 13:43:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:08:49.137 13:43:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:08:49.137 13:43:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1507 -- # bdfs=() 00:08:49.137 13:43:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1507 -- # local bdfs 00:08:49.137 13:43:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs)) 00:08:49.137 13:43:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1508 -- # get_nvme_bdfs 00:08:49.137 13:43:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1496 -- # bdfs=() 00:08:49.137 13:43:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1496 -- # local bdfs 00:08:49.137 13:43:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:49.137 13:43:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:49.137 13:43:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:08:49.137 13:43:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:08:49.137 13:43:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:49.137 13:43:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # echo 0000:00:10.0 00:08:49.137 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:49.137 13:43:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:08:49.137 13:43:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:08:49.137 13:43:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=64016 00:08:49.137 13:43:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:49.137 13:43:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:08:49.137 13:43:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 64016 00:08:49.137 13:43:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@831 -- # '[' -z 64016 ']' 00:08:49.137 13:43:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:49.137 13:43:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:49.137 13:43:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
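The admin-command test drives a full SPDK target: spdk_tgt is launched on four cores (-m 0xF) and the harness blocks in waitforlisten until the RPC server at /var/tmp/spdk.sock accepts requests. One way to approximate that wait, assuming rpc.py's -t timeout flag and the rpc_get_methods method (both part of stock SPDK):

  # Sketch of the launch/poll pattern behind 'waitforlisten 64016'.
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF &
  spdk_target_pid=$!
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 1 rpc_get_methods &>/dev/null; do
      sleep 0.1   # keep polling until the UNIX-domain RPC server is up
  done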
00:08:49.137 13:43:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:49.137 13:43:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:49.137 [2024-10-15 13:43:02.860475] Starting SPDK v25.01-pre git sha1 5a8c76d99 / DPDK 24.03.0 initialization... 00:08:49.137 [2024-10-15 13:43:02.860612] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64016 ] 00:08:49.399 [2024-10-15 13:43:03.025775] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:49.399 [2024-10-15 13:43:03.166091] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:49.399 [2024-10-15 13:43:03.166449] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:49.399 [2024-10-15 13:43:03.166823] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:49.399 [2024-10-15 13:43:03.166945] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:50.421 13:43:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:50.421 13:43:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@864 -- # return 0 00:08:50.421 13:43:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:08:50.421 13:43:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.421 13:43:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:50.421 nvme0n1 00:08:50.421 13:43:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.421 13:43:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:08:50.421 13:43:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_2cmMT.txt 00:08:50.421 13:43:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:08:50.421 13:43:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.421 13:43:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:50.421 true 00:08:50.421 13:43:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.421 13:43:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:08:50.421 13:43:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1728999783 00:08:50.421 13:43:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=64045 00:08:50.421 13:43:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:08:50.421 13:43:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:50.421 
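With the target up, the test arms a one-shot error injection on admin opcode 10 (Get Features) that holds the command for up to 15 s (--timeout-in-us 15000000) and, because of --do_not_submit, never forwards it to the device; the injected completion carries sct 0 / sc 1 (Generic / Invalid Opcode). bdev_nvme_send_cmd then issues the Get Features in the background so it sits stuck on the admin queue until the reset below completes it manually. Restated from the RPCs above; $GET_FEATURES_SQE_B64 is a placeholder for the base64 SQE shown in the trace:

  # The stuck-admin-command setup, as driven over rpc.py above.
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  "$RPC" bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 \
      --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit
  "$RPC" bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c "$GET_FEATURES_SQE_B64" &  # gets stuck
  sleep 2
  "$RPC" bdev_nvme_reset_controller nvme0   # reset must complete the stuck command manually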
13:43:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:08:52.340 13:43:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:08:52.340 13:43:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.340 13:43:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:52.340 [2024-10-15 13:43:05.916566] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:08:52.340 [2024-10-15 13:43:05.916829] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:08:52.340 [2024-10-15 13:43:05.916855] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:08:52.340 [2024-10-15 13:43:05.916870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:52.340 [2024-10-15 13:43:05.918864] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:08:52.340 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 64045 00:08:52.340 13:43:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.340 13:43:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 64045 00:08:52.340 13:43:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 64045 00:08:52.340 13:43:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:08:52.340 13:43:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:08:52.340 13:43:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:08:52.340 13:43:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.340 13:43:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:52.340 13:43:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.340 13:43:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:08:52.340 13:43:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_2cmMT.txt 00:08:52.340 13:43:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:08:52.340 13:43:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:08:52.340 13:43:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:08:52.340 13:43:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:08:52.340 13:43:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:08:52.340 13:43:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:08:52.340 13:43:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- 
nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:08:52.340 13:43:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:08:52.340 13:43:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:08:52.340 13:43:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:08:52.341 13:43:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:08:52.341 13:43:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:08:52.341 13:43:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:08:52.341 13:43:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:08:52.341 13:43:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:08:52.341 13:43:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:08:52.341 13:43:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:08:52.341 13:43:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:08:52.341 13:43:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:08:52.341 13:43:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_2cmMT.txt 00:08:52.341 13:43:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 64016 00:08:52.341 13:43:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@950 -- # '[' -z 64016 ']' 00:08:52.341 13:43:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # kill -0 64016 00:08:52.341 13:43:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@955 -- # uname 00:08:52.341 13:43:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:52.341 13:43:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 64016 00:08:52.341 killing process with pid 64016 00:08:52.341 13:43:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:52.341 13:43:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:52.341 13:43:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 64016' 00:08:52.341 13:43:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@969 -- # kill 64016 00:08:52.341 13:43:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@974 -- # wait 64016 00:08:54.259 13:43:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:08:54.259 13:43:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:08:54.259 ************************************ 00:08:54.259 END TEST bdev_nvme_reset_stuck_adm_cmd 00:08:54.259 ************************************ 00:08:54.259 00:08:54.259 real 0m4.954s 00:08:54.259 user 
0m17.518s 00:08:54.259 sys 0m0.538s 00:08:54.259 13:43:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:54.259 13:43:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:54.259 13:43:07 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:08:54.259 13:43:07 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:08:54.259 13:43:07 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:54.259 13:43:07 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:54.259 13:43:07 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:54.259 ************************************ 00:08:54.259 START TEST nvme_fio 00:08:54.259 ************************************ 00:08:54.259 13:43:07 nvme.nvme_fio -- common/autotest_common.sh@1125 -- # nvme_fio_test 00:08:54.259 13:43:07 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:08:54.259 13:43:07 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:08:54.259 13:43:07 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:08:54.259 13:43:07 nvme.nvme_fio -- common/autotest_common.sh@1496 -- # bdfs=() 00:08:54.259 13:43:07 nvme.nvme_fio -- common/autotest_common.sh@1496 -- # local bdfs 00:08:54.259 13:43:07 nvme.nvme_fio -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:54.259 13:43:07 nvme.nvme_fio -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:08:54.259 13:43:07 nvme.nvme_fio -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:54.259 13:43:07 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:08:54.259 13:43:07 nvme.nvme_fio -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:54.259 13:43:07 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:08:54.259 13:43:07 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:08:54.259 13:43:07 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:08:54.259 13:43:07 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:08:54.259 13:43:07 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:08:54.259 13:43:07 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:08:54.259 13:43:07 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:08:54.542 13:43:08 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:08:54.542 13:43:08 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:08:54.542 13:43:08 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:08:54.542 13:43:08 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:08:54.542 13:43:08 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:08:54.542 13:43:08 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:08:54.542 13:43:08 nvme.nvme_fio -- common/autotest_common.sh@1340 
-- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:54.542 13:43:08 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:08:54.542 13:43:08 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:08:54.542 13:43:08 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:08:54.542 13:43:08 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:08:54.542 13:43:08 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:54.542 13:43:08 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:08:54.542 13:43:08 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:08:54.542 13:43:08 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:08:54.542 13:43:08 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:08:54.542 13:43:08 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:08:54.542 13:43:08 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:08:54.542 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:08:54.542 fio-3.35 00:08:54.542 Starting 1 thread 00:08:59.906 00:08:59.906 test: (groupid=0, jobs=1): err= 0: pid=64190: Tue Oct 15 13:43:13 2024 00:08:59.906 read: IOPS=17.8k, BW=69.7MiB/s (73.1MB/s)(139MiB/2001msec) 00:08:59.906 slat (nsec): min=4217, max=81664, avg=6194.07, stdev=3029.06 00:08:59.906 clat (usec): min=665, max=10302, avg=3568.20, stdev=1284.41 00:08:59.906 lat (usec): min=676, max=10352, avg=3574.40, stdev=1285.64 00:08:59.906 clat percentiles (usec): 00:08:59.906 | 1.00th=[ 2147], 5.00th=[ 2376], 10.00th=[ 2474], 20.00th=[ 2638], 00:08:59.906 | 30.00th=[ 2737], 40.00th=[ 2868], 50.00th=[ 2999], 60.00th=[ 3228], 00:08:59.906 | 70.00th=[ 3752], 80.00th=[ 4686], 90.00th=[ 5669], 95.00th=[ 6259], 00:08:59.906 | 99.00th=[ 7373], 99.50th=[ 7570], 99.90th=[ 8979], 99.95th=[ 9372], 00:08:59.906 | 99.99th=[10290] 00:08:59.906 bw ( KiB/s): min=70096, max=74992, per=100.00%, avg=72232.00, stdev=2506.94, samples=3 00:08:59.906 iops : min=17526, max=18748, avg=18058.67, stdev=625.88, samples=3 00:08:59.906 write: IOPS=17.8k, BW=69.7MiB/s (73.1MB/s)(139MiB/2001msec); 0 zone resets 00:08:59.906 slat (nsec): min=4280, max=78717, avg=6452.73, stdev=2997.89 00:08:59.906 clat (usec): min=570, max=10243, avg=3581.86, stdev=1275.37 00:08:59.906 lat (usec): min=583, max=10254, avg=3588.32, stdev=1276.56 00:08:59.906 clat percentiles (usec): 00:08:59.906 | 1.00th=[ 2180], 5.00th=[ 2409], 10.00th=[ 2507], 20.00th=[ 2638], 00:08:59.906 | 30.00th=[ 2769], 40.00th=[ 2900], 50.00th=[ 3032], 60.00th=[ 3228], 00:08:59.906 | 70.00th=[ 3785], 80.00th=[ 4686], 90.00th=[ 5669], 95.00th=[ 6259], 00:08:59.906 | 99.00th=[ 7308], 99.50th=[ 7635], 99.90th=[ 8979], 99.95th=[ 9372], 00:08:59.906 | 99.99th=[10159] 00:08:59.906 bw ( KiB/s): min=70480, max=74864, per=100.00%, avg=72178.67, stdev=2352.66, samples=3 00:08:59.906 iops : min=17620, max=18716, avg=18044.67, stdev=588.16, samples=3 00:08:59.906 lat (usec) : 750=0.01%, 1000=0.01% 00:08:59.906 lat (msec) : 2=0.43%, 4=72.05%, 10=27.49%, 20=0.02% 00:08:59.906 cpu : usr=98.65%, sys=0.30%, ctx=22, majf=0, minf=607 00:08:59.906 IO depths : 
1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:08:59.906 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:59.906 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:59.906 issued rwts: total=35692,35692,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:59.906 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:59.906 00:08:59.906 Run status group 0 (all jobs): 00:08:59.906 READ: bw=69.7MiB/s (73.1MB/s), 69.7MiB/s-69.7MiB/s (73.1MB/s-73.1MB/s), io=139MiB (146MB), run=2001-2001msec 00:08:59.906 WRITE: bw=69.7MiB/s (73.1MB/s), 69.7MiB/s-69.7MiB/s (73.1MB/s-73.1MB/s), io=139MiB (146MB), run=2001-2001msec 00:08:59.906 ----------------------------------------------------- 00:08:59.906 Suppressions used: 00:08:59.906 count bytes template 00:08:59.906 1 32 /usr/src/fio/parse.c 00:08:59.906 1 8 libtcmalloc_minimal.so 00:08:59.906 ----------------------------------------------------- 00:08:59.906 00:08:59.906 13:43:13 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:08:59.906 13:43:13 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:08:59.906 13:43:13 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:08:59.906 13:43:13 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:00.168 13:43:13 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:09:00.168 13:43:13 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:00.431 13:43:13 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:00.431 13:43:13 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:09:00.431 13:43:13 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:09:00.431 13:43:13 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:09:00.431 13:43:13 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:00.431 13:43:13 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:09:00.431 13:43:13 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:00.431 13:43:13 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:09:00.431 13:43:13 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:09:00.431 13:43:13 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:09:00.431 13:43:13 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:00.431 13:43:13 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:09:00.431 13:43:13 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:09:00.431 13:43:14 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:00.431 13:43:14 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:00.431 13:43:14 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:09:00.431 13:43:14 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # 
LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:00.431 13:43:14 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:09:00.431 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:00.431 fio-3.35 00:09:00.431 Starting 1 thread 00:09:07.020 00:09:07.020 test: (groupid=0, jobs=1): err= 0: pid=64250: Tue Oct 15 13:43:20 2024 00:09:07.020 read: IOPS=21.5k, BW=83.8MiB/s (87.9MB/s)(168MiB/2001msec) 00:09:07.020 slat (nsec): min=3347, max=64627, avg=5276.35, stdev=2535.92 00:09:07.020 clat (usec): min=164, max=8520, avg=2974.97, stdev=1020.46 00:09:07.020 lat (usec): min=168, max=8566, avg=2980.24, stdev=1021.70 00:09:07.020 clat percentiles (usec): 00:09:07.020 | 1.00th=[ 1909], 5.00th=[ 2212], 10.00th=[ 2311], 20.00th=[ 2409], 00:09:07.020 | 30.00th=[ 2474], 40.00th=[ 2540], 50.00th=[ 2606], 60.00th=[ 2704], 00:09:07.020 | 70.00th=[ 2835], 80.00th=[ 3228], 90.00th=[ 4555], 95.00th=[ 5538], 00:09:07.020 | 99.00th=[ 6652], 99.50th=[ 7046], 99.90th=[ 7963], 99.95th=[ 8160], 00:09:07.020 | 99.99th=[ 8455] 00:09:07.020 bw ( KiB/s): min=75072, max=93152, per=98.59%, avg=84618.67, stdev=9082.50, samples=3 00:09:07.020 iops : min=18768, max=23288, avg=21154.67, stdev=2270.62, samples=3 00:09:07.021 write: IOPS=21.3k, BW=83.2MiB/s (87.2MB/s)(166MiB/2001msec); 0 zone resets 00:09:07.021 slat (nsec): min=3444, max=67309, avg=5517.90, stdev=2448.78 00:09:07.021 clat (usec): min=148, max=8465, avg=2990.44, stdev=1024.91 00:09:07.021 lat (usec): min=152, max=8470, avg=2995.95, stdev=1026.15 00:09:07.021 clat percentiles (usec): 00:09:07.021 | 1.00th=[ 1942], 5.00th=[ 2212], 10.00th=[ 2311], 20.00th=[ 2409], 00:09:07.021 | 30.00th=[ 2474], 40.00th=[ 2540], 50.00th=[ 2606], 60.00th=[ 2704], 00:09:07.021 | 70.00th=[ 2835], 80.00th=[ 3228], 90.00th=[ 4555], 95.00th=[ 5538], 00:09:07.021 | 99.00th=[ 6587], 99.50th=[ 7111], 99.90th=[ 7898], 99.95th=[ 8094], 00:09:07.021 | 99.99th=[ 8356] 00:09:07.021 bw ( KiB/s): min=76080, max=92872, per=99.48%, avg=84730.67, stdev=8407.58, samples=3 00:09:07.021 iops : min=19020, max=23218, avg=21182.67, stdev=2101.89, samples=3 00:09:07.021 lat (usec) : 250=0.01%, 500=0.02%, 750=0.02%, 1000=0.02% 00:09:07.021 lat (msec) : 2=1.38%, 4=85.67%, 10=12.88% 00:09:07.021 cpu : usr=99.10%, sys=0.00%, ctx=6, majf=0, minf=607 00:09:07.021 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:07.021 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:07.021 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:07.021 issued rwts: total=42934,42609,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:07.021 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:07.021 00:09:07.021 Run status group 0 (all jobs): 00:09:07.021 READ: bw=83.8MiB/s (87.9MB/s), 83.8MiB/s-83.8MiB/s (87.9MB/s-87.9MB/s), io=168MiB (176MB), run=2001-2001msec 00:09:07.021 WRITE: bw=83.2MiB/s (87.2MB/s), 83.2MiB/s-83.2MiB/s (87.2MB/s-87.2MB/s), io=166MiB (175MB), run=2001-2001msec 00:09:07.021 ----------------------------------------------------- 00:09:07.021 Suppressions used: 00:09:07.021 count bytes template 00:09:07.021 1 32 /usr/src/fio/parse.c 00:09:07.021 1 8 libtcmalloc_minimal.so 00:09:07.021 ----------------------------------------------------- 00:09:07.021 00:09:07.021 13:43:20 nvme.nvme_fio -- 
nvme/nvme.sh@44 -- # ran_fio=true 00:09:07.021 13:43:20 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:07.021 13:43:20 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:09:07.021 13:43:20 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:07.021 13:43:20 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:09:07.021 13:43:20 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:07.021 13:43:20 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:07.021 13:43:20 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:09:07.021 13:43:20 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:09:07.021 13:43:20 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:09:07.021 13:43:20 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:07.021 13:43:20 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:09:07.021 13:43:20 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:07.021 13:43:20 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:09:07.021 13:43:20 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:09:07.021 13:43:20 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:09:07.021 13:43:20 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:07.021 13:43:20 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:09:07.021 13:43:20 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:09:07.021 13:43:20 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:07.021 13:43:20 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:07.021 13:43:20 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:09:07.021 13:43:20 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:07.021 13:43:20 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:09:07.282 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:07.282 fio-3.35 00:09:07.282 Starting 1 thread 00:09:12.646 00:09:12.646 test: (groupid=0, jobs=1): err= 0: pid=64312: Tue Oct 15 13:43:25 2024 00:09:12.646 read: IOPS=14.1k, BW=54.9MiB/s (57.6MB/s)(110MiB/2001msec) 00:09:12.646 slat (nsec): min=4829, max=82209, avg=7201.35, stdev=3902.24 00:09:12.646 clat (usec): min=299, max=12113, avg=4519.78, stdev=1375.04 00:09:12.646 lat (usec): min=306, max=12120, avg=4526.98, stdev=1376.24 00:09:12.646 clat percentiles (usec): 00:09:12.646 | 1.00th=[ 2606], 5.00th=[ 2933], 10.00th=[ 3130], 20.00th=[ 3392], 00:09:12.646 | 30.00th=[ 3654], 40.00th=[ 3884], 50.00th=[ 
4146], 60.00th=[ 4490], 00:09:12.646 | 70.00th=[ 5014], 80.00th=[ 5669], 90.00th=[ 6456], 95.00th=[ 7177], 00:09:12.646 | 99.00th=[ 8717], 99.50th=[ 9372], 99.90th=[10814], 99.95th=[10945], 00:09:12.646 | 99.99th=[11994] 00:09:12.646 bw ( KiB/s): min=55456, max=60982, per=100.00%, avg=57426.00, stdev=3085.57, samples=3 00:09:12.646 iops : min=13864, max=15245, avg=14356.33, stdev=771.10, samples=3 00:09:12.646 write: IOPS=14.1k, BW=54.9MiB/s (57.6MB/s)(110MiB/2001msec); 0 zone resets 00:09:12.646 slat (usec): min=5, max=126, avg= 7.54, stdev= 3.90 00:09:12.646 clat (usec): min=353, max=11973, avg=4541.05, stdev=1374.81 00:09:12.646 lat (usec): min=359, max=11980, avg=4548.59, stdev=1375.94 00:09:12.646 clat percentiles (usec): 00:09:12.646 | 1.00th=[ 2638], 5.00th=[ 2966], 10.00th=[ 3130], 20.00th=[ 3425], 00:09:12.646 | 30.00th=[ 3654], 40.00th=[ 3884], 50.00th=[ 4146], 60.00th=[ 4490], 00:09:12.646 | 70.00th=[ 5014], 80.00th=[ 5669], 90.00th=[ 6521], 95.00th=[ 7177], 00:09:12.646 | 99.00th=[ 8848], 99.50th=[ 9372], 99.90th=[10814], 99.95th=[11076], 00:09:12.646 | 99.99th=[11469] 00:09:12.646 bw ( KiB/s): min=55016, max=61309, per=100.00%, avg=57319.00, stdev=3469.14, samples=3 00:09:12.646 iops : min=13754, max=15327, avg=14329.67, stdev=867.14, samples=3 00:09:12.646 lat (usec) : 500=0.03%, 750=0.01%, 1000=0.01% 00:09:12.646 lat (msec) : 2=0.07%, 4=44.64%, 10=54.98%, 20=0.26% 00:09:12.646 cpu : usr=98.20%, sys=0.25%, ctx=21, majf=0, minf=607 00:09:12.646 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:12.646 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:12.646 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:12.646 issued rwts: total=28147,28144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:12.646 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:12.646 00:09:12.646 Run status group 0 (all jobs): 00:09:12.646 READ: bw=54.9MiB/s (57.6MB/s), 54.9MiB/s-54.9MiB/s (57.6MB/s-57.6MB/s), io=110MiB (115MB), run=2001-2001msec 00:09:12.646 WRITE: bw=54.9MiB/s (57.6MB/s), 54.9MiB/s-54.9MiB/s (57.6MB/s-57.6MB/s), io=110MiB (115MB), run=2001-2001msec 00:09:12.646 ----------------------------------------------------- 00:09:12.646 Suppressions used: 00:09:12.646 count bytes template 00:09:12.646 1 32 /usr/src/fio/parse.c 00:09:12.646 1 8 libtcmalloc_minimal.so 00:09:12.646 ----------------------------------------------------- 00:09:12.646 00:09:12.646 13:43:25 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:12.646 13:43:25 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:12.646 13:43:25 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:09:12.646 13:43:25 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:12.646 13:43:25 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:12.646 13:43:25 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:09:12.646 13:43:26 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:12.646 13:43:26 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:12.646 13:43:26 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 
/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:12.646 13:43:26 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:09:12.646 13:43:26 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:12.646 13:43:26 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:09:12.646 13:43:26 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:12.646 13:43:26 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:09:12.646 13:43:26 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:09:12.646 13:43:26 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:09:12.646 13:43:26 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:09:12.646 13:43:26 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:09:12.646 13:43:26 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:12.646 13:43:26 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:12.646 13:43:26 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:12.646 13:43:26 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:09:12.646 13:43:26 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:12.646 13:43:26 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:12.646 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:12.646 fio-3.35 00:09:12.646 Starting 1 thread 00:09:20.794 00:09:20.794 test: (groupid=0, jobs=1): err= 0: pid=64373: Tue Oct 15 13:43:33 2024 00:09:20.794 read: IOPS=15.7k, BW=61.2MiB/s (64.2MB/s)(123MiB/2001msec) 00:09:20.794 slat (usec): min=4, max=503, avg= 6.85, stdev= 4.30 00:09:20.794 clat (usec): min=483, max=17223, avg=4044.97, stdev=1329.63 00:09:20.794 lat (usec): min=488, max=17229, avg=4051.81, stdev=1330.83 00:09:20.794 clat percentiles (usec): 00:09:20.794 | 1.00th=[ 2442], 5.00th=[ 2704], 10.00th=[ 2835], 20.00th=[ 3032], 00:09:20.794 | 30.00th=[ 3195], 40.00th=[ 3359], 50.00th=[ 3589], 60.00th=[ 3884], 00:09:20.794 | 70.00th=[ 4359], 80.00th=[ 5145], 90.00th=[ 5997], 95.00th=[ 6652], 00:09:20.794 | 99.00th=[ 8029], 99.50th=[ 8717], 99.90th=[11863], 99.95th=[14746], 00:09:20.794 | 99.99th=[16450] 00:09:20.794 bw ( KiB/s): min=54848, max=70928, per=98.18%, avg=61570.67, stdev=8357.49, samples=3 00:09:20.794 iops : min=13712, max=17732, avg=15392.67, stdev=2089.37, samples=3 00:09:20.794 write: IOPS=15.7k, BW=61.3MiB/s (64.3MB/s)(123MiB/2001msec); 0 zone resets 00:09:20.794 slat (nsec): min=4699, max=77389, avg=7135.87, stdev=3354.54 00:09:20.794 clat (usec): min=399, max=21398, avg=4086.99, stdev=1390.74 00:09:20.794 lat (usec): min=405, max=21404, avg=4094.12, stdev=1391.86 00:09:20.794 clat percentiles (usec): 00:09:20.794 | 1.00th=[ 2474], 5.00th=[ 2704], 10.00th=[ 2868], 20.00th=[ 3032], 00:09:20.794 | 30.00th=[ 3195], 40.00th=[ 3392], 50.00th=[ 3621], 60.00th=[ 3916], 00:09:20.794 | 70.00th=[ 4424], 80.00th=[ 5211], 90.00th=[ 5997], 95.00th=[ 6652], 00:09:20.794 
| 99.00th=[ 8225], 99.50th=[ 8848], 99.90th=[14877], 99.95th=[17171], 00:09:20.794 | 99.99th=[20841] 00:09:20.794 bw ( KiB/s): min=54424, max=69696, per=97.40%, avg=61138.67, stdev=7800.97, samples=3 00:09:20.794 iops : min=13606, max=17424, avg=15284.67, stdev=1950.24, samples=3 00:09:20.794 lat (usec) : 500=0.01%, 750=0.02%, 1000=0.02% 00:09:20.794 lat (msec) : 2=0.08%, 4=62.28%, 10=37.36%, 20=0.21%, 50=0.02% 00:09:20.794 cpu : usr=98.70%, sys=0.10%, ctx=3, majf=0, minf=606 00:09:20.794 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:20.794 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:20.794 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:20.794 issued rwts: total=31371,31402,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:20.794 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:20.794 00:09:20.794 Run status group 0 (all jobs): 00:09:20.794 READ: bw=61.2MiB/s (64.2MB/s), 61.2MiB/s-61.2MiB/s (64.2MB/s-64.2MB/s), io=123MiB (128MB), run=2001-2001msec 00:09:20.794 WRITE: bw=61.3MiB/s (64.3MB/s), 61.3MiB/s-61.3MiB/s (64.3MB/s-64.3MB/s), io=123MiB (129MB), run=2001-2001msec 00:09:20.794 ----------------------------------------------------- 00:09:20.794 Suppressions used: 00:09:20.794 count bytes template 00:09:20.794 1 32 /usr/src/fio/parse.c 00:09:20.794 1 8 libtcmalloc_minimal.so 00:09:20.794 ----------------------------------------------------- 00:09:20.794 00:09:20.794 13:43:34 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:20.794 13:43:34 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:09:20.794 00:09:20.794 real 0m26.531s 00:09:20.794 user 0m16.871s 00:09:20.794 sys 0m16.682s 00:09:20.794 ************************************ 00:09:20.794 END TEST nvme_fio 00:09:20.794 ************************************ 00:09:20.794 13:43:34 nvme.nvme_fio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:20.794 13:43:34 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:09:20.794 ************************************ 00:09:20.794 END TEST nvme 00:09:20.794 ************************************ 00:09:20.794 00:09:20.794 real 1m35.625s 00:09:20.794 user 3m37.479s 00:09:20.794 sys 0m27.223s 00:09:20.794 13:43:34 nvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:20.794 13:43:34 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:20.794 13:43:34 -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]] 00:09:20.794 13:43:34 -- spdk/autotest.sh@217 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:09:20.794 13:43:34 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:20.794 13:43:34 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:20.794 13:43:34 -- common/autotest_common.sh@10 -- # set +x 00:09:20.794 ************************************ 00:09:20.794 START TEST nvme_scc 00:09:20.794 ************************************ 00:09:20.794 13:43:34 nvme_scc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:09:20.794 * Looking for test storage... 
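[Editor's note] The nvme_fio runs above all follow the same shape: spdk_nvme_identify output is grepped for a namespace and for 'Extended Data LBA', a block size is picked, and fio is launched with the SPDK ioengine preloaded next to whichever sanitizer runtime the plugin links against (the ldd | grep libasan | awk chain picks out /usr/lib64/libasan.so.8, and the sanitizer must come first in LD_PRELOAD or ASan refuses to start). A minimal standalone sketch of that invocation, with paths taken from this log:

    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
    config=/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio
    # Find the ASan runtime the plugin was linked against (empty for non-ASan builds).
    asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
    # Sanitizer first, plugin second; fio then resolves ioengine=spdk from the preload.
    LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio "$config" \
        '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096

Note the filename syntax: the SPDK plugin packs the transport and PCI address into --filename, with the colons of 0000:00:12.0 turned into dots because fio reserves ':' in filenames. The numbers above are also self-consistent: 14.1k IOPS x 4096 B per op is roughly 57.6 MB/s, exactly the bandwidth fio reports.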
00:09:20.794 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:20.794 13:43:34 nvme_scc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:20.794 13:43:34 nvme_scc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:20.794 13:43:34 nvme_scc -- common/autotest_common.sh@1691 -- # lcov --version 00:09:20.794 13:43:34 nvme_scc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:20.794 13:43:34 nvme_scc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:20.794 13:43:34 nvme_scc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:20.794 13:43:34 nvme_scc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:20.794 13:43:34 nvme_scc -- scripts/common.sh@336 -- # IFS=.-: 00:09:20.794 13:43:34 nvme_scc -- scripts/common.sh@336 -- # read -ra ver1 00:09:20.794 13:43:34 nvme_scc -- scripts/common.sh@337 -- # IFS=.-: 00:09:20.794 13:43:34 nvme_scc -- scripts/common.sh@337 -- # read -ra ver2 00:09:20.794 13:43:34 nvme_scc -- scripts/common.sh@338 -- # local 'op=<' 00:09:20.794 13:43:34 nvme_scc -- scripts/common.sh@340 -- # ver1_l=2 00:09:20.794 13:43:34 nvme_scc -- scripts/common.sh@341 -- # ver2_l=1 00:09:20.794 13:43:34 nvme_scc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:20.794 13:43:34 nvme_scc -- scripts/common.sh@344 -- # case "$op" in 00:09:20.794 13:43:34 nvme_scc -- scripts/common.sh@345 -- # : 1 00:09:20.794 13:43:34 nvme_scc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:20.794 13:43:34 nvme_scc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:20.794 13:43:34 nvme_scc -- scripts/common.sh@365 -- # decimal 1 00:09:20.794 13:43:34 nvme_scc -- scripts/common.sh@353 -- # local d=1 00:09:20.794 13:43:34 nvme_scc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:20.794 13:43:34 nvme_scc -- scripts/common.sh@355 -- # echo 1 00:09:20.795 13:43:34 nvme_scc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:20.795 13:43:34 nvme_scc -- scripts/common.sh@366 -- # decimal 2 00:09:20.795 13:43:34 nvme_scc -- scripts/common.sh@353 -- # local d=2 00:09:20.795 13:43:34 nvme_scc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:20.795 13:43:34 nvme_scc -- scripts/common.sh@355 -- # echo 2 00:09:20.795 13:43:34 nvme_scc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:20.795 13:43:34 nvme_scc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:20.795 13:43:34 nvme_scc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:20.795 13:43:34 nvme_scc -- scripts/common.sh@368 -- # return 0 00:09:20.795 13:43:34 nvme_scc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:20.795 13:43:34 nvme_scc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:20.795 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:20.795 --rc genhtml_branch_coverage=1 00:09:20.795 --rc genhtml_function_coverage=1 00:09:20.795 --rc genhtml_legend=1 00:09:20.795 --rc geninfo_all_blocks=1 00:09:20.795 --rc geninfo_unexecuted_blocks=1 00:09:20.795 00:09:20.795 ' 00:09:20.795 13:43:34 nvme_scc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:20.795 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:20.795 --rc genhtml_branch_coverage=1 00:09:20.795 --rc genhtml_function_coverage=1 00:09:20.795 --rc genhtml_legend=1 00:09:20.795 --rc geninfo_all_blocks=1 00:09:20.795 --rc geninfo_unexecuted_blocks=1 00:09:20.795 00:09:20.795 ' 00:09:20.795 13:43:34 nvme_scc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 
00:09:20.795 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:20.795 --rc genhtml_branch_coverage=1 00:09:20.795 --rc genhtml_function_coverage=1 00:09:20.795 --rc genhtml_legend=1 00:09:20.795 --rc geninfo_all_blocks=1 00:09:20.795 --rc geninfo_unexecuted_blocks=1 00:09:20.795 00:09:20.795 ' 00:09:20.795 13:43:34 nvme_scc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:20.795 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:20.795 --rc genhtml_branch_coverage=1 00:09:20.795 --rc genhtml_function_coverage=1 00:09:20.795 --rc genhtml_legend=1 00:09:20.795 --rc geninfo_all_blocks=1 00:09:20.795 --rc geninfo_unexecuted_blocks=1 00:09:20.795 00:09:20.795 ' 00:09:20.795 13:43:34 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:20.795 13:43:34 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:20.795 13:43:34 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:09:20.795 13:43:34 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:09:20.795 13:43:34 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:20.795 13:43:34 nvme_scc -- scripts/common.sh@15 -- # shopt -s extglob 00:09:20.795 13:43:34 nvme_scc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:20.795 13:43:34 nvme_scc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:20.795 13:43:34 nvme_scc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:20.795 13:43:34 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:20.795 13:43:34 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:20.795 13:43:34 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:20.795 13:43:34 nvme_scc -- paths/export.sh@5 -- # export PATH 00:09:20.795 13:43:34 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
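[Editor's note] Before the LCOV_OPTS exports above are emitted, scripts/common.sh runs the version gate traced in this block: 'lt 1.15 2' splits both version strings on '.', '-' and ':' (the IFS=.-: / read -ra ver1 pairs), then walks the fields left to right until one side wins, treating missing fields as zero. A simplified sketch of that comparison, assuming plain numeric fields:

    # Simplified component-wise "ver1 < ver2" test, mirroring the cmp_versions trace.
    version_lt() {
        local IFS=.-:                     # same separators as the read -ra calls above
        local -a a=($1) b=($2)
        local i max=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < max; i++ )); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # first lower field: strictly older
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1   # first higher field: not older
        done
        return 1                          # equal versions are not "less than"
    }
    version_lt 1.15 2 && echo "lcov predates 2.x"       # decided by 1 < 2 on the first field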
00:09:20.795 13:43:34 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:09:20.795 13:43:34 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:09:20.795 13:43:34 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:09:20.795 13:43:34 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:09:20.795 13:43:34 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:09:20.795 13:43:34 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:09:20.795 13:43:34 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:09:20.795 13:43:34 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:09:20.795 13:43:34 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:09:20.795 13:43:34 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:20.795 13:43:34 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:09:20.795 13:43:34 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:09:20.795 13:43:34 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:09:20.795 13:43:34 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:21.056 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:21.314 Waiting for block devices as requested 00:09:21.314 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:21.314 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:21.314 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:21.573 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:26.853 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:26.853 13:43:40 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:09:26.853 13:43:40 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:09:26.853 13:43:40 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:26.853 13:43:40 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:09:26.853 13:43:40 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:09:26.853 13:43:40 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:09:26.853 13:43:40 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:26.853 13:43:40 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:26.853 13:43:40 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:26.853 13:43:40 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:26.853 13:43:40 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:09:26.853 13:43:40 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:09:26.853 13:43:40 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:09:26.853 13:43:40 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:26.853 13:43:40 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:09:26.853 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.853 13:43:40 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:09:26.853 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.853 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:26.853 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.853 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.853 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:26.853 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:09:26.853 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 
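[Editor's note] Everything that follows is nvme_get from test/common/nvme/functions.sh flattening nvme-cli output into bash associative arrays: each "name : value" line of id-ctrl is split on ':' (the IFS=: / read -r reg val pairs repeated below), the pieces are trimmed, and the result is eval'd into nvme0[reg]=val. A reduced sketch of that parse loop, minus the eval and shift plumbing the real helper uses:

    # Reduced sketch of the id-ctrl scrape traced below.
    declare -A ctrl
    while IFS=: read -r reg val; do
        [[ -n $val ]] || continue          # skip banner lines with no "name : value" pair
        reg=${reg//[[:space:]]/}           # drop the column padding around the name
        ctrl[$reg]=${val# }                # drop the single space after the colon
    done < <(/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0)
    echo "mn=${ctrl[mn]} mdts=${ctrl[mdts]} oncs=${ctrl[oncs]}"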
00:09:26.853 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.853 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.853 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:26.853 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:09:26.853 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:09:26.853 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.853 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.853 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:09:26.853 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:09:26.853 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:09:26.853 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.853 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.853 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:26.853 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:09:26.853 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:09:26.853 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.853 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.853 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:26.853 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:09:26.853 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:09:26.853 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.853 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.853 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:26.853 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:09:26.853 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:09:26.853 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.853 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.853 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:26.853 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:09:26.853 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:09:26.853 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.853 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.853 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.853 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:09:26.853 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:09:26.853 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.853 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.853 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:26.853 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:09:26.853 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:09:26.853 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.853 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.853 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.853 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:09:26.853 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:09:26.853 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.853 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:09:26.853 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:26.853 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:09:26.853 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:09:26.853 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.853 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.853 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.853 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:09:26.853 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:09:26.853 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.853 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.853 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.853 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:09:26.853 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:09:26.853 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.853 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.853 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:26.853 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:09:26.853 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:09:26.853 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.853 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.853 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:26.853 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:09:26.853 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:09:26.853 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.853 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.853 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.853 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:09:26.853 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:09:26.853 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.853 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.853 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:26.853 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:09:26.853 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:09:26.853 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.853 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.853 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:26.853 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:26.853 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:09:26.853 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.853 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.853 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.853 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:09:26.853 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:09:26.853 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.853 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.853 13:43:40 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.853 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:09:26.853 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:09:26.853 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.853 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.853 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.853 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:09:26.853 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:09:26.853 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.853 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.854 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.854 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:09:26.854 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:09:26.854 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.854 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.854 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.854 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:09:26.854 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:09:26.854 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.854 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.854 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.854 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:09:26.854 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:09:26.854 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.854 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.854 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:26.854 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:09:26.854 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:09:26.854 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.854 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.854 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:26.854 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:09:26.854 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:09:26.854 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.854 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.854 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:26.854 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:09:26.854 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:09:26.854 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.854 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.854 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:26.854 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:09:26.854 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:09:26.854 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.854 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.854 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:26.854 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:09:26.854 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 
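[Editor's note] A few of the values captured so far are worth decoding. The mdts=7 above is not bytes: per the NVMe base spec, Maximum Data Transfer Size is a power-of-two multiple of the controller's minimum memory page size (CAP.MPSMIN). With the 4 KiB minimum page typical of this QEMU controller, the per-I/O ceiling works out as below; the 4 KiB figure is an assumption, since the log does not print CAP.

    # Max transfer per I/O from MDTS; CAP.MPSMIN = 4 KiB is assumed, not read from the log.
    mdts=7; mpsmin=4096
    echo $(( (1 << mdts) * mpsmin ))   # 524288 bytes, i.e. one I/O may span up to 512 KiB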
00:09:26.854 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.854 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.854 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.854 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:09:26.854 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:09:26.854 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.854 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.854 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.854 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:09:26.854 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:09:26.854 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.854 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.854 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.854 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:09:26.854 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:09:26.854 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.854 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.854 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.854 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:09:26.854 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:09:26.854 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.854 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.854 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:26.854 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:09:26.854 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:09:26.854 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.854 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.854 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:26.854 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:09:26.854 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:09:26.854 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.854 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.854 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.854 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:09:26.854 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:09:26.854 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.854 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.854 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.854 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:09:26.854 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:09:26.854 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.854 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.854 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.854 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:09:26.854 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:09:26.854 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.854 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.854 13:43:40 nvme_scc -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:09:26.854 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:09:26.854 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:09:26.854 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.854 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.854 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.854 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:09:26.854 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:09:26.854 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.854 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.854 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.854 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:09:26.854 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:09:26.854 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.854 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.854 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.854 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:09:26.854 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:09:26.854 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.854 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.854 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.854 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:09:26.854 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:09:26.854 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.854 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.854 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.854 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:09:26.854 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:09:26.854 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.854 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.854 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.854 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:09:26.854 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:09:26.854 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.854 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.854 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.854 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:09:26.854 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:09:26.854 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.854 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.854 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.854 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:09:26.854 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:09:26.854 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.854 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.854 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.854 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:09:26.854 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:09:26.854 13:43:40 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:09:26.854 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.854 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.854 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:09:26.854 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:09:26.854 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.854 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.854 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.854 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:09:26.854 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:09:26.854 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.854 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.854 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.854 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:09:26.854 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:09:26.854 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.854 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.854 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.854 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:09:26.854 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:09:26.854 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.854 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.854 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.854 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:09:26.854 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:09:26.854 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.854 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.854 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.854 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:09:26.854 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:09:26.854 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.854 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.854 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.854 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:09:26.854 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:09:26.854 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.855 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.855 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.855 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:09:26.855 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:09:26.855 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.855 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.855 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.855 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:09:26.855 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:09:26.855 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.855 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.855 13:43:40 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.855 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:09:26.855 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:09:26.855 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.855 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.855 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.855 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:09:26.855 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:09:26.855 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.855 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.855 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.855 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:09:26.855 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:09:26.855 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.855 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.855 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:26.855 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:09:26.855 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:09:26.855 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.855 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.855 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:26.855 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:09:26.855 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:09:26.855 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.855 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.855 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.855 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:09:26.855 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:09:26.855 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.855 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.855 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:26.855 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:09:26.855 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:09:26.855 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.855 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.855 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:26.855 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:09:26.855 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:09:26.855 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.855 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.855 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.855 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:09:26.855 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:09:26.855 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.855 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.855 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.855 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:09:26.855 13:43:40 nvme_scc -- nvme/functions.sh@23 
-- # nvme0[fna]=0 00:09:26.855 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.855 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.855 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:26.855 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:09:26.855 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:09:26.855 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.855 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.855 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.855 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:09:26.855 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:09:26.855 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.855 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.855 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.855 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:09:26.855 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:09:26.855 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.855 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.855 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.855 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:09:26.855 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:09:26.855 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.855 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.855 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.855 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:09:26.855 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:09:26.855 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.855 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.855 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.855 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:09:26.855 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:09:26.855 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.855 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.855 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:26.855 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:09:26.855 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:09:26.855 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.855 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.855 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:26.855 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:09:26.855 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:09:26.855 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.855 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.855 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.855 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:09:26.855 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:09:26.855 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.855 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.855 13:43:40 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.855 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:09:26.855 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:09:26.855 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.855 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.855 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.855 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:09:26.855 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:09:26.855 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.855 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.855 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:09:26.855 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:09:26.855 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:09:26.855 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.855 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.855 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.855 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:09:26.855 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:09:26.855 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.855 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.855 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.855 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:09:26.855 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:09:26.855 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.855 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.855 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.855 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:09:26.855 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:09:26.855 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.855 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.855 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.855 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:09:26.855 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:09:26.855 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.855 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.855 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.855 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:09:26.855 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:09:26.855 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.855 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.855 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.855 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:09:26.855 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:09:26.855 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.855 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.855 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:26.855 13:43:40 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:26.855 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:26.855 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.855 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.855 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:26.855 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:26.855 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:26.855 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.855 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.855 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:26.855 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:09:26.855 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:09:26.856 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.856 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.856 13:43:40 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:09:26.856 13:43:40 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:26.856 13:43:40 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:09:26.856 13:43:40 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:09:26.856 13:43:40 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:09:26.856 13:43:40 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:09:26.856 13:43:40 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:26.856 13:43:40 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:09:26.856 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.856 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.856 13:43:40 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:09:26.856 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:26.856 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.856 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.856 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:26.856 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:09:26.856 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:09:26.856 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.856 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.856 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:26.856 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:09:26.856 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:09:26.856 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.856 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.856 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:26.856 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:09:26.856 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:09:26.856 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.856 13:43:40 nvme_scc -- nvme/functions.sh@21 -- 
# read -r reg val 00:09:26.856 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:26.856 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:09:26.856 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:09:26.856 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.856 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.856 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:26.856 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:09:26.856 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:09:26.856 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.856 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.856 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:26.856 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:09:26.856 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:09:26.856 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.856 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.856 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:26.856 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:09:26.856 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:09:26.856 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.856 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.856 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:26.856 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:09:26.856 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:09:26.856 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.856 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.856 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.856 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:09:26.856 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:09:26.856 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.856 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.856 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.856 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:09:26.856 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:09:26.856 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.856 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.856 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.856 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:09:26.856 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:09:26.856 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.856 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.856 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.856 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:09:26.856 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:09:26.856 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.856 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.856 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:26.856 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme0n1[dlfeat]="1"' 00:09:26.856 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:09:26.856 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.856 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.856 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.856 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:09:26.856 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:09:26.856 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.856 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.856 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.856 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:09:26.856 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:09:26.856 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.856 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.856 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.856 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:09:26.856 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:09:26.856 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.856 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.856 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.856 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:09:26.856 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:09:26.856 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.856 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.856 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.856 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:09:26.856 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:09:26.856 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.856 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.856 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.856 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:09:26.856 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:09:26.856 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.856 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.856 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.856 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:09:26.856 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:09:26.856 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.856 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.856 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.856 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:09:26.856 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:09:26.856 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.856 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.856 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.856 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:09:26.856 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:09:26.856 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
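[Editor's note] The oncs=0x15d captured above is the Optional NVM Command Support bitmask. Reading 0x15d = 0b1_0101_1101 against the spec's bit assignments gives Compare (bit 0), Dataset Management (bit 2), Write Zeroes (bit 3), Save/Select in Set/Get Features (bit 4), Timestamp (bit 6), and Copy (bit 8); the Copy bit is presumably what this nvme_scc (simple copy) suite gates on. A quick decode in the same shell idiom:

    # Decode ONCS from the scan above; bit positions follow the NVMe base spec.
    oncs=0x15d
    (( oncs & (1 << 0) )) && echo "Compare"
    (( oncs & (1 << 2) )) && echo "Dataset Management"
    (( oncs & (1 << 3) )) && echo "Write Zeroes"
    (( oncs & (1 << 4) )) && echo "Save/Select in Set/Get Features"
    (( oncs & (1 << 6) )) && echo "Timestamp"
    (( oncs & (1 << 8) )) && echo "Copy"   # presumably what TEST nvme_scc looks for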
00:09:26.856 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.856 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.856 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:09:26.856 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:09:26.856 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.856 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.856 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.856 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:09:26.856 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:09:26.856 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.856 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.856 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.856 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:09:26.856 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:09:26.856 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.856 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.856 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.856 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:09:26.856 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:09:26.856 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.856 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.856 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:26.856 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:09:26.856 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:09:26.856 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.856 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.856 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:26.856 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:09:26.856 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:09:26.856 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.856 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.856 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:26.856 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:09:26.856 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:09:26.856 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.856 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.856 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.857 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:09:26.857 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:09:26.857 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.857 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.857 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.857 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:09:26.857 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:09:26.857 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.857 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.857 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
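
[annotation] The values captured this way are plain strings, so consumers test feature bits with shell arithmetic. Taking the nsfeat=0x14 captured at the start of this namespace dump (bits 2 and 4 set): per the NVMe Identify Namespace data structure, bit 0 is THINP (thin provisioning) and bit 4 is OPTPERF, which makes the npwg/npwa/npdg/npda/nows fields above meaningful — worth double-checking the bit positions against the spec revision in use:

    # Bit tests over the captured value; bash arithmetic accepts 0x literals.
    nsfeat=0x14                               # value captured above
    (( nsfeat & (1 << 0) )) && echo "THINP: thin provisioning supported"
    (( nsfeat & (1 << 4) )) && echo "OPTPERF: npwg/npwa/npdg/npda/nows are valid"
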
00:09:26.857 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:09:26.857 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:09:26.857 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.857 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.857 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.857 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:09:26.857 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:09:26.857 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.857 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.857 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.857 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:09:26.857 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:09:26.857 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.857 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.857 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:26.857 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:09:26.857 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:09:26.857 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.857 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.857 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:26.857 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:09:26.857 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:09:26.857 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.857 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.857 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:26.857 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:26.857 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:26.857 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.857 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.857 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:26.857 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:26.857 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:26.857 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.857 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.857 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:26.857 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:26.857 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:26.857 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.857 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.857 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:26.857 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:26.857 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:26.857 13:43:40 
nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.857 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.857 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:26.857 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:26.857 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:26.857 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.857 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.857 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:26.857 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:26.857 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:26.857 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.857 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.857 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:26.857 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:26.857 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:26.857 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.857 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.857 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:26.857 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:26.857 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:26.857 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.857 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.857 13:43:40 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:09:26.857 13:43:40 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:09:26.857 13:43:40 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:09:26.857 13:43:40 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:09:26.857 13:43:40 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:09:26.857 13:43:40 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:26.857 13:43:40 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:09:26.857 13:43:40 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:09:26.857 13:43:40 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:09:26.857 13:43:40 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:26.857 13:43:40 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:09:26.857 13:43:40 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:26.857 13:43:40 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:26.857 13:43:40 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:09:26.857 13:43:40 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:09:26.857 13:43:40 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:09:26.857 13:43:40 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:26.857 13:43:40 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:09:26.857 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.857 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.857 13:43:40 nvme_scc -- 
nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:09:26.857 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:26.857 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.857 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.857 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:26.857 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:09:26.857 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:09:26.857 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.857 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.857 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:26.857 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:09:26.857 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:09:26.857 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.857 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.857 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:09:26.857 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:09:26.857 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:09:26.857 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.857 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.857 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:26.857 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:09:26.857 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:09:26.857 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.857 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.857 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:26.857 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:09:26.857 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:09:26.857 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.857 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.857 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:26.857 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:09:26.857 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:09:26.857 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.857 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.857 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:26.857 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:09:26.858 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:09:26.858 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.858 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.858 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.858 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:09:26.858 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:09:26.858 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.858 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.858 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:26.858 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:09:26.858 13:43:40 nvme_scc -- 
nvme/functions.sh@23 -- # nvme1[mdts]=7 00:09:26.858 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.858 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.858 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.858 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:09:26.858 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:09:26.858 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.858 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.858 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:26.858 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:09:26.858 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:09:26.858 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.858 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.858 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.858 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:09:26.858 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:09:26.858 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.858 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.858 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.858 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:09:26.858 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:09:26.858 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.858 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.858 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:26.858 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:09:26.858 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:09:26.858 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.858 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.858 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:26.858 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:09:26.858 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:09:26.858 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.858 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.858 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.858 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:09:26.858 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:09:26.858 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.858 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.858 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:26.858 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:09:26.858 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:09:26.858 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.858 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.858 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:26.858 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:26.858 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:09:26.858 
13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.858 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.858 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.858 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:09:26.858 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:09:26.858 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.858 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.858 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.858 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:09:26.858 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:09:26.858 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.858 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.858 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.858 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:09:26.858 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:09:26.858 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.858 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.858 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.858 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:09:26.858 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:09:26.858 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.858 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.858 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.858 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:09:26.858 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:09:26.858 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.858 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.858 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.858 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:09:26.858 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:09:26.858 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.858 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.858 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:26.858 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:09:26.858 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:09:26.858 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.858 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.858 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:26.858 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:09:26.858 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:09:26.858 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.858 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.858 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:26.858 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:09:26.858 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:09:26.858 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.858 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.858 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 
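
[annotation] The same nvme_get helper is reused here with id-ctrl, so controller-level registers land in the nvme1 array. One field worth decoding: mdts=7 above. MDTS is a power of two in units of the controller's minimum memory page size (CAP.MPSMIN), with 0 meaning no limit; assuming the common 4 KiB minimum page, a sketch of the computation:

    # Max data transfer size implied by MDTS (MPSMIN of 4 KiB is an assumption).
    mdts=7                                        # captured above for nvme1
    mpsmin=4096
    echo "max transfer: $(( (1 << mdts) * mpsmin )) bytes"   # 524288 (512 KiB)
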
00:09:26.858 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:09:26.858 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:09:26.858 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.858 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.858 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:26.858 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:09:26.858 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:09:26.858 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.858 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.858 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.858 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:09:26.858 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:09:26.858 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.858 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.858 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.858 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:09:26.858 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:09:26.858 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.858 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.858 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.858 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:09:26.858 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:09:26.858 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.858 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.858 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.858 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:09:26.858 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:09:26.858 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.858 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.858 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:26.858 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:09:26.858 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:09:26.858 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.858 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.858 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:26.858 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:09:26.858 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:09:26.858 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.858 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.858 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.858 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:09:26.858 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:09:26.858 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.858 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.858 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.858 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:09:26.858 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:09:26.858 13:43:40 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:26.858 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.858 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.858 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:09:26.858 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:09:26.858 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.858 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.858 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.858 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:09:26.858 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:09:26.858 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.858 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.858 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.858 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:09:26.858 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:09:26.858 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.858 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.858 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.859 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:09:26.859 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:09:26.859 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.859 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.859 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.859 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:09:26.859 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:09:26.859 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.859 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.859 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.859 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:09:26.859 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:09:26.859 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.859 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.859 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.859 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:09:26.859 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:09:26.859 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.859 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.859 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.859 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:09:26.859 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:09:26.859 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.859 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.859 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.859 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:09:26.859 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:09:26.859 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.859 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.859 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.859 13:43:40 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:09:26.859 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:09:26.859 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.859 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.859 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.859 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:09:26.859 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:09:26.859 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.859 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.859 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.859 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:09:26.859 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:09:26.859 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.859 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.859 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.859 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:09:26.859 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:09:26.859 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.859 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.859 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.859 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:09:26.859 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:09:26.859 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.859 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.859 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.859 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:09:26.859 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:09:26.859 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.859 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.859 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.859 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:09:26.859 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:09:26.859 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.859 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.859 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.859 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:09:26.859 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:09:26.859 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.859 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.859 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.859 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:09:26.859 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:09:26.859 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.859 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.859 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.859 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:09:26.859 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:09:26.859 13:43:40 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:26.859 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.859 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.859 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:09:26.859 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:09:26.859 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.859 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.859 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.859 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:09:26.859 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:09:26.859 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.859 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.859 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.859 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:09:26.859 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:09:26.859 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.859 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.859 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.859 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:09:26.859 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:09:26.859 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.859 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.859 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:26.859 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:09:26.859 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:09:26.859 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.859 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.859 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:26.859 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:09:26.859 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:09:26.859 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.859 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.859 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.859 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:09:26.859 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:09:26.859 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.859 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.859 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:26.859 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:09:26.859 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:09:26.859 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.859 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.859 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:26.859 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:09:26.859 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:09:26.859 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.859 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.859 13:43:40 nvme_scc -- nvme/functions.sh@22 -- 
# [[ -n 0 ]] 00:09:26.859 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:09:26.859 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:09:26.859 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.859 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.859 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.859 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:09:26.859 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:09:26.859 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.859 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.859 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:26.859 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:09:26.859 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:09:26.859 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.859 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.859 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.859 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:09:26.859 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:09:26.859 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.859 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.859 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.859 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:09:26.859 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:09:26.859 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.859 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.859 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.859 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:09:26.859 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:09:26.859 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.859 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.859 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.859 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:09:26.859 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:09:26.859 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.859 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.859 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.859 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:09:26.859 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:09:26.859 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.859 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.859 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:26.859 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:09:26.859 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:09:26.859 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.859 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.860 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:26.860 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:09:26.860 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:09:26.860 13:43:40 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:09:26.860 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.860 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.860 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:09:26.860 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:09:26.860 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.860 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.860 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.860 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:09:26.860 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:09:26.860 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.860 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.860 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.860 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:09:26.860 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:09:26.860 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.860 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.860 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:09:26.860 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:09:26.860 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:09:26.860 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.860 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.860 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.860 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:09:26.860 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:09:26.860 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.860 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.860 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.860 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:09:26.860 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:09:26.860 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.860 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.860 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.860 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:09:26.860 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:09:26.860 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.860 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.860 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.860 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:09:26.860 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:09:26.860 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.860 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.860 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.860 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:09:26.860 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:09:26.860 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.860 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
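
[annotation] Once the controller dump finishes, the trace that follows (@53-@57) walks the controller's namespaces by globbing its sysfs directory and deriving the namespace index with parameter expansion; that index keys the per-controller _ctrl_ns array at @58. A sketch of the expansions it relies on:

    # How the namespace walk below derives names and indices.
    ctrl=/sys/class/nvme/nvme1
    for ns in "$ctrl/${ctrl##*/}n"*; do    # e.g. /sys/class/nvme/nvme1/nvme1n1
        [[ -e $ns ]] || continue           # glob may not match; skip the literal
        dev=${ns##*/}                      # nvme1n1
        idx=${dev##*n}                     # 1 (the index used to key _ctrl_ns)
        echo "namespace $idx -> /dev/$dev"
    done
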
00:09:26.860 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.860 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:09:26.860 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:09:26.860 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.860 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.860 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:26.860 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:26.860 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:26.860 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.860 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.860 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:26.860 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:26.860 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:26.860 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.860 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.860 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:26.860 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:09:26.860 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:09:26.860 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.860 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.860 13:43:40 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:09:26.860 13:43:40 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:26.860 13:43:40 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:09:26.860 13:43:40 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:09:26.860 13:43:40 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:09:26.860 13:43:40 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:09:26.860 13:43:40 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:26.860 13:43:40 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:09:26.860 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.860 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.860 13:43:40 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:09:26.860 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:26.860 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.860 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.860 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:26.860 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:09:26.860 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:09:26.860 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.860 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.860 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:26.860 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:09:26.860 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # 
nvme1n1[ncap]=0x17a17a 00:09:26.860 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.860 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.860 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:26.860 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:09:26.860 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:09:26.860 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.860 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.860 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:26.860 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:09:26.860 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:09:26.860 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.860 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.860 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:26.860 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:09:26.860 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:09:26.860 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.860 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.860 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:26.860 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:09:26.860 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:09:26.860 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.860 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.860 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:26.860 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:09:26.860 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:09:26.860 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.860 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.860 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:26.860 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:09:26.860 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:09:26.860 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.860 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.860 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.860 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:09:26.860 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:09:26.860 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.860 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.860 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.860 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:09:26.860 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:09:26.860 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.860 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.860 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.860 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:09:26.860 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:09:26.860 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.860 13:43:40 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:26.860 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.860 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:09:26.860 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:09:26.860 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.860 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.860 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:26.860 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:09:26.860 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:09:26.860 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.860 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.860 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.860 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:09:26.860 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:09:26.860 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.860 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.860 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.860 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:09:26.860 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:09:26.860 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.860 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.860 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.860 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:09:26.860 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:09:26.860 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.860 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.861 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.861 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:09:26.861 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:09:26.861 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.861 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.861 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.861 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:09:26.861 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:09:26.861 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.861 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.861 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.861 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:09:26.861 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:09:26.861 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.861 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.861 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.861 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:09:26.861 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:09:26.861 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.861 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.861 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.861 13:43:40 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'nvme1n1[nvmcap]="0"'
00:09:26.861 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0
00:09:26.861 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0
00:09:26.861 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0
00:09:26.861 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0
00:09:26.861 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npda]=0
00:09:26.861 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nows]=0
00:09:26.861 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128
00:09:26.861 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128
00:09:26.861 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127
00:09:26.861 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0
00:09:26.861 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0
00:09:26.861 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0
00:09:26.861 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0
00:09:26.861 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0
00:09:26.861 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000
00:09:26.861 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000
00:09:26.861 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 '
00:09:26.861 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 '
00:09:26.861 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 '
00:09:26.861 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 '
00:09:26.861 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 '
00:09:26.861 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 '
00:09:26.861 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 '
00:09:26.861 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)'
00:09:26.861 13:43:40 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1
00:09:26.861 13:43:40 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1
00:09:26.861 13:43:40 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns
00:09:26.861 13:43:40 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0
00:09:26.861 13:43:40 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1
00:09:26.861 13:43:40 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:09:26.861 13:43:40 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]]
00:09:26.861 13:43:40 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0
00:09:26.861 13:43:40 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0
00:09:26.861 13:43:40 nvme_scc -- scripts/common.sh@18 -- # local i
00:09:26.861 13:43:40 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]]
00:09:26.861 13:43:40 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]]
00:09:26.861 13:43:40 nvme_scc -- scripts/common.sh@27 -- # return 0
00:09:26.861 13:43:40 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme2
00:09:26.861 13:43:40 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2
00:09:26.861 13:43:40 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2 reg val
00:09:26.861 13:43:40 nvme_scc -- nvme/functions.sh@18 -- # shift
00:09:26.861 13:43:40 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2=()'
00:09:26.862 13:43:40 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2
00:09:26.862 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36
00:09:26.862 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4
00:09:26.862 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sn]='12342 '
00:09:26.862 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl '
00:09:26.862 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 '
00:09:26.862 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rab]=6
00:09:26.862 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ieee]=525400
00:09:26.862 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cmic]=0
00:09:26.862 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mdts]=7
00:09:26.862 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntlid]=0
00:09:26.862 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400
00:09:26.862 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0
00:09:26.862 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0
00:09:26.862 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100
00:09:26.862 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000
00:09:26.862 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rrls]=0
00:09:26.862 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1
00:09:26.862 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000
00:09:26.862 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt1]=0
00:09:26.862 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt2]=0
00:09:26.862 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt3]=0
00:09:26.862 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0
00:09:26.862 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwci]=0
00:09:26.862 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mec]=0
00:09:26.862 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a
00:09:26.862 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acl]=3
00:09:26.862 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[aerl]=3
00:09:26.862 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3
00:09:26.863 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7
00:09:26.863 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[elpe]=0
00:09:26.863 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[npss]=0
00:09:26.863 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[avscc]=0
00:09:26.863 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[apsta]=0
00:09:26.863 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[wctemp]=343
00:09:26.863 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cctemp]=373
00:09:26.863 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mtfa]=0
00:09:26.863 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmpre]=0
00:09:26.863 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmin]=0
00:09:26.863 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0
00:09:26.863 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0
00:09:26.863 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0
00:09:26.863 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[edstt]=0
00:09:26.863 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[dsto]=0
00:09:26.863 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fwug]=0
00:09:26.863 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[kas]=0
00:09:26.863 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hctma]=0
00:09:26.863 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mntmt]=0
00:09:26.863 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0
00:09:26.863 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sanicap]=0
00:09:26.863 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmminds]=0
00:09:26.863 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0
00:09:26.863 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0
00:09:26.863 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0
00:09:26.863 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anatt]=0
00:09:26.863 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anacap]=0
00:09:26.863 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0
00:09:26.863 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0
00:09:26.863 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[pels]=0
00:09:26.863 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[domainid]=0
00:09:26.863 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[megcap]=0
00:09:26.863 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66
00:09:26.863 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44
00:09:26.864 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0
00:09:26.864 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nn]=256
00:09:26.864 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d
00:09:26.864 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fuses]=0
00:09:26.864 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fna]=0
00:09:26.864 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7
00:09:26.864 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awun]=0
00:09:26.864 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awupf]=0
00:09:26.864 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0
00:09:26.864 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nwpc]=0
00:09:26.864 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acwu]=0
00:09:26.864 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3
00:09:26.864 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1
00:09:26.864 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mnan]=0
00:09:26.864 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxdna]=0
00:09:26.864 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcna]=0
00:09:26.864 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342
00:09:26.864 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0
00:09:26.864 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0
00:09:26.864 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icdoff]=0
00:09:26.864 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fcatt]=0
00:09:26.864 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[msdbd]=0
00:09:26.864 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ofcs]=0
00:09:26.864 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0'
00:09:26.864 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-'
00:09:26.864 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=-
00:09:26.864 13:43:40 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns
00:09:26.864 13:43:40 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"*
00:09:26.864 13:43:40 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]]
00:09:26.864 13:43:40 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1
00:09:26.864 13:43:40 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1
00:09:26.864 13:43:40 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val
00:09:26.864 13:43:40 nvme_scc -- nvme/functions.sh@18 -- # shift
00:09:26.864 13:43:40 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()'
00:09:26.864 13:43:40 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1
00:09:26.864 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000
00:09:26.864 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000
00:09:26.864 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000
00:09:26.864 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14
00:09:26.865 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7
00:09:26.865 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4
00:09:26.865 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3
00:09:26.865 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f
00:09:26.865 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dps]=0
00:09:26.865 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0
00:09:26.865 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0
00:09:26.865 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0
00:09:26.865 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1
00:09:26.865 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0
00:09:26.865 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0
00:09:26.865 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0
00:09:26.865 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0
00:09:26.865 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0
00:09:26.865 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0
00:09:26.865 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0
00:09:26.865 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0
00:09:26.865 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0
00:09:26.865 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0
00:09:26.865 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0
00:09:26.865 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npda]=0
00:09:26.865 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nows]=0
00:09:26.865 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128
00:09:26.865 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128
00:09:26.865 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127
00:09:26.865 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0
00:09:26.865 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0
00:09:26.865 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0
00:09:26.865 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0
00:09:26.865 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0
00:09:26.865 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000
00:09:26.865 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000
00:09:26.865 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 '
00:09:26.866 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 '
00:09:26.866 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 '
00:09:26.866 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 '
00:09:26.866 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)'
00:09:26.866 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 '
00:09:26.866 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 '
00:09:26.866 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 '
00:09:26.866 13:43:40 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1
00:09:26.866 13:43:40 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"*
00:09:26.866 13:43:40 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]]
00:09:26.866 13:43:40 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n2
00:09:26.866 13:43:40 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2
00:09:26.866 13:43:40 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val
00:09:26.866 13:43:40 nvme_scc -- nvme/functions.sh@18 -- # shift
00:09:26.866 13:43:40 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()'
00:09:26.866 13:43:40 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2
00:09:26.866 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000
00:09:26.866 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000
00:09:26.866 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000
00:09:26.866 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14
00:09:26.866 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7
00:09:26.866 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4
00:09:26.866 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3
00:09:26.866 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f
00:09:26.866 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dps]=0
00:09:26.866 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0
00:09:26.866 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0
00:09:26.866 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0
00:09:26.866 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1
00:09:26.866 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0
00:09:26.866 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0
00:09:26.866 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0
00:09:26.866 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0
00:09:26.866 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:09:26.866 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:09:26.866 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:09:26.866 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.866 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.866 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.866 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:09:26.866 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:09:26.866 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.866 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.866 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.866 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:09:26.866 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:09:26.867 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.867 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.867 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.867 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:09:26.867 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:09:26.867 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.867 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.867 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.867 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:09:26.867 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:09:26.867 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.867 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.867 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.867 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:09:26.867 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:09:26.867 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.867 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.867 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.867 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:09:26.867 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:09:26.867 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.867 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.867 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.867 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:09:26.867 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:09:26.867 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.867 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.867 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.867 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:09:26.867 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:09:26.867 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.867 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.867 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:26.867 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:09:26.867 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 
00:09:26.867 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.867 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.867 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:26.867 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:09:26.867 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:09:26.867 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.867 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.867 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:26.867 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:09:26.867 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:09:26.867 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.867 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.867 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.867 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:09:26.867 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:09:26.867 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.867 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.867 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.867 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:09:26.867 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:09:26.867 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.867 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.867 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.867 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:09:26.867 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:09:26.867 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.867 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.867 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.867 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:09:26.867 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:09:26.867 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.867 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.867 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.867 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:09:26.867 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:09:26.867 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.867 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.867 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:26.867 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:09:26.867 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:09:26.867 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.867 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.867 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:26.867 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:09:26.867 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 
00:09:26.867 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.867 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.867 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:26.867 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:26.867 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:26.867 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.867 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.867 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:26.867 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:26.867 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:26.867 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.867 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.867 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:26.867 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:26.867 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:26.867 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.867 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.867 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:26.867 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:26.867 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:26.867 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.867 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.867 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:26.867 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:26.867 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:26.867 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.867 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.867 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:26.867 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:26.867 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:26.867 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.867 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.867 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:26.867 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:26.867 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:26.867 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.867 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.867 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:26.867 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:26.867 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:26.867 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.867 
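The trace repeats the same four-line pattern (IFS=:, read -r reg val, [[ -n ... ]], eval) once per field, which is why it dominates the log. The sketch below is a minimal reconstruction of that nvme_get pattern as it appears in this xtrace; the whitespace trimming and continue guard are illustrative simplifications, not a verbatim copy of functions.sh.

# Minimal reconstruction of the nvme_get pattern traced above. Assumes
# nvme-cli prints "field : value" lines, as the eval'd assignments show.
nvme_get() {
    local ref=$1 reg val
    shift
    local -gA "$ref=()"                    # declares a global assoc array, e.g. nvme2n2=()
    while IFS=: read -r reg val; do
        [[ -n $reg && -n $val ]] || continue
        reg=${reg//[[:space:]]/}           # field name with whitespace stripped
        val=${val#"${val%%[![:space:]]*}"} # value with leading spaces trimmed
        eval "${ref}[$reg]=\$val"          # e.g. nvme2n2[nsze]=0x100000
    done < <(/usr/local/src/nvme-cli/nvme "$@")
}

# Invocation matching the trace:
#   nvme_get nvme2n2 id-ns /dev/nvme2n2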
13:43:40 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[2]=nvme2n2
13:43:40 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"*
13:43:40 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]]
13:43:40 nvme_scc -- nvme/functions.sh@56-57 -- # ns_dev=nvme2n3; nvme_get nvme2n3 id-ns /dev/nvme2n3
13:43:40 nvme_scc -- # per-register xtrace condensed; every nvme2n3 field matches nvme2n2 above
13:43:40 nvme_scc -- # (nsze=0x100000 ... lbaf4='ms:0 lbads:12 rp:0 (in use)' ... lbaf7='ms:64 lbads:12 rp:0')
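Aside (not from the log itself): once these arrays are filled, the in-use LBA format can be read back by combining flbas with the matching lbafN entry. This is an illustrative decode under the standard NVMe convention that the low nibble of FLBAS indexes the LBA format list; for nvme2n2/nvme2n3 above, flbas=0x4 selects lbaf4 with lbads:12, i.e. 4096-byte logical blocks.

# Illustrative decode, assuming the nvme2n2 array populated above:
flbas=$((nvme2n2[flbas] & 0xf))   # low nibble of FLBAS indexes the format list
lbaf=${nvme2n2[lbaf$flbas]}       # -> 'ms:0 lbads:12 rp:0 (in use)'
lbads=${lbaf#*lbads:}             # strip everything up to the lbads field
lbads=${lbads%% *}                # keep just the number, here 12
echo "in-use block size: $((1 << lbads)) bytes"   # -> 4096 bytes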
13:43:40 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[3]=nvme2n3
13:43:40 nvme_scc -- nvme/functions.sh@60-63 -- # ctrls[nvme2]=nvme2 nvmes[nvme2]=nvme2_ns bdfs[nvme2]=0000:00:12.0 ordered_ctrls[2]=nvme2
13:43:40 nvme_scc -- nvme/functions.sh@47-48 -- # for ctrl in /sys/class/nvme/nvme*; [[ -e /sys/class/nvme/nvme3 ]]
13:43:40 nvme_scc -- nvme/functions.sh@49-50 -- # pci=0000:00:13.0; pci_can_use 0000:00:13.0 -> 0 (scripts/common.sh@18-27)
13:43:40 nvme_scc -- nvme/functions.sh@51-52 -- # ctrl_dev=nvme3; nvme_get nvme3 id-ctrl /dev/nvme3
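For orientation, the enumeration the trace keeps cycling through (functions.sh@47-63) has the overall shape below. A condensed sketch, assuming pci_can_use reduces to a PCI_BLOCKED substring check and that the BDF is resolved from the controller's device symlink; both are assumptions for illustration, not the verbatim helpers.

# Condensed sketch of the controller/namespace walk traced above;
# array names follow the trace.
declare -A ctrls nvmes bdfs _ctrl_ns
declare -a ordered_ctrls

for ctrl in /sys/class/nvme/nvme*; do
    [[ -e $ctrl ]] || continue
    ctrl_dev=${ctrl##*/}                              # nvme0, nvme1, ...
    pci=$(basename "$(readlink -f "$ctrl/device")")   # e.g. 0000:00:13.0 (assumed resolution)
    [[ ${PCI_BLOCKED:-} == *"$pci"* ]] && continue    # pci_can_use, simplified

    nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"     # fills e.g. nvme3[vid]=0x1b36 ...

    _ctrl_ns=()
    for ns in "$ctrl/${ctrl_dev}n"*; do               # nvme3n1, nvme3n2, ...
        [[ -e $ns ]] || continue
        ns_dev=${ns##*/}
        nvme_get "$ns_dev" id-ns "/dev/$ns_dev"
        _ctrl_ns[${ns##*n}]=$ns_dev                   # index by namespace number
    done

    ctrls["$ctrl_dev"]=$ctrl_dev
    nvmes["$ctrl_dev"]="${ctrl_dev}_ns"               # name of the per-controller ns map
    bdfs["$ctrl_dev"]=$pci                            # e.g. bdfs[nvme2]=0000:00:12.0
    ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev        # numeric order for later iteration
done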
13:43:40 nvme_scc -- # per-register xtrace condensed; id-ctrl /dev/nvme3 populates nvme3 as:
13:43:40 nvme_scc -- #   vid=0x1b36 ssvid=0x1af4 sn='12343 ' mn='QEMU NVMe Ctrl ' fr='8.0.0 '
13:43:40 nvme_scc -- #   rab=6 ieee=525400 cmic=0x2 mdts=7 cntlid=0 ver=0x10400 rtd3r=0 rtd3e=0
13:43:40 nvme_scc -- #   oaes=0x100 ctratt=0x88010 rrls=0 cntrltype=1 fguid=00000000-0000-0000-0000-000000000000
13:43:40 nvme_scc -- #   crdt1=0 crdt2=0 crdt3=0 nvmsr=0 vwci=0 mec=0 oacs=0x12a acl=3 aerl=3
13:43:40 nvme_scc -- #   frmw=0x3 lpa=0x7 elpe=0 npss=0 avscc=0 apsta=0 wctemp=343 cctemp=373
13:43:40 nvme_scc -- #   mtfa=0 hmpre=0 hmmin=0 tnvmcap=0 unvmcap=0 rpmbs=0 edstt=0 dsto=0 fwug=0 kas=0
13:43:40 nvme_scc -- #   hctma=0 mntmt=0 mxtmt=0 sanicap=0 hmminds=0 hmmaxd=0 nsetidmax=0 endgidmax=1
13:43:40 nvme_scc -- #   anatt=0 anacap=0 anagrpmax=0 nanagrpid=0 pels=0 domainid=0 megcap=0
13:43:40 nvme_scc -- #   sqes=0x66 cqes=0x44 maxcmd=0 nn=256 oncs=0x15d fuses=0 fna=0 vwc=0x7
13:43:40 nvme_scc -- #   awun=0 awupf=0 icsvscc=0 nwpc=0 acwu=0 ocfs=0x3 sgls=0x1 mnan=0 maxdna=0 maxcna=0
13:43:40 nvme_scc -- #   subnqn=nqn.2019-08.org.qemu:fdp-subsys3 ioccsz=0 iorcsz=0 icdoff=0 fcatt=0 msdbd=0 ofcs=0
13:43:40 nvme_scc -- #   ps0='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0'
13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:26.872 13:43:40 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:26.872 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.872 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.872 13:43:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:26.872 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:09:26.872 13:43:40 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:09:26.872 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.872 13:43:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.872 13:43:40 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:09:26.872 13:43:40 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:09:26.872 13:43:40 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:09:26.872 13:43:40 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:09:26.872 13:43:40 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:09:26.872 13:43:40 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:09:26.872 13:43:40 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:09:26.872 13:43:40 nvme_scc -- nvme/functions.sh@204 -- # local _ctrls feature=scc 00:09:26.872 13:43:40 nvme_scc -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:09:26.872 13:43:40 nvme_scc -- nvme/functions.sh@206 -- # get_ctrls_with_feature scc 00:09:26.872 13:43:40 nvme_scc -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:09:26.872 13:43:40 nvme_scc -- nvme/functions.sh@194 -- # local ctrl feature=scc 00:09:26.872 13:43:40 nvme_scc -- nvme/functions.sh@196 -- # type -t ctrl_has_scc 00:09:26.872 13:43:40 nvme_scc -- nvme/functions.sh@196 -- # [[ function == function ]] 00:09:26.872 13:43:40 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:26.872 13:43:40 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme1 00:09:26.872 13:43:40 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme1 oncs 00:09:26.872 13:43:40 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme1 00:09:26.872 13:43:40 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme1 00:09:26.872 13:43:40 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme1 oncs 00:09:26.872 13:43:40 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:09:26.872 13:43:40 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:09:26.872 13:43:40 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:09:26.872 13:43:40 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:26.872 13:43:40 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:26.872 13:43:40 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:26.872 13:43:40 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:26.872 13:43:40 nvme_scc -- nvme/functions.sh@199 -- # echo nvme1 00:09:26.872 13:43:40 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:26.872 13:43:40 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme0 00:09:26.872 13:43:40 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme0 oncs 00:09:26.872 13:43:40 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme0 00:09:26.872 13:43:40 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme0 00:09:26.872 13:43:40 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme0 oncs 00:09:26.872 13:43:40 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:09:26.872 
13:43:40 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:09:26.872 13:43:40 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:09:26.872 13:43:40 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:26.872 13:43:40 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:26.872 13:43:40 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:26.872 13:43:40 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:26.872 13:43:40 nvme_scc -- nvme/functions.sh@199 -- # echo nvme0 00:09:26.872 13:43:40 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:26.872 13:43:40 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme3 00:09:26.872 13:43:40 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme3 oncs 00:09:26.872 13:43:40 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme3 00:09:26.872 13:43:40 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme3 00:09:26.872 13:43:40 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme3 oncs 00:09:26.872 13:43:40 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs 00:09:26.872 13:43:40 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:09:26.872 13:43:40 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:09:26.872 13:43:40 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:26.872 13:43:40 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:26.872 13:43:40 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:26.872 13:43:40 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:26.872 13:43:40 nvme_scc -- nvme/functions.sh@199 -- # echo nvme3 00:09:26.872 13:43:40 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:26.872 13:43:40 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme2 00:09:26.872 13:43:40 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme2 oncs 00:09:26.872 13:43:40 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme2 00:09:26.872 13:43:40 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme2 00:09:26.872 13:43:40 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme2 oncs 00:09:26.872 13:43:40 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs 00:09:26.872 13:43:40 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:09:26.872 13:43:40 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:09:26.872 13:43:40 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:26.872 13:43:40 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:26.872 13:43:40 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:26.872 13:43:40 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:26.872 13:43:40 nvme_scc -- nvme/functions.sh@199 -- # echo nvme2 00:09:26.872 13:43:40 nvme_scc -- nvme/functions.sh@207 -- # (( 4 > 0 )) 00:09:26.872 13:43:40 nvme_scc -- nvme/functions.sh@208 -- # echo nvme1 00:09:26.872 13:43:40 nvme_scc -- nvme/functions.sh@209 -- # return 0 00:09:26.872 13:43:40 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1 00:09:26.872 13:43:40 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0 00:09:26.872 13:43:40 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:27.437 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:28.003 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:28.003 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:09:28.003 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:28.003 0000:00:12.0 (1b36 
0010): nvme -> uio_pci_generic 00:09:28.003 13:43:41 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:09:28.003 13:43:41 nvme_scc -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:09:28.003 13:43:41 nvme_scc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:28.003 13:43:41 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:09:28.003 ************************************ 00:09:28.003 START TEST nvme_simple_copy 00:09:28.003 ************************************ 00:09:28.003 13:43:41 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:09:28.261 Initializing NVMe Controllers 00:09:28.261 Attaching to 0000:00:10.0 00:09:28.261 Controller supports SCC. Attached to 0000:00:10.0 00:09:28.261 Namespace ID: 1 size: 6GB 00:09:28.261 Initialization complete. 00:09:28.261 00:09:28.261 Controller QEMU NVMe Ctrl (12340 ) 00:09:28.261 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:09:28.261 Namespace Block Size:4096 00:09:28.261 Writing LBAs 0 to 63 with Random Data 00:09:28.261 Copied LBAs from 0 - 63 to the Destination LBA 256 00:09:28.261 LBAs matching Written Data: 64 00:09:28.261 00:09:28.261 real 0m0.267s 00:09:28.261 user 0m0.098s 00:09:28.261 sys 0m0.066s 00:09:28.261 ************************************ 00:09:28.261 END TEST nvme_simple_copy 00:09:28.261 ************************************ 00:09:28.261 13:43:42 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:28.261 13:43:42 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x 00:09:28.519 ************************************ 00:09:28.519 END TEST nvme_scc 00:09:28.519 ************************************ 00:09:28.519 00:09:28.519 real 0m7.833s 00:09:28.519 user 0m1.090s 00:09:28.519 sys 0m1.421s 00:09:28.520 13:43:42 nvme_scc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:28.520 13:43:42 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:09:28.520 13:43:42 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]] 00:09:28.520 13:43:42 -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]] 00:09:28.520 13:43:42 -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]] 00:09:28.520 13:43:42 -- spdk/autotest.sh@228 -- # [[ 1 -eq 1 ]] 00:09:28.520 13:43:42 -- spdk/autotest.sh@229 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh 00:09:28.520 13:43:42 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:28.520 13:43:42 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:28.520 13:43:42 -- common/autotest_common.sh@10 -- # set +x 00:09:28.520 ************************************ 00:09:28.520 START TEST nvme_fdp 00:09:28.520 ************************************ 00:09:28.520 13:43:42 nvme_fdp -- common/autotest_common.sh@1125 -- # test/nvme/nvme_fdp.sh 00:09:28.520 * Looking for test storage... 
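A note on the controller filtering in the trace above: ctrl_has_scc narrows the four controllers down to those advertising the Simple Copy Command by reading the ONCS (Optional NVM Command Support) field from id-ctrl and testing bit 8, which 0x15d (binary 1 0101 1101) has set. A minimal standalone sketch of the same probe, assuming nvme-cli is installed; /dev/nvme0 is only an example device:

    #!/usr/bin/env bash
    # ONCS bit-8 (Copy command) probe, mirroring ctrl_has_scc in the trace.
    dev=/dev/nvme0

    # `nvme id-ctrl` prints lines like "oncs      : 0x15d"; extract the value.
    oncs=$(nvme id-ctrl "$dev" | awk -F: '/^oncs/ {gsub(/ /, "", $2); print $2}')

    # Bit 8 of ONCS advertises support for the (Simple) Copy command.
    if (( oncs & 1 << 8 )); then
        echo "$dev supports Simple Copy (oncs=$oncs)"
    else
        echo "$dev does not support Simple Copy (oncs=$oncs)"
    fi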
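The nvme_simple_copy binary that just finished writes LBAs 0-63 with random data, issues one Copy command with source range [0, 63] and destination LBA 256, then verifies that all 64 destination LBAs match. Roughly the same exercise can be driven from the shell with nvme-cli's copy subcommand; this is a sketch only — the option spellings below are from recent nvme-cli and should be checked against `nvme copy --help`, and the device path is an example:

    #!/usr/bin/env bash
    # Shell approximation of the simple_copy test above (a sketch, not the
    # SPDK test itself). Verify flag names with `nvme copy --help`.
    set -e
    dev=/dev/nvme0n1
    bs=4096                       # namespace block size reported by the test

    # Write LBAs 0..63 with random data.
    dd if=/dev/urandom of="$dev" bs=$bs count=64 oflag=direct

    # One source range: start LBA 0; 63 is the zero-based block count per the
    # NVMe convention (i.e. 64 blocks); destination starts at LBA 256.
    nvme copy "$dev" --slbs=0 --blocks=63 --sdlba=256

    # Read both ranges back and compare.
    dd if="$dev" of=/tmp/src.bin bs=$bs skip=0   count=64 iflag=direct
    dd if="$dev" of=/tmp/dst.bin bs=$bs skip=256 count=64 iflag=direct
    cmp /tmp/src.bin /tmp/dst.bin && echo 'LBAs matching Written Data: 64'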
00:09:28.520 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:28.520 13:43:42 nvme_fdp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:28.520 13:43:42 nvme_fdp -- common/autotest_common.sh@1691 -- # lcov --version 00:09:28.520 13:43:42 nvme_fdp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:28.520 13:43:42 nvme_fdp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:28.520 13:43:42 nvme_fdp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:28.520 13:43:42 nvme_fdp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:28.520 13:43:42 nvme_fdp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:28.520 13:43:42 nvme_fdp -- scripts/common.sh@336 -- # IFS=.-: 00:09:28.520 13:43:42 nvme_fdp -- scripts/common.sh@336 -- # read -ra ver1 00:09:28.520 13:43:42 nvme_fdp -- scripts/common.sh@337 -- # IFS=.-: 00:09:28.520 13:43:42 nvme_fdp -- scripts/common.sh@337 -- # read -ra ver2 00:09:28.520 13:43:42 nvme_fdp -- scripts/common.sh@338 -- # local 'op=<' 00:09:28.520 13:43:42 nvme_fdp -- scripts/common.sh@340 -- # ver1_l=2 00:09:28.520 13:43:42 nvme_fdp -- scripts/common.sh@341 -- # ver2_l=1 00:09:28.520 13:43:42 nvme_fdp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:28.520 13:43:42 nvme_fdp -- scripts/common.sh@344 -- # case "$op" in 00:09:28.520 13:43:42 nvme_fdp -- scripts/common.sh@345 -- # : 1 00:09:28.520 13:43:42 nvme_fdp -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:28.520 13:43:42 nvme_fdp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:28.520 13:43:42 nvme_fdp -- scripts/common.sh@365 -- # decimal 1 00:09:28.520 13:43:42 nvme_fdp -- scripts/common.sh@353 -- # local d=1 00:09:28.520 13:43:42 nvme_fdp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:28.520 13:43:42 nvme_fdp -- scripts/common.sh@355 -- # echo 1 00:09:28.520 13:43:42 nvme_fdp -- scripts/common.sh@365 -- # ver1[v]=1 00:09:28.520 13:43:42 nvme_fdp -- scripts/common.sh@366 -- # decimal 2 00:09:28.520 13:43:42 nvme_fdp -- scripts/common.sh@353 -- # local d=2 00:09:28.520 13:43:42 nvme_fdp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:28.520 13:43:42 nvme_fdp -- scripts/common.sh@355 -- # echo 2 00:09:28.520 13:43:42 nvme_fdp -- scripts/common.sh@366 -- # ver2[v]=2 00:09:28.520 13:43:42 nvme_fdp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:28.520 13:43:42 nvme_fdp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:28.520 13:43:42 nvme_fdp -- scripts/common.sh@368 -- # return 0 00:09:28.520 13:43:42 nvme_fdp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:28.520 13:43:42 nvme_fdp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:28.520 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:28.520 --rc genhtml_branch_coverage=1 00:09:28.520 --rc genhtml_function_coverage=1 00:09:28.520 --rc genhtml_legend=1 00:09:28.520 --rc geninfo_all_blocks=1 00:09:28.520 --rc geninfo_unexecuted_blocks=1 00:09:28.520 00:09:28.520 ' 00:09:28.520 13:43:42 nvme_fdp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:28.520 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:28.520 --rc genhtml_branch_coverage=1 00:09:28.520 --rc genhtml_function_coverage=1 00:09:28.520 --rc genhtml_legend=1 00:09:28.520 --rc geninfo_all_blocks=1 00:09:28.520 --rc geninfo_unexecuted_blocks=1 00:09:28.520 00:09:28.520 ' 00:09:28.520 13:43:42 nvme_fdp -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 
00:09:28.520 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:28.520 --rc genhtml_branch_coverage=1 00:09:28.520 --rc genhtml_function_coverage=1 00:09:28.520 --rc genhtml_legend=1 00:09:28.520 --rc geninfo_all_blocks=1 00:09:28.520 --rc geninfo_unexecuted_blocks=1 00:09:28.520 00:09:28.520 ' 00:09:28.520 13:43:42 nvme_fdp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:28.520 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:28.520 --rc genhtml_branch_coverage=1 00:09:28.520 --rc genhtml_function_coverage=1 00:09:28.520 --rc genhtml_legend=1 00:09:28.520 --rc geninfo_all_blocks=1 00:09:28.520 --rc geninfo_unexecuted_blocks=1 00:09:28.520 00:09:28.520 ' 00:09:28.520 13:43:42 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:28.520 13:43:42 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:28.520 13:43:42 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:09:28.520 13:43:42 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:09:28.520 13:43:42 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:28.520 13:43:42 nvme_fdp -- scripts/common.sh@15 -- # shopt -s extglob 00:09:28.520 13:43:42 nvme_fdp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:28.520 13:43:42 nvme_fdp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:28.520 13:43:42 nvme_fdp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:28.520 13:43:42 nvme_fdp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:28.520 13:43:42 nvme_fdp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:28.520 13:43:42 nvme_fdp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:28.520 13:43:42 nvme_fdp -- paths/export.sh@5 -- # export PATH 00:09:28.520 13:43:42 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
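The cmp_versions calls in the trace above are scripts/common.sh deciding whether the installed lcov predates 2.x (here it does: 1.15 < 2), which selects the legacy --rc lcov_branch_coverage=1 option spellings exported just above. The comparison splits each version string on IFS=.-: into an array and walks the components numerically. A self-contained sketch of that logic; version_lt is a hypothetical name used only for this illustration (the trace's helpers are lt/cmp_versions):

    #!/usr/bin/env bash
    # Component-wise version compare in the style of scripts/common.sh.
    # version_lt is a hypothetical helper name for this sketch.
    version_lt() {              # version_lt 1.15 2  -> true iff $1 < $2
        local -a v1 v2
        local IFS='.-:'         # split on dots, dashes, colons, as in the trace
        read -ra v1 <<< "$1"
        read -ra v2 <<< "$2"
        local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for (( i = 0; i < n; i++ )); do
            # Missing components count as 0, so "2" == "2.0"; numeric parts only.
            local a=${v1[i]:-0} b=${v2[i]:-0}
            (( a < b )) && return 0
            (( a > b )) && return 1
        done
        return 1                # equal, hence not less-than
    }

    version_lt 1.15 2 && echo 'lcov < 2: use the legacy lcov_* --rc option names'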
00:09:28.520 13:43:42 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=() 00:09:28.520 13:43:42 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls 00:09:28.520 13:43:42 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=() 00:09:28.520 13:43:42 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes 00:09:28.520 13:43:42 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=() 00:09:28.520 13:43:42 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs 00:09:28.520 13:43:42 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:09:28.520 13:43:42 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:09:28.520 13:43:42 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name= 00:09:28.520 13:43:42 nvme_fdp -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:28.520 13:43:42 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:29.087 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:29.087 Waiting for block devices as requested 00:09:29.087 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:29.087 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:29.345 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:29.345 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:34.614 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:34.614 13:43:48 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:09:34.614 13:43:48 nvme_fdp -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:09:34.614 13:43:48 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:34.614 13:43:48 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:09:34.614 13:43:48 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:09:34.614 13:43:48 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:09:34.614 13:43:48 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:34.614 13:43:48 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:34.614 13:43:48 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:34.614 13:43:48 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:34.614 13:43:48 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:09:34.614 13:43:48 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:09:34.614 13:43:48 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:09:34.614 13:43:48 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:34.614 13:43:48 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:09:34.614 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.614 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.614 13:43:48 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:09:34.614 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:34.614 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.614 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.614 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:34.614 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:09:34.614 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:09:34.614 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.614 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.614 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 
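scan_nvme_ctrls now repeats for /dev/nvme0 what the earlier trace did for nvme3: nvme_get pipes `nvme id-ctrl` through an `IFS=: read -r reg val` loop and eval-s each field into a per-controller bash associative array, which later lookups reach through a nameref (local -n _ctrl=nvme0). A compact sketch of that parse, assuming nvme-cli's default "name : value" output and an example device path:

    #!/usr/bin/env bash
    # id-ctrl -> associative array, in the style of nvme/functions.sh nvme_get.
    declare -A ctrl

    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}        # field name with padding stripped
        val=${val# }                    # drop the space after the ':'
        [[ -n $reg && -n $val ]] || continue
        ctrl[$reg]=$val                 # e.g. ctrl[oncs]=0x15d, ctrl[mdts]=7
    done < <(nvme id-ctrl /dev/nvme0)

    # Nameref access, as in get_nvme_ctrl_feature:
    declare -n _ctrl=ctrl
    echo "vid=${_ctrl[vid]} oncs=${_ctrl[oncs]} subnqn=${_ctrl[subnqn]}"

Because the array name here is fixed, a plain assignment suffices; the original needs eval only to assign through a dynamically named array, which is what produces the quoted eval 'nvme0[...]="..."' pairs filling this trace.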
00:09:34.614 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:09:34.614 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:09:34.614 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.614 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.614 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:09:34.614 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:09:34.614 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:09:34.614 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.614 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.614 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:34.614 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:09:34.614 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:09:34.614 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.614 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.614 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:34.614 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:09:34.614 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:09:34.614 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.614 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.614 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:34.614 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:09:34.614 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:09:34.614 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.614 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.615 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:34.615 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:09:34.615 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:09:34.615 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.615 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.615 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.615 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:09:34.615 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:09:34.615 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.615 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.615 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:34.615 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:09:34.615 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:09:34.615 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.615 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.615 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.615 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:09:34.615 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:09:34.615 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.615 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.615 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:34.615 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:09:34.615 13:43:48 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:09:34.615 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.615 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.615 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.615 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:09:34.615 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:09:34.615 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.615 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.615 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.615 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:09:34.615 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:09:34.615 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.615 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.615 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:34.615 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:09:34.615 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:09:34.615 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.615 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.615 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:34.615 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:09:34.615 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:09:34.615 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.615 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.615 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.615 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:09:34.615 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:09:34.615 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.615 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.615 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:34.615 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:09:34.615 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:09:34.615 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.615 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.615 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:34.615 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:34.615 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:09:34.615 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.615 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.615 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.615 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:09:34.615 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:09:34.615 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.615 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.615 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.615 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:09:34.615 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:09:34.615 13:43:48 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.615 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.615 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.615 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:09:34.615 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:09:34.615 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.615 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.615 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.615 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:09:34.615 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:09:34.615 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.615 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.615 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.615 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:09:34.615 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:09:34.615 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.615 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.615 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.615 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:09:34.615 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:09:34.615 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.615 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.615 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:34.615 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:09:34.615 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:09:34.615 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.615 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.615 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:34.615 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:09:34.615 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:09:34.615 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.615 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.615 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:34.615 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:09:34.615 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:09:34.615 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.615 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.615 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:34.615 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:09:34.615 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:09:34.615 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.615 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.615 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:34.615 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:09:34.615 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:09:34.615 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.615 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.615 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:09:34.615 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:09:34.615 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:09:34.615 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.615 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.615 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.615 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:09:34.615 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:09:34.615 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.615 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.615 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.615 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:09:34.615 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:09:34.615 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.615 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.615 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.615 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:09:34.615 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:09:34.615 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.615 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.615 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:34.615 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:09:34.615 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:09:34.615 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.615 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.615 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:34.615 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:09:34.615 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:09:34.615 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.615 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.615 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.615 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:09:34.615 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:09:34.615 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.615 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.615 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.615 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:09:34.615 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:09:34.615 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.615 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.615 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.615 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:09:34.615 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:09:34.615 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.615 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.615 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.615 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:09:34.615 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:09:34.615 13:43:48 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:34.615 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.615 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.615 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:09:34.615 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:09:34.616 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.616 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.616 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.616 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:09:34.616 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:09:34.616 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.616 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.616 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.616 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:09:34.616 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:09:34.616 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.616 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.616 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.616 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:09:34.616 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:09:34.616 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.616 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.616 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.616 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:09:34.616 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:09:34.616 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.616 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.616 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.616 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:09:34.616 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:09:34.616 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.616 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.616 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.616 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:09:34.616 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:09:34.616 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.616 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.616 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.616 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:09:34.616 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:09:34.616 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.616 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.616 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.616 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:09:34.616 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:09:34.616 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.616 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.616 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.616 13:43:48 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:09:34.616 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:09:34.616 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.616 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.616 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.616 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:09:34.616 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:09:34.616 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.616 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.616 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.616 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:09:34.616 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:09:34.616 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.616 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.616 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.616 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:09:34.616 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:09:34.616 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.616 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.616 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.616 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:09:34.616 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:09:34.616 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.616 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.616 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.616 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:09:34.616 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:09:34.616 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.616 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.616 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.616 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:09:34.616 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:09:34.616 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.616 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.616 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.616 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:09:34.616 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:09:34.616 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.616 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.616 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.616 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:09:34.616 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:09:34.616 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.616 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.616 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.616 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:09:34.616 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:09:34.616 13:43:48 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:34.616 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.616 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.616 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:09:34.616 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:09:34.616 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.616 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.616 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.616 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:09:34.616 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:09:34.616 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.616 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.616 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:34.616 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:09:34.616 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:09:34.616 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.616 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.616 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:34.616 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:09:34.616 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:09:34.616 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.616 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.616 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.616 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:09:34.616 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:09:34.616 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.616 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.616 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:34.616 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:09:34.616 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:09:34.616 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.616 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.616 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:34.616 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:09:34.616 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:09:34.616 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.616 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.616 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.616 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:09:34.616 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:09:34.616 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.616 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.616 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.616 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:09:34.616 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:09:34.616 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.616 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.616 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 
0x7 ]] 00:09:34.616 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:09:34.616 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:09:34.616 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.616 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.616 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.616 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:09:34.616 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:09:34.616 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.616 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.616 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.616 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:09:34.616 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:09:34.616 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.616 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.616 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.616 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:09:34.616 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:09:34.616 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.616 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.616 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.616 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:09:34.616 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:09:34.616 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.616 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.616 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.616 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:09:34.616 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:09:34.616 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.616 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.616 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:34.616 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:09:34.616 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:09:34.617 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.617 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.617 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:34.617 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:09:34.617 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:09:34.617 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.617 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.617 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.617 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:09:34.617 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:09:34.617 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.617 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.617 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.617 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:09:34.617 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:09:34.617 13:43:48 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:34.617 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.617 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.617 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:09:34.617 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:09:34.617 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.617 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.617 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:09:34.617 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:09:34.617 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:09:34.617 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.617 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.617 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.617 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:09:34.617 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:09:34.617 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.617 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.617 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.617 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:09:34.617 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:09:34.617 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.617 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.617 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.617 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:09:34.617 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:09:34.617 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.617 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.617 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.617 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:09:34.617 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:09:34.617 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.617 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.617 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.617 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:09:34.617 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:09:34.617 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.617 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.617 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.617 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:09:34.617 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:09:34.617 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.617 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.617 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:34.617 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:34.617 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:34.617 
13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.617 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.617 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:34.617 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:34.617 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:34.617 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.617 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.617 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:34.617 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:09:34.617 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:09:34.617 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.617 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.617 13:43:48 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:09:34.617 13:43:48 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:34.617 13:43:48 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:09:34.617 13:43:48 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:09:34.617 13:43:48 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:09:34.617 13:43:48 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:09:34.617 13:43:48 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:34.617 13:43:48 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:09:34.617 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.617 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.617 13:43:48 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:09:34.617 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:34.617 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.617 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.617 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:34.617 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:09:34.617 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:09:34.617 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.617 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.617 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:34.617 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:09:34.617 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:09:34.617 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.617 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.617 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:34.617 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:09:34.617 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:09:34.617 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.617 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.617 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:34.617 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:09:34.617 13:43:48 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:09:34.617 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.617 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.617 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:34.617 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:09:34.617 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:09:34.617 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.617 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.617 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:34.617 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:09:34.617 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:09:34.617 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.617 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.617 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:34.617 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:09:34.617 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:09:34.617 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.617 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.617 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:34.617 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:09:34.617 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:09:34.617 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.617 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.617 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.617 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:09:34.617 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:09:34.617 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.617 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.617 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.617 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:09:34.617 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:09:34.617 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.617 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.617 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.617 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:09:34.617 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:09:34.617 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.617 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.617 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.617 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:09:34.617 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:09:34.617 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.617 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.617 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:34.617 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:09:34.617 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:09:34.617 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.617 13:43:48 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:34.617 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.617 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:09:34.617 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:09:34.617 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.617 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.617 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.617 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:09:34.617 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:09:34.617 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.617 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.617 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.617 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:09:34.617 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:09:34.618 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.618 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.618 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.618 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:09:34.618 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:09:34.618 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.618 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.618 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.618 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:09:34.618 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:09:34.618 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.618 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.618 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.618 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:09:34.618 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:09:34.618 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.618 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.618 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.618 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:09:34.618 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:09:34.618 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.618 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.618 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.618 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:09:34.618 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:09:34.618 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.618 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.618 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.618 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:09:34.618 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:09:34.618 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.618 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.618 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.618 13:43:48 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme0n1[npwa]="0"' 00:09:34.618 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:09:34.618 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.618 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.618 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.618 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:09:34.618 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:09:34.618 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.618 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.618 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.618 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:09:34.618 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:09:34.618 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.618 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.618 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.618 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:09:34.618 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:09:34.618 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.618 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.618 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:34.618 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:09:34.618 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:09:34.618 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.618 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.618 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:34.618 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:09:34.618 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:09:34.618 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.618 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.618 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:34.618 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:09:34.618 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:09:34.618 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.618 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.618 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.618 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:09:34.618 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:09:34.618 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.618 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.618 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.618 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:09:34.618 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:09:34.618 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.618 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.618 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.618 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:09:34.618 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:09:34.618 13:43:48 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=:
00:09:34.618 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:34.618 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:34.618 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"'
00:09:34.618 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0
00:09:34.618 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:34.618 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:34.618 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:34.618 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"'
00:09:34.618 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0
00:09:34.618 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:34.618 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:34.618 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]]
00:09:34.618 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"'
00:09:34.618 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000
00:09:34.618 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:34.618 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:34.618 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]]
00:09:34.618 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"'
00:09:34.618 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000
00:09:34.618 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:34.618 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:34.618 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]]
00:09:34.618 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "'
00:09:34.618 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 '
00:09:34.618 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:34.618 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:34.618 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]]
00:09:34.618 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "'
00:09:34.618 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 '
00:09:34.618 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:34.618 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:34.618 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]]
00:09:34.618 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "'
00:09:34.618 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 '
00:09:34.618 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:34.618 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:34.618 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]]
00:09:34.618 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "'
00:09:34.618 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 '
00:09:34.618 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:34.618 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:34.618 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]]
00:09:34.618 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"'
00:09:34.618 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)'
00:09:34.618 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:34.618 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:34.618 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]]
00:09:34.618 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "'
00:09:34.618 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 '
00:09:34.618 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:34.618 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:34.618 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]]
00:09:34.618 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "'
00:09:34.618 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 '
00:09:34.618 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:34.618 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:34.618 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]]
00:09:34.618 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "'
00:09:34.618 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 '
00:09:34.618 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:34.618 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:34.618 13:43:48 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1
00:09:34.618 13:43:48 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0
00:09:34.618 13:43:48 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns
00:09:34.618 13:43:48 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0
00:09:34.618 13:43:48 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0
00:09:34.618 13:43:48 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:09:34.618 13:43:48 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]]
00:09:34.618 13:43:48 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0
00:09:34.618 13:43:48 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0
00:09:34.618 13:43:48 nvme_fdp -- scripts/common.sh@18 -- # local i
00:09:34.618 13:43:48 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]]
00:09:34.619 13:43:48 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]]
00:09:34.619 13:43:48 nvme_fdp -- scripts/common.sh@27 -- # return 0
00:09:34.619 13:43:48 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1
00:09:34.619 13:43:48 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1
00:09:34.619 13:43:48 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1 reg val
00:09:34.619 13:43:48 nvme_fdp -- nvme/functions.sh@18 -- # shift
00:09:34.619 13:43:48 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1=()'
00:09:34.619 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:34.619 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:34.619 13:43:48 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1
00:09:34.619 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]]
00:09:34.619 13:43:48 nvme_fdp -- nvme/functions.sh@21 --
IFS=: 00:09:34.619 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.619 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:34.619 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:09:34.619 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:09:34.619 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.619 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.619 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:34.619 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:09:34.619 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:09:34.619 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.619 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.619 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:09:34.619 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:09:34.619 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:09:34.619 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.619 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.619 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:34.619 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:09:34.619 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:09:34.619 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.619 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.619 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:34.619 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:09:34.619 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:09:34.619 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.619 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.619 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:34.619 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:09:34.619 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:09:34.619 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.619 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.619 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:34.619 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:09:34.619 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:09:34.619 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.619 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.619 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.619 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:09:34.619 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:09:34.619 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.619 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.619 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:34.619 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:09:34.619 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:09:34.619 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.619 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.619 13:43:48 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.619 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:09:34.619 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:09:34.619 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.619 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.619 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:34.619 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:09:34.619 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:09:34.619 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.619 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.619 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.619 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:09:34.619 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:09:34.619 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.619 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.619 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.619 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:09:34.619 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:09:34.619 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.619 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.619 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:34.619 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:09:34.619 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:09:34.619 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.619 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.619 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:34.619 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:09:34.619 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:09:34.619 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.619 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.619 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.619 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:09:34.619 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:09:34.619 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.619 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.619 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:34.619 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:09:34.619 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:09:34.619 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.619 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.619 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:34.619 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:34.619 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:09:34.619 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.619 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.619 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.619 
13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:09:34.619 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:09:34.619 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.619 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.619 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.619 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:09:34.619 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:09:34.619 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.619 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.619 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.619 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:09:34.619 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:09:34.619 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.619 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.619 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.619 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:09:34.619 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:09:34.619 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.619 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.619 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.619 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:09:34.619 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:09:34.619 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.619 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.619 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.619 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:09:34.619 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:09:34.619 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.619 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.619 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:34.619 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:09:34.619 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:09:34.619 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.619 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.619 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:34.619 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:09:34.619 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:09:34.619 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.619 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.619 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:34.620 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:09:34.620 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:09:34.620 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.620 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.620 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:34.620 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:09:34.620 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:09:34.620 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- 
# IFS=: 00:09:34.620 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.620 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:34.620 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:09:34.620 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:09:34.620 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.620 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.620 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.620 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:09:34.620 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:09:34.620 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.620 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.620 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.620 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:09:34.620 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:09:34.620 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.620 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.620 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.620 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:09:34.620 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:09:34.620 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.620 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.620 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.620 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:09:34.620 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:09:34.620 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.620 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.620 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:34.620 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:09:34.620 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:09:34.620 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.620 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.620 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:34.620 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:09:34.620 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:09:34.620 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.620 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.620 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.620 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:09:34.620 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:09:34.620 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.620 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.620 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.620 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:09:34.620 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:09:34.620 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.620 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.620 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.620 13:43:48 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:09:34.620 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:09:34.620 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.620 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.620 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.620 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:09:34.620 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:09:34.620 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.620 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.620 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.620 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:09:34.620 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:09:34.620 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.620 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.620 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.620 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:09:34.620 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:09:34.620 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.620 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.620 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.620 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:09:34.620 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:09:34.620 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.620 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.620 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.620 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:09:34.620 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:09:34.620 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.620 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.620 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.620 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:09:34.620 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:09:34.620 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.620 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.620 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.620 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:09:34.620 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:09:34.620 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.620 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.620 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.620 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:09:34.620 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:09:34.620 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.620 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.620 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.620 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:09:34.620 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:09:34.620 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.620 
13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.620 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.620 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:09:34.620 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:09:34.620 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.620 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.620 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.620 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:09:34.620 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:09:34.620 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.620 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.620 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.620 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:09:34.620 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:09:34.620 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.620 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.620 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.620 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:09:34.620 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:09:34.620 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.620 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.620 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.620 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:09:34.620 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:09:34.620 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.620 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.620 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.620 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:09:34.620 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:09:34.620 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.620 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.620 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.620 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:09:34.620 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:09:34.620 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.620 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.620 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.620 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:09:34.620 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:09:34.620 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.620 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.620 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.620 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:09:34.620 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:09:34.620 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.620 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.620 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.620 13:43:48 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:09:34.620 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:09:34.620 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.620 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.620 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.620 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:09:34.620 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:09:34.620 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.620 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.620 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.620 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:09:34.620 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:09:34.620 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.620 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.620 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.620 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:09:34.620 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:09:34.620 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.620 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.621 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:34.621 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:09:34.621 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:09:34.621 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.621 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.621 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:34.621 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:09:34.621 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:09:34.621 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.621 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.621 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.621 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:09:34.621 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:09:34.621 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.621 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.621 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:34.621 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:09:34.621 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:09:34.621 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.621 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.621 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:34.621 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:09:34.621 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:09:34.621 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.621 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.621 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.621 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:09:34.621 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:09:34.621 13:43:48 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:34.621 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.621 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.621 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:09:34.621 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:09:34.621 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.621 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.621 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:34.621 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:09:34.621 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:09:34.621 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.621 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.621 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.621 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:09:34.621 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:09:34.621 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.621 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.621 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.621 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:09:34.621 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:09:34.621 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.621 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.621 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.621 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:09:34.621 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:09:34.621 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.621 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.621 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.621 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:09:34.621 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:09:34.621 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.621 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.621 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.621 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:09:34.621 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:09:34.621 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.621 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.621 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:34.621 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:09:34.621 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:09:34.621 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.621 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.621 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:34.621 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:09:34.621 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:09:34.621 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.621 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.621 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.621 13:43:48 
nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:09:34.621 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:09:34.621 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.621 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.621 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.621 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:09:34.621 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:09:34.621 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.621 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.621 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.621 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:09:34.621 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:09:34.621 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.621 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.621 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:09:34.621 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:09:34.621 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:09:34.621 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.621 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.621 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.621 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:09:34.621 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:09:34.621 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.621 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.621 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.621 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:09:34.621 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:09:34.621 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.621 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.621 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.621 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:09:34.621 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:09:34.621 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.621 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.621 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.621 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:09:34.621 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:09:34.621 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.621 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.621 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.621 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:09:34.621 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:09:34.621 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.621 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.621 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.621 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:09:34.621 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme1[ofcs]=0
00:09:34.621 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:34.621 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:34.621 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]]
00:09:34.621 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"'
00:09:34.621 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0'
00:09:34.621 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:34.621 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:34.621 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]]
00:09:34.621 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"'
00:09:34.621 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-'
00:09:34.621 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:34.621 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:34.621 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]]
00:09:34.621 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"'
00:09:34.621 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=-
00:09:34.621 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:34.621 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:34.621 13:43:48 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns
00:09:34.621 13:43:48 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"*
00:09:34.621 13:43:48 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]]
00:09:34.621 13:43:48 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1
00:09:34.621 13:43:48 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1
00:09:34.621 13:43:48 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val
00:09:34.621 13:43:48 nvme_fdp -- nvme/functions.sh@18 -- # shift
00:09:34.621 13:43:48 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()'
00:09:34.621 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:34.621 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:34.621 13:43:48 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1
00:09:34.621 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]]
00:09:34.621 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:34.621 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:34.621 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]]
00:09:34.621 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"'
00:09:34.621 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a
00:09:34.621 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:34.621 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:34.621 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]]
00:09:34.621 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"'
00:09:34.621 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a
00:09:34.621 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:34.621 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:34.621 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]]
00:09:34.622 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"'
00:09:34.622 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a
00:09:34.622 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:34.622 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:34.622 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]]
00:09:34.622 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"'
00:09:34.622 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14
00:09:34.622 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:34.622 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:34.622 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]]
00:09:34.622 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"'
00:09:34.622 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7
00:09:34.622 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:34.622 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:34.622 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]]
00:09:34.622 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"'
00:09:34.622 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7
00:09:34.622 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:34.622 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:34.622 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]]
00:09:34.622 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"'
00:09:34.622 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3
00:09:34.622 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:34.622 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:34.622 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]]
00:09:34.622 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"'
00:09:34.622 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f
00:09:34.622 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:34.622 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:34.622 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:34.622 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"'
00:09:34.622 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dps]=0
00:09:34.622 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:34.622 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:34.622 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:34.622 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"'
00:09:34.622 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0
00:09:34.622 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:34.622 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:34.622 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:34.622 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"'
00:09:34.622 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0
00:09:34.622 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:34.622 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:34.622 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:34.622 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"'
00:09:34.622 13:43:48 nvme_fdp --
nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:09:34.622 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.622 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.622 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:34.622 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:09:34.622 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:09:34.622 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.622 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.622 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.622 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:09:34.622 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:09:34.622 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.622 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.622 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.622 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:09:34.622 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:09:34.622 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.622 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.622 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.622 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:09:34.622 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:09:34.622 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.622 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.622 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.622 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:09:34.622 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:09:34.622 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.622 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.622 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.622 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:09:34.622 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:09:34.622 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.622 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.622 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.622 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:09:34.622 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:09:34.622 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.622 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.622 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.622 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:09:34.622 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:09:34.622 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.622 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.622 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.622 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:09:34.622 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:09:34.622 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.622 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- 
# read -r reg val 00:09:34.622 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.622 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:09:34.622 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:09:34.622 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.622 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.622 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.622 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:09:34.622 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:09:34.622 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.622 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.622 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.622 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:09:34.622 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:09:34.622 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.622 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.622 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.622 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:09:34.622 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:09:34.622 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.622 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.622 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.622 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:09:34.622 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:09:34.622 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.622 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.622 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:34.622 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:09:34.622 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:09:34.622 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.622 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.622 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:34.622 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:09:34.622 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:09:34.622 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.622 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.622 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:34.622 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:09:34.622 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:09:34.622 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.622 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.622 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.622 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:09:34.622 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:09:34.622 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.622 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.622 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.622 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme1n1[anagrpid]="0"' 00:09:34.622 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:09:34.622 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.622 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.622 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.622 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:09:34.622 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:09:34.622 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.622 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.622 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.622 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:09:34.622 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:09:34.622 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.622 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.622 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.622 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:09:34.622 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:09:34.622 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.622 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.622 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:34.622 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:09:34.622 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:09:34.622 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.622 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.622 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:34.622 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:09:34.622 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:09:34.623 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.623 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.623 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:34.623 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:34.623 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:34.623 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.623 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.623 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:34.623 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:34.623 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:34.623 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.623 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.623 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:34.623 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:34.623 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:34.623 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.623 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r 
reg val 00:09:34.623 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:34.623 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:34.623 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:34.623 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.623 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.623 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:09:34.623 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:09:34.623 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:09:34.623 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.623 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.623 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:34.623 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:34.623 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:34.623 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.623 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.623 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:34.623 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:34.623 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:34.623 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.623 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.623 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:09:34.623 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:09:34.623 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:09:34.623 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.623 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.623 13:43:48 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:09:34.623 13:43:48 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:09:34.623 13:43:48 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:09:34.623 13:43:48 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:09:34.623 13:43:48 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:09:34.623 13:43:48 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:34.623 13:43:48 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:09:34.623 13:43:48 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:09:34.623 13:43:48 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:09:34.623 13:43:48 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:34.623 13:43:48 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:09:34.623 13:43:48 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:34.623 13:43:48 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:34.623 13:43:48 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:09:34.623 13:43:48 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:09:34.623 13:43:48 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:09:34.623 
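The trace above is the shared nvme_get helper doing its work: it runs nvme-cli against the device, splits each "name : value" output line on the colon with IFS=: read -r reg val, and evals the pair into a bash associative array named after the device (nvme1, nvme1n1, ...). A minimal sketch of that pattern, using a hypothetical nvme_get_sketch name rather than the verbatim nvme/functions.sh source:

    nvme_get_sketch() {
        local ref=$1 subcmd=$2 dev=$3 reg val
        local -gA "$ref=()"               # e.g. nvme2=(), as in the trace
        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}      # "lbaf  7 " -> "lbaf7"
            val=${val# }                  # drop the space after ':'
            [[ -n $reg && -n $val ]] || continue
            eval "${ref}[$reg]=\"$val\""  # e.g. nvme2[vid]=0x1b36
        done < <(/usr/local/src/nvme-cli/nvme "$subcmd" "$dev")
    }
    # usage: nvme_get_sketch nvme2 id-ctrl /dev/nvme2; echo "${nvme2[vid]}"

The [[ -n ... ]] guard mirrors the checks logged at functions.sh@22: header lines with no value are skipped rather than stored.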
00:09:34.623 13:43:48 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 parsed into nvme2=():
00:09:34.623 13:43:48 nvme_fdp -- #   vid=0x1b36 ssvid=0x1af4 sn='12342 ' mn='QEMU NVMe Ctrl ' fr='8.0.0 '
00:09:34.623 13:43:48 nvme_fdp -- #   rab=6 ieee=525400 cmic=0 mdts=7 cntlid=0 ver=0x10400 rtd3r=0 rtd3e=0
00:09:34.624 13:43:48 nvme_fdp -- #   oaes=0x100 ctratt=0x8000 rrls=0 cntrltype=1 fguid=00000000-0000-0000-0000-000000000000
00:09:34.624 13:43:48 nvme_fdp -- #   crdt1=0 crdt2=0 crdt3=0 nvmsr=0 vwci=0 mec=0 oacs=0x12a acl=3 aerl=3
00:09:34.624 13:43:48 nvme_fdp -- #   frmw=0x3 lpa=0x7 elpe=0 npss=0 avscc=0 apsta=0 wctemp=343 cctemp=373
00:09:34.624 13:43:48 nvme_fdp -- #   mtfa=0 hmpre=0 hmmin=0 tnvmcap=0 unvmcap=0 rpmbs=0 edstt=0 dsto=0 fwug=0 kas=0
00:09:34.624 13:43:48 nvme_fdp -- #   hctma=0 mntmt=0 mxtmt=0 sanicap=0 hmminds=0 hmmaxd=0 nsetidmax=0 endgidmax=0
00:09:34.625 13:43:48 nvme_fdp -- #   anatt=0 anacap=0 anagrpmax=0 nanagrpid=0 pels=0 domainid=0 megcap=0
00:09:34.625 13:43:48 nvme_fdp -- #   sqes=0x66 cqes=0x44 maxcmd=0 nn=256 oncs=0x15d fuses=0 fna=0 vwc=0x7
00:09:34.625 13:43:48 nvme_fdp -- #   awun=0 awupf=0 icsvscc=0 nwpc=0 acwu=0 ocfs=0x3 sgls=0x1 mnan=0 maxdna=0 maxcna=0
00:09:34.625 13:43:48 nvme_fdp -- #   subnqn=nqn.2019-08.org.qemu:12342 ioccsz=0 iorcsz=0 icdoff=0 fcatt=0 msdbd=0 ofcs=0
00:09:34.626 13:43:48 nvme_fdp -- #   ps0='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0'
00:09:34.626 13:43:48 nvme_fdp -- #   rwt='0 rwl:0 idle_power:- active_power:-' active_power_workload=-
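With a controller fully parsed, the enumeration loop registers it in a set of global maps, as logged for nvme1 above: ctrls[nvme1]=nvme1, nvmes[nvme1]=nvme1_ns, bdfs[nvme1]=0000:00:10.0, ordered_ctrls[1]=nvme1. A hedged sketch of how later test code could walk those maps (print_ctrls_sketch is a hypothetical helper, not part of functions.sh):

    declare -A ctrls nvmes bdfs          # filled by the enumeration loop
    declare -a ordered_ctrls

    print_ctrls_sketch() {
        local ctrl
        for ctrl in "${ordered_ctrls[@]}"; do
            local -n regs=$ctrl          # nameref into e.g. the nvme2 array
            printf '%s @ %s subnqn=%s\n' \
                "$ctrl" "${bdfs[$ctrl]}" "${regs[subnqn]}"
        done
    }
    # here this would print, e.g.: nvme2 @ 0000:00:12.0 subnqn=nqn.2019-08.org.qemu:12342

The nameref is the same mechanism the loop itself uses (local -n _ctrl_ns=nvme2_ns) to fill the per-controller namespace map.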
00:09:34.626 13:43:48 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 parsed into nvme2n1=():
00:09:34.626 13:43:48 nvme_fdp -- #   nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4
00:09:34.626 13:43:48 nvme_fdp -- #   mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1
00:09:34.626 13:43:48 nvme_fdp -- #   nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0
00:09:34.627 13:43:48 nvme_fdp -- #   npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127
00:09:34.627 13:43:48 nvme_fdp -- #   nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0
00:09:34.627 13:43:48 nvme_fdp -- #   nguid=00000000000000000000000000000000 eui64=0000000000000000
00:09:34.627 13:43:48 nvme_fdp -- #   lbaf0='ms:0 lbads:9 rp:0 ' lbaf1='ms:8 lbads:9 rp:0 '
' 00:09:34.627 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.627 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.627 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:34.627 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:34.627 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:34.627 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.627 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.627 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:34.627 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:34.627 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:34.627 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.627 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.627 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:34.627 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:34.627 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:34.627 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.627 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.627 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:34.627 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:34.627 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:34.627 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.627 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.627 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:34.627 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:34.627 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:34.627 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.627 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.627 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:34.627 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:34.627 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:34.627 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.627 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.627 13:43:48 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:09:34.627 13:43:48 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:34.627 13:43:48 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:09:34.627 13:43:48 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:09:34.627 13:43:48 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:09:34.627 13:43:48 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:09:34.627 13:43:48 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:34.627 13:43:48 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:09:34.627 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.627 13:43:48 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:34.627 13:43:48 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:09:34.627 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:34.627 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.627 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.627 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:34.627 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:09:34.627 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:09:34.627 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.627 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.627 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:34.627 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:09:34.627 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:09:34.627 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.627 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.627 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:34.627 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:09:34.627 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:09:34.627 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.627 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.627 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:34.627 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:09:34.627 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:09:34.627 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.627 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.627 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:34.627 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:09:34.627 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:09:34.627 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.627 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.627 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:34.627 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:09:34.628 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:09:34.628 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.628 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.628 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:34.628 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:09:34.628 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:09:34.628 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.628 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.628 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:34.628 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:09:34.628 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:09:34.628 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.628 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.628 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.628 13:43:48 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:09:34.628 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:09:34.628 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.628 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.628 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.628 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:09:34.628 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:09:34.628 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.628 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.628 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.628 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:09:34.628 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:09:34.628 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.628 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.628 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.628 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:09:34.628 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:09:34.628 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.628 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.628 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:34.628 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:09:34.628 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:09:34.628 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.628 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.628 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.628 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:09:34.628 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:09:34.628 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.628 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.628 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.628 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:09:34.628 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:09:34.628 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.628 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.628 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.628 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:09:34.628 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:09:34.628 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.628 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.628 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.628 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:09:34.628 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:09:34.628 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.628 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.628 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.628 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:09:34.628 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:09:34.628 13:43:48 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:34.628 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.628 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.628 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:09:34.628 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:09:34.628 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.628 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.628 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.628 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:09:34.628 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:09:34.628 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.628 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.628 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.628 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:09:34.628 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:09:34.628 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.628 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.628 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.628 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:09:34.628 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:09:34.628 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.628 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.628 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.628 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:09:34.628 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:09:34.628 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.628 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.628 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.628 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:09:34.628 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:09:34.628 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.628 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.628 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.628 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:09:34.628 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:09:34.628 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.628 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.628 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.628 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:09:34.628 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:09:34.628 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.628 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.628 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:34.628 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:09:34.628 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:09:34.628 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.628 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.628 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ 
-n 128 ]] 00:09:34.628 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:09:34.628 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:09:34.628 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.628 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.628 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:34.628 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:09:34.628 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:09:34.628 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.628 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.628 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.628 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:09:34.628 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:09:34.628 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.628 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.628 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.628 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:09:34.628 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:09:34.628 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.628 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.628 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.628 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:09:34.628 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:09:34.628 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.628 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.628 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.628 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:09:34.628 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:09:34.628 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.628 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.628 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.628 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:09:34.628 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:09:34.628 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.628 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.628 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:34.628 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:09:34.628 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:09:34.628 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.628 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.628 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:34.628 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:09:34.628 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:09:34.628 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.628 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.628 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 
lbads:9 rp:0 ]] 00:09:34.628 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:34.628 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:34.628 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.628 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.628 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:34.628 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:34.628 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:34.628 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.628 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.628 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:34.628 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:34.629 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:34.629 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.629 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.629 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:34.629 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:34.629 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:34.629 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.629 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.629 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:34.629 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:34.629 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:34.629 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.629 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.629 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:34.629 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:34.629 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:34.629 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.629 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.629 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:34.629 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:34.629 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:34.629 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.629 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.629 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:34.629 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:34.629 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:34.629 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.629 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.629 13:43:48 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:09:34.629 13:43:48 nvme_fdp -- nvme/functions.sh@54 -- # for 
ns in "$ctrl/${ctrl##*/}n"* 00:09:34.629 13:43:48 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:09:34.629 13:43:48 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:09:34.629 13:43:48 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:09:34.629 13:43:48 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:09:34.629 13:43:48 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:34.629 13:43:48 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:09:34.629 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.629 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.629 13:43:48 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:09:34.629 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:34.629 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.629 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.629 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:34.629 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:09:34.629 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:09:34.629 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.629 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.629 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:34.629 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:09:34.629 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:09:34.629 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.629 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.629 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:34.629 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:09:34.629 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:09:34.629 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.629 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.629 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:34.629 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:09:34.629 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:09:34.629 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.629 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.629 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:34.629 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:09:34.629 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:09:34.629 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.629 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.629 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:34.629 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:09:34.629 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:09:34.629 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.629 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.629 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:34.629 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:09:34.629 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2n3[mc]=0x3 00:09:34.629 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.629 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.629 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:34.629 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:09:34.629 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:09:34.629 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.629 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.629 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.629 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:09:34.629 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:09:34.629 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.629 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.629 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.629 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:09:34.629 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:09:34.629 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.629 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.629 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.629 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:09:34.629 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:09:34.629 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.629 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.629 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.629 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:09:34.629 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:09:34.629 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.629 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.629 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:34.629 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:09:34.629 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:09:34.629 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.629 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.629 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.629 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:09:34.629 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:09:34.629 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.629 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.629 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.629 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:09:34.629 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:09:34.629 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.629 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.629 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.629 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:09:34.629 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:09:34.629 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.629 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
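Worth decoding from the values the trace keeps repeating: flbas=0x4 selects LBA format 4, and lbaf4 reads "ms:0 lbads:12 rp:0 (in use)", i.e. 2^12 = 4096-byte data blocks with no metadata. A hedged sketch of that derivation (variable names are local to the example; FLBAS bits 3:0 carry the format index for controllers with at most 16 formats, which holds here since nlbaf=7):

flbas=0x4
lbaf4='ms:0 lbads:12 rp:0 (in use)'
fmt_idx=$(( flbas & 0xf ))                  # 4 -> consult lbaf4
declare -n fmt_ref="lbaf${fmt_idx}"         # nameref onto the matching lbafN string
lbads=$(sed -n 's/.*lbads:\([0-9]*\).*/\1/p' <<<"$fmt_ref")
echo "data block size: $(( 1 << lbads )) bytes"   # 1 << 12 = 4096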
00:09:34.629 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.629 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:09:34.629 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:09:34.629 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.629 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.629 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.629 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:09:34.629 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:09:34.629 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.629 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.629 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.629 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:09:34.629 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:09:34.629 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.629 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.629 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.629 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:09:34.629 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:09:34.629 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.629 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.629 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.629 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:09:34.629 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:09:34.629 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.629 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.629 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.629 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:09:34.629 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:09:34.629 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.629 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.629 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.629 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:09:34.629 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:09:34.629 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.629 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.892 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.892 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:09:34.892 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:09:34.892 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.892 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.892 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.892 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:09:34.892 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:09:34.892 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.892 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.892 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.892 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:09:34.892 
13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:09:34.892 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.892 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.892 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:34.892 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:09:34.892 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:09:34.892 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.892 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.892 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:34.892 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:09:34.892 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:09:34.892 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.892 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.892 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:34.892 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:09:34.892 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:09:34.892 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.892 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.892 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.892 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:09:34.892 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:09:34.892 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.892 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.892 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.892 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:09:34.892 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:09:34.892 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.892 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.892 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.892 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:09:34.892 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:09:34.892 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.892 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.892 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.892 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:09:34.892 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:09:34.892 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.892 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.892 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.892 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:09:34.892 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:09:34.892 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.892 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.892 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:34.892 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:09:34.892 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2n3[nguid]=00000000000000000000000000000000 00:09:34.892 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.892 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.892 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:34.892 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:09:34.892 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:09:34.892 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.892 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.892 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:34.892 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:34.892 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:34.892 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.892 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.892 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:34.892 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:34.892 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:34.892 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.892 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.892 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:34.892 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:34.892 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:34.892 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.892 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.892 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:34.892 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:34.892 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:34.892 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.892 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.892 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:34.892 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:34.892 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:34.892 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.892 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.892 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:34.892 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:34.892 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:34.892 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.892 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.892 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:34.892 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:34.892 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:34.892 13:43:48 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:34.892 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.892 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:34.892 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:34.892 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:34.892 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.892 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.892 13:43:48 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:09:34.892 13:43:48 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:09:34.892 13:43:48 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:09:34.892 13:43:48 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:09:34.892 13:43:48 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:09:34.892 13:43:48 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:34.892 13:43:48 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:09:34.892 13:43:48 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:09:34.892 13:43:48 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:09:34.892 13:43:48 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:34.892 13:43:48 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:09:34.892 13:43:48 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:34.892 13:43:48 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:34.892 13:43:48 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:09:34.892 13:43:48 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:09:34.892 13:43:48 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:09:34.892 13:43:48 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:34.892 13:43:48 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:09:34.892 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.892 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.892 13:43:48 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:09:34.892 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:34.892 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.893 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.893 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:34.893 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:09:34.893 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:09:34.893 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.893 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.893 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:34.893 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:09:34.893 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:09:34.893 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.893 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.893 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:09:34.893 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:09:34.893 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:09:34.893 13:43:48 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:34.893 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.893 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:34.893 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:09:34.893 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:09:34.893 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.893 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.893 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:34.893 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:09:34.893 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:09:34.893 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.893 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.893 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:34.893 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:09:34.893 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:09:34.893 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.893 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.893 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:34.893 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:09:34.893 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:09:34.893 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.893 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.893 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:09:34.893 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:09:34.893 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:09:34.893 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.893 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.893 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:34.893 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:09:34.893 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:09:34.893 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.893 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.893 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.893 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:09:34.893 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:09:34.893 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.893 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.893 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:34.893 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:09:34.893 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:09:34.893 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.893 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.893 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.893 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:09:34.893 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:09:34.893 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.893 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.893 13:43:48 
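The id-ctrl pass for nvme3 above records mdts=7. MDTS is a shift count applied to the controller's minimum memory page size (CAP.MPSMIN), so the largest single transfer this controller accepts is 2^7 pages. A small arithmetic sketch, assuming the usual 4096-byte MPSMIN for this QEMU controller (CAP itself is not dumped in this log, so the page size is an assumption):

mdts=7
mps_min=4096                               # assumed CAP.MPSMIN page size, not read from this log
echo "max transfer: $(( (1 << mdts) * mps_min )) bytes"   # 128 * 4096 = 524288 (512 KiB)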
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.893 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:09:34.893 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:09:34.893 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.893 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.893 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:34.893 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:09:34.893 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:09:34.893 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.893 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.893 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:09:34.893 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:09:34.893 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:09:34.893 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.893 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.893 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.893 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:09:34.893 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:09:34.893 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.893 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.893 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:34.893 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:09:34.893 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:09:34.893 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.893 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.893 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:34.893 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:34.893 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:09:34.893 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.893 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.893 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.893 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:09:34.893 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:09:34.893 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.893 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.893 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.893 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:09:34.893 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:09:34.893 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.893 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.893 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.893 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:09:34.893 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:09:34.893 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.893 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.893 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.893 
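ctratt=0x88010, captured just above, is the field this suite ultimately cares about: bit 19 of CTRATT advertises Flexible Data Placement support in NVMe 2.0+, and it is set here (0x88010 & 0x80000 != 0). A sketch of that capability test, with a hypothetical helper name rather than whatever nvme/functions.sh actually calls it:

ctrl_supports_fdp() {
  local ctratt=$1
  (( ctratt & (1 << 19) ))                 # CTRATT bit 19: Flexible Data Placement
}
ctrl_supports_fdp 0x88010 && echo "FDP-capable controller"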
13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:09:34.893 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:09:34.893 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.893 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.893 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.893 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:09:34.893 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:09:34.893 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.893 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.893 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.893 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:09:34.893 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:09:34.893 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.893 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.893 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:34.893 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:09:34.893 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:09:34.893 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.893 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.893 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:34.893 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:09:34.893 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:09:34.893 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.893 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.893 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:34.893 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:09:34.893 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:09:34.893 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.893 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.893 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:34.893 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:09:34.893 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:09:34.893 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.893 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.893 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:34.893 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:09:34.893 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:09:34.893 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.893 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.893 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.893 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:09:34.893 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:09:34.893 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.893 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.893 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.893 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:09:34.893 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:09:34.893 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:09:34.893 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.893 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.893 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:09:34.893 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:09:34.893 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.893 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.893 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.893 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:09:34.893 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:09:34.893 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.893 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.894 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:34.894 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:09:34.894 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:09:34.894 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.894 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.894 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:34.894 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:09:34.894 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:09:34.894 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.894 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.894 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.894 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:09:34.894 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:09:34.894 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.894 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.894 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.894 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:09:34.894 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:09:34.894 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.894 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.894 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.894 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:09:34.894 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:09:34.894 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.894 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.894 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.894 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:09:34.894 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:09:34.894 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.894 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.894 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.894 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:09:34.894 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:09:34.894 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.894 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.894 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.894 13:43:48 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:09:34.894 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:09:34.894 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.894 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.894 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.894 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:09:34.894 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:09:34.894 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.894 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.894 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.894 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:09:34.894 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:09:34.894 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.894 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.894 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.894 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:09:34.894 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:09:34.894 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.894 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.894 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.894 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:09:34.894 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:09:34.894 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.894 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.894 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.894 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:09:34.894 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:09:34.894 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.894 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.894 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.894 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:09:34.894 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:09:34.894 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.894 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.894 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.894 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:09:34.894 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:09:34.894 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.894 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.894 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.894 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:09:34.894 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:09:34.894 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.894 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.894 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.894 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:09:34.894 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:09:34.894 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.894 
13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.894 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.894 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:09:34.894 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:09:34.894 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.894 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.894 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.894 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:09:34.894 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:09:34.894 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.894 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.894 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:34.894 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:09:34.894 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:09:34.894 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.894 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.894 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.894 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:09:34.894 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:09:34.894 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.894 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.894 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.894 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:09:34.894 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:09:34.894 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.894 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.894 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.894 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:09:34.894 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:09:34.894 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.894 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.894 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.894 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:09:34.894 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:09:34.894 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.894 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.894 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.894 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:09:34.894 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:09:34.894 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.894 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.894 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.894 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:09:34.894 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:09:34.894 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.894 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.894 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.894 13:43:48 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:09:34.894 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:09:34.894 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.894 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.894 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:34.894 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:09:34.894 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:09:34.894 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.894 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.894 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:34.894 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:09:34.894 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:09:34.894 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.894 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.894 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.894 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:09:34.894 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:09:34.894 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.894 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.894 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:34.894 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:09:34.894 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:09:34.894 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.894 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.894 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:34.894 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:09:34.894 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:09:34.894 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.894 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.894 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.894 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:09:34.894 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:09:34.894 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.894 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.894 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.894 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:09:34.894 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:09:34.895 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.895 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.895 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:34.895 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:09:34.895 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:09:34.895 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.895 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.895 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.895 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:09:34.895 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:09:34.895 13:43:48 nvme_fdp -- nvme/functions.sh@21 
-- # IFS=: 00:09:34.895 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.895 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.895 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:09:34.895 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:09:34.895 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.895 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.895 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.895 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:09:34.895 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:09:34.895 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.895 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.895 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.895 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:09:34.895 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:09:34.895 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.895 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.895 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.895 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:09:34.895 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:09:34.895 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.895 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.895 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:34.895 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:09:34.895 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:09:34.895 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.895 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.895 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:34.895 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:09:34.895 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:09:34.895 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.895 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.895 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.895 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:09:34.895 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:09:34.895 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.895 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.895 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.895 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:09:34.895 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:09:34.895 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.895 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.895 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.895 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:09:34.895 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:09:34.895 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.895 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.895 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 
00:09:34.895 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:09:34.895 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:09:34.895 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.895 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.895 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.895 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:09:34.895 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:09:34.895 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.895 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.895 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.895 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:09:34.895 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:09:34.895 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.895 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.895 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.895 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:09:34.895 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:09:34.895 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.895 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.895 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.895 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:09:34.895 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:09:34.895 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.895 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.895 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.895 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:09:34.895 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:09:34.895 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.895 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.895 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.895 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:09:34.895 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:09:34.895 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.895 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.895 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:34.895 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:34.895 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:34.895 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.895 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.895 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:34.895 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:34.895 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:34.895 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.895 13:43:48 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:34.895 13:43:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:34.895 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:09:34.895 13:43:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:09:34.895 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.895 13:43:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.895 13:43:48 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:09:34.895 13:43:48 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:09:34.895 13:43:48 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:09:34.895 13:43:48 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:09:34.895 13:43:48 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:09:34.895 13:43:48 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:09:34.895 13:43:48 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:09:34.895 13:43:48 nvme_fdp -- nvme/functions.sh@204 -- # local _ctrls feature=fdp 00:09:34.895 13:43:48 nvme_fdp -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:09:34.895 13:43:48 nvme_fdp -- nvme/functions.sh@206 -- # get_ctrls_with_feature fdp 00:09:34.895 13:43:48 nvme_fdp -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:09:34.895 13:43:48 nvme_fdp -- nvme/functions.sh@194 -- # local ctrl feature=fdp 00:09:34.895 13:43:48 nvme_fdp -- nvme/functions.sh@196 -- # type -t ctrl_has_fdp 00:09:34.895 13:43:48 nvme_fdp -- nvme/functions.sh@196 -- # [[ function == function ]] 00:09:34.895 13:43:48 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:34.895 13:43:48 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme1 00:09:34.895 13:43:48 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme1 ctratt 00:09:34.895 13:43:48 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme1 00:09:34.895 13:43:48 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme1 00:09:34.895 13:43:48 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme1 ctratt 00:09:34.895 13:43:48 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:09:34.895 13:43:48 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:09:34.895 13:43:48 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:09:34.895 13:43:48 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:09:34.895 13:43:48 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:34.895 13:43:48 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:34.895 13:43:48 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:34.895 13:43:48 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:34.895 13:43:48 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme0 00:09:34.895 13:43:48 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme0 ctratt 00:09:34.895 13:43:48 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme0 00:09:34.895 13:43:48 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme0 00:09:34.895 13:43:48 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme0 ctratt 00:09:34.895 13:43:48 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:09:34.895 13:43:48 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:09:34.895 13:43:48 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:09:34.895 13:43:48 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 
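The long trace above is nvme/functions.sh digesting the identify output for nvme3 one register at a time: each "reg : val" line is split on the first colon with IFS=: and read -r, and the value is stored into a bash associative array through eval. A minimal standalone sketch of that parsing pattern, using a few hypothetical input lines in place of the real identify dump:

    #!/usr/bin/env bash
    # Minimal sketch of the reg/val loop traced above (hypothetical input;
    # the real loop consumes the controller's identify output).
    declare -A nvme3
    while IFS=: read -r reg val; do
        reg=${reg// /}               # drop the padding around the register name
        val=${val# }                 # drop the leading space of the value
        [[ -n $reg && -n $val ]] || continue
        eval "nvme3[$reg]=\"$val\""  # same eval pattern as nvme/functions.sh
    done <<'EOF'
    wctemp : 343
    cctemp : 373
    unvmcap : 0
    EOF
    echo "${nvme3[wctemp]} ${nvme3[cctemp]}"   # -> 343 373

The eval indirection is what lets a single loop populate nvme0 through nvme3 with the target array name chosen at runtime.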
00:09:34.895 13:43:48 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:34.895 13:43:48 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:34.895 13:43:48 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:34.895 13:43:48 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:34.895 13:43:48 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme3 00:09:34.895 13:43:48 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme3 ctratt 00:09:34.895 13:43:48 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme3 00:09:34.895 13:43:48 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme3 00:09:34.895 13:43:48 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme3 ctratt 00:09:34.895 13:43:48 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:09:34.895 13:43:48 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:09:34.895 13:43:48 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:09:34.895 13:43:48 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:09:34.895 13:43:48 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:09:34.895 13:43:48 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x88010 00:09:34.895 13:43:48 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:34.896 13:43:48 nvme_fdp -- nvme/functions.sh@199 -- # echo nvme3 00:09:34.896 13:43:48 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:34.896 13:43:48 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme2 00:09:34.896 13:43:48 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme2 ctratt 00:09:34.896 13:43:48 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme2 00:09:34.896 13:43:48 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme2 00:09:34.896 13:43:48 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme2 ctratt 00:09:34.896 13:43:48 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:09:34.896 13:43:48 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:09:34.896 13:43:48 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:09:34.896 13:43:48 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:09:34.896 13:43:48 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:34.896 13:43:48 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:34.896 13:43:48 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:34.896 13:43:48 nvme_fdp -- nvme/functions.sh@207 -- # (( 1 > 0 )) 00:09:34.896 13:43:48 nvme_fdp -- nvme/functions.sh@208 -- # echo nvme3 00:09:34.896 13:43:48 nvme_fdp -- nvme/functions.sh@209 -- # return 0 00:09:34.896 13:43:48 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:09:34.896 13:43:48 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 00:09:34.896 13:43:48 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:35.154 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:35.720 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:35.720 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:09:35.720 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:35.720 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:09:35.977 13:43:49 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:09:35.977 13:43:49 nvme_fdp -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:09:35.977 13:43:49 
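With all four controllers parsed, get_ctrls_with_feature walks them and keeps the ones whose CTRATT identify field has bit 19 set, the Flexible Data Placement capability bit: nvme0, nvme1 and nvme2 report 0x8000 and drop out, while nvme3's 0x88010 passes, so the test pins itself to nvme3 at 0000:00:13.0. A self-contained sketch of that bit test, using the CTRATT values from this run:

    # Sketch of the ctrl_has_fdp test traced above: FDP support is
    # advertised by bit 19 of the CTRATT identify field.
    ctrl_has_fdp() {
        local ctratt=$1
        (( ctratt & 1 << 19 ))
    }
    for ctratt in 0x8000 0x8000 0x88010 0x8000; do
        ctrl_has_fdp "$ctratt" && echo "$ctratt: FDP" || echo "$ctratt: no FDP"
    done
    # -> only 0x88010 reports FDP (0x88010 & 0x80000 == 0x80000)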
nvme_fdp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:35.977 13:43:49 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:09:35.977 ************************************ 00:09:35.977 START TEST nvme_flexible_data_placement 00:09:35.977 ************************************ 00:09:35.977 13:43:49 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:09:36.235 Initializing NVMe Controllers 00:09:36.235 Attaching to 0000:00:13.0 00:09:36.235 Controller supports FDP Attached to 0000:00:13.0 00:09:36.235 Namespace ID: 1 Endurance Group ID: 1 00:09:36.235 Initialization complete. 00:09:36.235 00:09:36.235 ================================== 00:09:36.235 == FDP tests for Namespace: #01 == 00:09:36.235 ================================== 00:09:36.235 00:09:36.235 Get Feature: FDP: 00:09:36.235 ================= 00:09:36.235 Enabled: Yes 00:09:36.235 FDP configuration Index: 0 00:09:36.235 00:09:36.235 FDP configurations log page 00:09:36.235 =========================== 00:09:36.235 Number of FDP configurations: 1 00:09:36.235 Version: 0 00:09:36.235 Size: 112 00:09:36.235 FDP Configuration Descriptor: 0 00:09:36.235 Descriptor Size: 96 00:09:36.235 Reclaim Group Identifier format: 2 00:09:36.235 FDP Volatile Write Cache: Not Present 00:09:36.235 FDP Configuration: Valid 00:09:36.235 Vendor Specific Size: 0 00:09:36.235 Number of Reclaim Groups: 2 00:09:36.235 Number of Reclaim Unit Handles: 8 00:09:36.235 Max Placement Identifiers: 128 00:09:36.235 Number of Namespaces Supported: 256 00:09:36.235 Reclaim unit Nominal Size: 6000000 bytes 00:09:36.235 Estimated Reclaim Unit Time Limit: Not Reported 00:09:36.235 RUH Desc #000: RUH Type: Initially Isolated 00:09:36.235 RUH Desc #001: RUH Type: Initially Isolated 00:09:36.235 RUH Desc #002: RUH Type: Initially Isolated 00:09:36.235 RUH Desc #003: RUH Type: Initially Isolated 00:09:36.235 RUH Desc #004: RUH Type: Initially Isolated 00:09:36.235 RUH Desc #005: RUH Type: Initially Isolated 00:09:36.235 RUH Desc #006: RUH Type: Initially Isolated 00:09:36.235 RUH Desc #007: RUH Type: Initially Isolated 00:09:36.235 00:09:36.235 FDP reclaim unit handle usage log page 00:09:36.235 ====================================== 00:09:36.235 Number of Reclaim Unit Handles: 8 00:09:36.235 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:09:36.235 RUH Usage Desc #001: RUH Attributes: Unused 00:09:36.235 RUH Usage Desc #002: RUH Attributes: Unused 00:09:36.235 RUH Usage Desc #003: RUH Attributes: Unused 00:09:36.235 RUH Usage Desc #004: RUH Attributes: Unused 00:09:36.235 RUH Usage Desc #005: RUH Attributes: Unused 00:09:36.235 RUH Usage Desc #006: RUH Attributes: Unused 00:09:36.235 RUH Usage Desc #007: RUH Attributes: Unused 00:09:36.235 00:09:36.235 FDP statistics log page 00:09:36.235 ======================= 00:09:36.235 Host bytes with metadata written: 1002029056 00:09:36.235 Media bytes with metadata written: 1002246144 00:09:36.235 Media bytes erased: 0 00:09:36.235 00:09:36.235 FDP Reclaim unit handle status 00:09:36.235 ============================== 00:09:36.235 Number of RUHS descriptors: 2 00:09:36.235 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000000464 00:09:36.235 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:09:36.235 00:09:36.235 FDP write on placement id: 0 success 00:09:36.235 00:09:36.235 Set Feature: Enabling FDP events on Placement handle:
#0 Success 00:09:36.235 00:09:36.235 IO mgmt send: RUH update for Placement ID: #0 Success 00:09:36.235 00:09:36.235 Get Feature: FDP Events for Placement handle: #0 00:09:36.235 ======================== 00:09:36.235 Number of FDP Events: 6 00:09:36.235 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:09:36.235 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:09:36.235 FDP Event: #2 Type: Ctrlr Reset Modified RUH's Enabled: Yes 00:09:36.235 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:09:36.235 FDP Event: #4 Type: Media Reallocated Enabled: No 00:09:36.235 FDP Event: #5 Type: Implicitly modified RUH Enabled: No 00:09:36.235 00:09:36.235 FDP events log page 00:09:36.236 =================== 00:09:36.236 Number of FDP events: 1 00:09:36.236 FDP Event #0: 00:09:36.236 Event Type: RU Not Written to Capacity 00:09:36.236 Placement Identifier: Valid 00:09:36.236 NSID: Valid 00:09:36.236 Location: Valid 00:09:36.236 Placement Identifier: 0 00:09:36.236 Event Timestamp: 6 00:09:36.236 Namespace Identifier: 1 00:09:36.236 Reclaim Group Identifier: 0 00:09:36.236 Reclaim Unit Handle Identifier: 0 00:09:36.236 00:09:36.236 FDP test passed 00:09:36.236 00:09:36.236 real 0m0.229s 00:09:36.236 user 0m0.066s 00:09:36.236 sys 0m0.060s 00:09:36.236 ************************************ 00:09:36.236 END TEST nvme_flexible_data_placement 00:09:36.236 ************************************ 00:09:36.236 13:43:49 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:36.236 13:43:49 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x 00:09:36.236 ************************************ 00:09:36.236 END TEST nvme_fdp 00:09:36.236 ************************************ 00:09:36.236 00:09:36.236 real 0m7.721s 00:09:36.236 user 0m1.052s 00:09:36.236 sys 0m1.371s 00:09:36.236 13:43:49 nvme_fdp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:36.236 13:43:49 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:09:36.236 13:43:49 -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]] 00:09:36.236 13:43:49 -- spdk/autotest.sh@236 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:09:36.236 13:43:49 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:36.236 13:43:49 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:36.236 13:43:49 -- common/autotest_common.sh@10 -- # set +x 00:09:36.236 ************************************ 00:09:36.236 START TEST nvme_rpc 00:09:36.236 ************************************ 00:09:36.236 13:43:49 nvme_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:09:36.236 * Looking for test storage... 
00:09:36.236 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:36.236 13:43:49 nvme_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:36.236 13:43:49 nvme_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:09:36.236 13:43:49 nvme_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:36.494 13:43:50 nvme_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:36.494 13:43:50 nvme_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:36.494 13:43:50 nvme_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:36.494 13:43:50 nvme_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:36.494 13:43:50 nvme_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:09:36.494 13:43:50 nvme_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:09:36.494 13:43:50 nvme_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:09:36.494 13:43:50 nvme_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:09:36.494 13:43:50 nvme_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:09:36.494 13:43:50 nvme_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:09:36.494 13:43:50 nvme_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:09:36.494 13:43:50 nvme_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:36.494 13:43:50 nvme_rpc -- scripts/common.sh@344 -- # case "$op" in 00:09:36.494 13:43:50 nvme_rpc -- scripts/common.sh@345 -- # : 1 00:09:36.494 13:43:50 nvme_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:36.494 13:43:50 nvme_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:36.494 13:43:50 nvme_rpc -- scripts/common.sh@365 -- # decimal 1 00:09:36.494 13:43:50 nvme_rpc -- scripts/common.sh@353 -- # local d=1 00:09:36.494 13:43:50 nvme_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:36.494 13:43:50 nvme_rpc -- scripts/common.sh@355 -- # echo 1 00:09:36.494 13:43:50 nvme_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:36.494 13:43:50 nvme_rpc -- scripts/common.sh@366 -- # decimal 2 00:09:36.494 13:43:50 nvme_rpc -- scripts/common.sh@353 -- # local d=2 00:09:36.494 13:43:50 nvme_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:36.494 13:43:50 nvme_rpc -- scripts/common.sh@355 -- # echo 2 00:09:36.494 13:43:50 nvme_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:36.494 13:43:50 nvme_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:36.494 13:43:50 nvme_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:36.494 13:43:50 nvme_rpc -- scripts/common.sh@368 -- # return 0 00:09:36.494 13:43:50 nvme_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:36.494 13:43:50 nvme_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:36.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:36.494 --rc genhtml_branch_coverage=1 00:09:36.494 --rc genhtml_function_coverage=1 00:09:36.494 --rc genhtml_legend=1 00:09:36.494 --rc geninfo_all_blocks=1 00:09:36.494 --rc geninfo_unexecuted_blocks=1 00:09:36.494 00:09:36.494 ' 00:09:36.494 13:43:50 nvme_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:36.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:36.494 --rc genhtml_branch_coverage=1 00:09:36.494 --rc genhtml_function_coverage=1 00:09:36.494 --rc genhtml_legend=1 00:09:36.494 --rc geninfo_all_blocks=1 00:09:36.494 --rc geninfo_unexecuted_blocks=1 00:09:36.494 00:09:36.494 ' 00:09:36.494 13:43:50 nvme_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 
00:09:36.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:36.494 --rc genhtml_branch_coverage=1 00:09:36.494 --rc genhtml_function_coverage=1 00:09:36.494 --rc genhtml_legend=1 00:09:36.494 --rc geninfo_all_blocks=1 00:09:36.494 --rc geninfo_unexecuted_blocks=1 00:09:36.494 00:09:36.494 ' 00:09:36.494 13:43:50 nvme_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:36.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:36.494 --rc genhtml_branch_coverage=1 00:09:36.494 --rc genhtml_function_coverage=1 00:09:36.494 --rc genhtml_legend=1 00:09:36.494 --rc geninfo_all_blocks=1 00:09:36.494 --rc geninfo_unexecuted_blocks=1 00:09:36.494 00:09:36.494 ' 00:09:36.494 13:43:50 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:36.494 13:43:50 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:09:36.494 13:43:50 nvme_rpc -- common/autotest_common.sh@1507 -- # bdfs=() 00:09:36.494 13:43:50 nvme_rpc -- common/autotest_common.sh@1507 -- # local bdfs 00:09:36.494 13:43:50 nvme_rpc -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs)) 00:09:36.494 13:43:50 nvme_rpc -- common/autotest_common.sh@1508 -- # get_nvme_bdfs 00:09:36.494 13:43:50 nvme_rpc -- common/autotest_common.sh@1496 -- # bdfs=() 00:09:36.494 13:43:50 nvme_rpc -- common/autotest_common.sh@1496 -- # local bdfs 00:09:36.494 13:43:50 nvme_rpc -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:36.494 13:43:50 nvme_rpc -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:36.494 13:43:50 nvme_rpc -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:09:36.494 13:43:50 nvme_rpc -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:09:36.494 13:43:50 nvme_rpc -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:36.494 13:43:50 nvme_rpc -- common/autotest_common.sh@1510 -- # echo 0000:00:10.0 00:09:36.494 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:36.494 13:43:50 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:09:36.494 13:43:50 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=65732 00:09:36.494 13:43:50 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:09:36.495 13:43:50 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:09:36.495 13:43:50 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 65732 00:09:36.495 13:43:50 nvme_rpc -- common/autotest_common.sh@831 -- # '[' -z 65732 ']' 00:09:36.495 13:43:50 nvme_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:36.495 13:43:50 nvme_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:36.495 13:43:50 nvme_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:36.495 13:43:50 nvme_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:36.495 13:43:50 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:36.495 [2024-10-15 13:43:50.229420] Starting SPDK v25.01-pre git sha1 5a8c76d99 / DPDK 24.03.0 initialization... 
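get_first_nvme_bdf, traced just above, feeds gen_nvme.sh's generated JSON config through jq to collect every controller address, then hands back the first one, which is how the test settles on 0000:00:10.0. Condensed into a few lines (same paths as this workspace):

    # Condensed sketch of the get_first_nvme_bdf helper traced above.
    rootdir=/home/vagrant/spdk_repo/spdk
    # gen_nvme.sh emits an SPDK JSON config; pull every traddr out of it.
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    (( ${#bdfs[@]} > 0 )) || { echo "no NVMe controllers found" >&2; exit 1; }
    echo "${bdfs[0]}"   # -> 0000:00:10.0 on this run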
00:09:36.495 [2024-10-15 13:43:50.229600] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65732 ] 00:09:36.752 [2024-10-15 13:43:50.396393] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:37.010 [2024-10-15 13:43:50.546622] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:37.010 [2024-10-15 13:43:50.546828] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:37.577 13:43:51 nvme_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:37.577 13:43:51 nvme_rpc -- common/autotest_common.sh@864 -- # return 0 00:09:37.577 13:43:51 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:09:37.834 Nvme0n1 00:09:37.834 13:43:51 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:09:37.834 13:43:51 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:09:37.834 request: 00:09:37.834 { 00:09:37.834 "bdev_name": "Nvme0n1", 00:09:37.834 "filename": "non_existing_file", 00:09:37.834 "method": "bdev_nvme_apply_firmware", 00:09:37.834 "req_id": 1 00:09:37.834 } 00:09:37.834 Got JSON-RPC error response 00:09:37.834 response: 00:09:37.834 { 00:09:37.834 "code": -32603, 00:09:37.834 "message": "open file failed." 00:09:37.834 } 00:09:37.834 13:43:51 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:09:37.834 13:43:51 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:09:37.834 13:43:51 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:09:38.091 13:43:51 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:09:38.091 13:43:51 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 65732 00:09:38.091 13:43:51 nvme_rpc -- common/autotest_common.sh@950 -- # '[' -z 65732 ']' 00:09:38.091 13:43:51 nvme_rpc -- common/autotest_common.sh@954 -- # kill -0 65732 00:09:38.091 13:43:51 nvme_rpc -- common/autotest_common.sh@955 -- # uname 00:09:38.091 13:43:51 nvme_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:38.091 13:43:51 nvme_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 65732 00:09:38.091 killing process with pid 65732 00:09:38.091 13:43:51 nvme_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:38.091 13:43:51 nvme_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:38.091 13:43:51 nvme_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 65732' 00:09:38.091 13:43:51 nvme_rpc -- common/autotest_common.sh@969 -- # kill 65732 00:09:38.091 13:43:51 nvme_rpc -- common/autotest_common.sh@974 -- # wait 65732 00:09:39.990 00:09:39.990 real 0m3.503s 00:09:39.990 user 0m6.578s 00:09:39.990 sys 0m0.531s 00:09:39.990 13:43:53 nvme_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:39.990 ************************************ 00:09:39.990 END TEST nvme_rpc 00:09:39.990 13:43:53 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:39.990 ************************************ 00:09:39.990 13:43:53 -- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:09:39.990 13:43:53 -- common/autotest_common.sh@1101 -- # '[' 2 -le 
1 ']' 00:09:39.990 13:43:53 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:39.990 13:43:53 -- common/autotest_common.sh@10 -- # set +x 00:09:39.990 ************************************ 00:09:39.990 START TEST nvme_rpc_timeouts 00:09:39.990 ************************************ 00:09:39.990 13:43:53 nvme_rpc_timeouts -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:09:39.990 * Looking for test storage... 00:09:39.990 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:39.990 13:43:53 nvme_rpc_timeouts -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:39.990 13:43:53 nvme_rpc_timeouts -- common/autotest_common.sh@1691 -- # lcov --version 00:09:39.990 13:43:53 nvme_rpc_timeouts -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:39.990 13:43:53 nvme_rpc_timeouts -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:39.990 13:43:53 nvme_rpc_timeouts -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:39.990 13:43:53 nvme_rpc_timeouts -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:39.990 13:43:53 nvme_rpc_timeouts -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:39.990 13:43:53 nvme_rpc_timeouts -- scripts/common.sh@336 -- # IFS=.-: 00:09:39.990 13:43:53 nvme_rpc_timeouts -- scripts/common.sh@336 -- # read -ra ver1 00:09:39.990 13:43:53 nvme_rpc_timeouts -- scripts/common.sh@337 -- # IFS=.-: 00:09:39.990 13:43:53 nvme_rpc_timeouts -- scripts/common.sh@337 -- # read -ra ver2 00:09:39.990 13:43:53 nvme_rpc_timeouts -- scripts/common.sh@338 -- # local 'op=<' 00:09:39.990 13:43:53 nvme_rpc_timeouts -- scripts/common.sh@340 -- # ver1_l=2 00:09:39.990 13:43:53 nvme_rpc_timeouts -- scripts/common.sh@341 -- # ver2_l=1 00:09:39.990 13:43:53 nvme_rpc_timeouts -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:39.990 13:43:53 nvme_rpc_timeouts -- scripts/common.sh@344 -- # case "$op" in 00:09:39.990 13:43:53 nvme_rpc_timeouts -- scripts/common.sh@345 -- # : 1 00:09:39.990 13:43:53 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:39.990 13:43:53 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:39.990 13:43:53 nvme_rpc_timeouts -- scripts/common.sh@365 -- # decimal 1 00:09:39.990 13:43:53 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=1 00:09:39.990 13:43:53 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:39.990 13:43:53 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 1 00:09:39.990 13:43:53 nvme_rpc_timeouts -- scripts/common.sh@365 -- # ver1[v]=1 00:09:39.990 13:43:53 nvme_rpc_timeouts -- scripts/common.sh@366 -- # decimal 2 00:09:39.990 13:43:53 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=2 00:09:39.990 13:43:53 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:39.990 13:43:53 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 2 00:09:39.990 13:43:53 nvme_rpc_timeouts -- scripts/common.sh@366 -- # ver2[v]=2 00:09:39.990 13:43:53 nvme_rpc_timeouts -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:39.990 13:43:53 nvme_rpc_timeouts -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:39.990 13:43:53 nvme_rpc_timeouts -- scripts/common.sh@368 -- # return 0 00:09:39.990 13:43:53 nvme_rpc_timeouts -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:39.990 13:43:53 nvme_rpc_timeouts -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:39.990 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:39.990 --rc genhtml_branch_coverage=1 00:09:39.990 --rc genhtml_function_coverage=1 00:09:39.990 --rc genhtml_legend=1 00:09:39.990 --rc geninfo_all_blocks=1 00:09:39.990 --rc geninfo_unexecuted_blocks=1 00:09:39.990 00:09:39.990 ' 00:09:39.990 13:43:53 nvme_rpc_timeouts -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:39.990 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:39.990 --rc genhtml_branch_coverage=1 00:09:39.990 --rc genhtml_function_coverage=1 00:09:39.990 --rc genhtml_legend=1 00:09:39.990 --rc geninfo_all_blocks=1 00:09:39.990 --rc geninfo_unexecuted_blocks=1 00:09:39.990 00:09:39.990 ' 00:09:39.990 13:43:53 nvme_rpc_timeouts -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:39.990 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:39.990 --rc genhtml_branch_coverage=1 00:09:39.990 --rc genhtml_function_coverage=1 00:09:39.990 --rc genhtml_legend=1 00:09:39.990 --rc geninfo_all_blocks=1 00:09:39.990 --rc geninfo_unexecuted_blocks=1 00:09:39.990 00:09:39.990 ' 00:09:39.990 13:43:53 nvme_rpc_timeouts -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:39.990 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:39.990 --rc genhtml_branch_coverage=1 00:09:39.990 --rc genhtml_function_coverage=1 00:09:39.990 --rc genhtml_legend=1 00:09:39.990 --rc geninfo_all_blocks=1 00:09:39.990 --rc geninfo_unexecuted_blocks=1 00:09:39.990 00:09:39.990 ' 00:09:39.990 13:43:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:39.990 13:43:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_65803 00:09:39.990 13:43:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_65803 00:09:39.990 13:43:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=65835 00:09:39.990 13:43:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 
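Every TEST wrapper opens with the same scripts/common.sh version probe traced above: lt 1.15 2 asks whether the installed lcov predates 2.0 before the coverage flags are exported. The comparison splits both version strings on dots, dashes and colons, then compares field by field. A trimmed sketch with the inputs from this run (numeric-only fields assumed; the real helper also sanitizes each field through its decimal function):

    # Trimmed sketch of the cmp_versions logic traced above.
    version_lt() {   # returns 0 if $1 < $2
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1   # equal versions are not "less than"
    }
    version_lt 1.15 2 && echo "old lcov"   # 1 < 2, so this prints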
00:09:39.990 13:43:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 65835 00:09:39.990 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:39.990 13:43:53 nvme_rpc_timeouts -- common/autotest_common.sh@831 -- # '[' -z 65835 ']' 00:09:39.990 13:43:53 nvme_rpc_timeouts -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:39.990 13:43:53 nvme_rpc_timeouts -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:39.990 13:43:53 nvme_rpc_timeouts -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:39.990 13:43:53 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:39.990 13:43:53 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:09:39.990 13:43:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:09:39.990 [2024-10-15 13:43:53.726729] Starting SPDK v25.01-pre git sha1 5a8c76d99 / DPDK 24.03.0 initialization... 00:09:39.990 [2024-10-15 13:43:53.726868] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65835 ] 00:09:40.248 [2024-10-15 13:43:53.877358] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:40.248 [2024-10-15 13:43:53.996422] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:40.248 [2024-10-15 13:43:53.996502] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:41.182 Checking default timeout settings: 00:09:41.182 13:43:54 nvme_rpc_timeouts -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:41.182 13:43:54 nvme_rpc_timeouts -- common/autotest_common.sh@864 -- # return 0 00:09:41.182 13:43:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:09:41.182 13:43:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:09:41.439 Making settings changes with rpc: 00:09:41.439 13:43:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:09:41.439 13:43:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:09:41.439 Check default vs. modified settings: 00:09:41.439 13:43:55 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. 
modified settings: 00:09:41.439 13:43:55 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:09:42.009 13:43:55 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:09:42.009 13:43:55 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:42.009 13:43:55 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_65803 00:09:42.009 13:43:55 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:42.009 13:43:55 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:42.009 13:43:55 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:09:42.009 13:43:55 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_65803 00:09:42.009 13:43:55 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:42.009 13:43:55 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:42.009 Setting action_on_timeout is changed as expected. 00:09:42.009 13:43:55 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:09:42.009 13:43:55 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:09:42.009 13:43:55 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:09:42.009 13:43:55 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:42.009 13:43:55 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_65803 00:09:42.009 13:43:55 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:42.009 13:43:55 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:42.009 13:43:55 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:09:42.009 13:43:55 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_65803 00:09:42.009 13:43:55 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:42.009 13:43:55 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:42.009 Setting timeout_us is changed as expected. 00:09:42.009 13:43:55 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:09:42.009 13:43:55 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:09:42.009 13:43:55 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 
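The pattern above repeats once per setting: the config saved before the bdev_nvme_set_options call (/tmp/settings_default_65803) and the one saved after (/tmp/settings_modified_65803) are each grepped for the setting name, the value column is picked off with awk, punctuation is stripped with sed, and the two values must differ. Folded into a loop, the check looks like this:

    # Compact sketch of the before/after settings check traced above.
    get_setting() {   # usage: get_setting <name> <saved-config-file>
        grep "$1" "$2" | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g'
    }
    for setting in action_on_timeout timeout_us timeout_admin_us; do
        before=$(get_setting "$setting" /tmp/settings_default_65803)
        after=$(get_setting "$setting" /tmp/settings_modified_65803)
        if [[ $before == "$after" ]]; then
            echo "ERROR: $setting was not modified" >&2
            exit 1
        fi
        echo "Setting $setting is changed as expected."
    done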
00:09:42.009 13:43:55 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:42.009 13:43:55 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_65803 00:09:42.009 13:43:55 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:42.009 13:43:55 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:42.009 13:43:55 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:09:42.009 13:43:55 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_65803 00:09:42.009 13:43:55 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:42.009 13:43:55 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:42.009 Setting timeout_admin_us is changed as expected. 00:09:42.009 13:43:55 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:09:42.009 13:43:55 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:09:42.009 13:43:55 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 00:09:42.009 13:43:55 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:09:42.009 13:43:55 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_65803 /tmp/settings_modified_65803 00:09:42.009 13:43:55 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 65835 00:09:42.009 13:43:55 nvme_rpc_timeouts -- common/autotest_common.sh@950 -- # '[' -z 65835 ']' 00:09:42.009 13:43:55 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # kill -0 65835 00:09:42.009 13:43:55 nvme_rpc_timeouts -- common/autotest_common.sh@955 -- # uname 00:09:42.009 13:43:55 nvme_rpc_timeouts -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:42.009 13:43:55 nvme_rpc_timeouts -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 65835 00:09:42.009 killing process with pid 65835 00:09:42.009 13:43:55 nvme_rpc_timeouts -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:42.009 13:43:55 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:42.009 13:43:55 nvme_rpc_timeouts -- common/autotest_common.sh@968 -- # echo 'killing process with pid 65835' 00:09:42.009 13:43:55 nvme_rpc_timeouts -- common/autotest_common.sh@969 -- # kill 65835 00:09:42.009 13:43:55 nvme_rpc_timeouts -- common/autotest_common.sh@974 -- # wait 65835 00:09:43.912 RPC TIMEOUT SETTING TEST PASSED. 00:09:43.912 13:43:57 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 
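Teardown goes through the killprocess helper traced above: confirm the pid is still alive with kill -0, read the command name back with ps as a guard against signalling a sudo wrapper, then kill and wait to reap it. Reduced to its essentials:

    # Reduced sketch of the killprocess teardown helper traced above.
    killprocess() {
        local pid=$1
        kill -0 "$pid" 2>/dev/null || return 0      # already gone
        local name
        name=$(ps --no-headers -o comm= "$pid")
        [[ $name != sudo ]] || return 1             # never signal the sudo wrapper
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null || true             # reap it if it is our child
    }
    # killprocess 65835   # as invoked at the end of this test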
00:09:43.912 ************************************ 00:09:43.912 END TEST nvme_rpc_timeouts 00:09:43.912 ************************************ 00:09:43.912 00:09:43.912 real 0m3.721s 00:09:43.912 user 0m7.148s 00:09:43.912 sys 0m0.566s 00:09:43.912 13:43:57 nvme_rpc_timeouts -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:43.912 13:43:57 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:09:43.912 13:43:57 -- spdk/autotest.sh@239 -- # uname -s 00:09:43.912 13:43:57 -- spdk/autotest.sh@239 -- # '[' Linux = Linux ']' 00:09:43.912 13:43:57 -- spdk/autotest.sh@240 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:09:43.912 13:43:57 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:43.912 13:43:57 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:43.912 13:43:57 -- common/autotest_common.sh@10 -- # set +x 00:09:43.912 ************************************ 00:09:43.912 START TEST sw_hotplug 00:09:43.912 ************************************ 00:09:43.912 13:43:57 sw_hotplug -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:09:43.912 * Looking for test storage... 00:09:43.912 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:43.912 13:43:57 sw_hotplug -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:43.912 13:43:57 sw_hotplug -- common/autotest_common.sh@1691 -- # lcov --version 00:09:43.912 13:43:57 sw_hotplug -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:43.912 13:43:57 sw_hotplug -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:43.912 13:43:57 sw_hotplug -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:43.912 13:43:57 sw_hotplug -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:43.912 13:43:57 sw_hotplug -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:43.912 13:43:57 sw_hotplug -- scripts/common.sh@336 -- # IFS=.-: 00:09:43.912 13:43:57 sw_hotplug -- scripts/common.sh@336 -- # read -ra ver1 00:09:43.912 13:43:57 sw_hotplug -- scripts/common.sh@337 -- # IFS=.-: 00:09:43.912 13:43:57 sw_hotplug -- scripts/common.sh@337 -- # read -ra ver2 00:09:43.912 13:43:57 sw_hotplug -- scripts/common.sh@338 -- # local 'op=<' 00:09:43.912 13:43:57 sw_hotplug -- scripts/common.sh@340 -- # ver1_l=2 00:09:43.912 13:43:57 sw_hotplug -- scripts/common.sh@341 -- # ver2_l=1 00:09:43.912 13:43:57 sw_hotplug -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:43.912 13:43:57 sw_hotplug -- scripts/common.sh@344 -- # case "$op" in 00:09:43.912 13:43:57 sw_hotplug -- scripts/common.sh@345 -- # : 1 00:09:43.912 13:43:57 sw_hotplug -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:43.912 13:43:57 sw_hotplug -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:43.912 13:43:57 sw_hotplug -- scripts/common.sh@365 -- # decimal 1 00:09:43.912 13:43:57 sw_hotplug -- scripts/common.sh@353 -- # local d=1 00:09:43.912 13:43:57 sw_hotplug -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:43.912 13:43:57 sw_hotplug -- scripts/common.sh@355 -- # echo 1 00:09:43.912 13:43:57 sw_hotplug -- scripts/common.sh@365 -- # ver1[v]=1 00:09:43.912 13:43:57 sw_hotplug -- scripts/common.sh@366 -- # decimal 2 00:09:43.912 13:43:57 sw_hotplug -- scripts/common.sh@353 -- # local d=2 00:09:43.912 13:43:57 sw_hotplug -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:43.912 13:43:57 sw_hotplug -- scripts/common.sh@355 -- # echo 2 00:09:43.912 13:43:57 sw_hotplug -- scripts/common.sh@366 -- # ver2[v]=2 00:09:43.912 13:43:57 sw_hotplug -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:43.912 13:43:57 sw_hotplug -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:43.912 13:43:57 sw_hotplug -- scripts/common.sh@368 -- # return 0 00:09:43.912 13:43:57 sw_hotplug -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:43.912 13:43:57 sw_hotplug -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:43.912 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:43.912 --rc genhtml_branch_coverage=1 00:09:43.912 --rc genhtml_function_coverage=1 00:09:43.912 --rc genhtml_legend=1 00:09:43.912 --rc geninfo_all_blocks=1 00:09:43.912 --rc geninfo_unexecuted_blocks=1 00:09:43.912 00:09:43.912 ' 00:09:43.912 13:43:57 sw_hotplug -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:43.912 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:43.912 --rc genhtml_branch_coverage=1 00:09:43.912 --rc genhtml_function_coverage=1 00:09:43.912 --rc genhtml_legend=1 00:09:43.912 --rc geninfo_all_blocks=1 00:09:43.912 --rc geninfo_unexecuted_blocks=1 00:09:43.912 00:09:43.912 ' 00:09:43.912 13:43:57 sw_hotplug -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:43.912 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:43.912 --rc genhtml_branch_coverage=1 00:09:43.912 --rc genhtml_function_coverage=1 00:09:43.912 --rc genhtml_legend=1 00:09:43.912 --rc geninfo_all_blocks=1 00:09:43.912 --rc geninfo_unexecuted_blocks=1 00:09:43.912 00:09:43.912 ' 00:09:43.912 13:43:57 sw_hotplug -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:43.912 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:43.912 --rc genhtml_branch_coverage=1 00:09:43.912 --rc genhtml_function_coverage=1 00:09:43.912 --rc genhtml_legend=1 00:09:43.912 --rc geninfo_all_blocks=1 00:09:43.912 --rc geninfo_unexecuted_blocks=1 00:09:43.912 00:09:43.912 ' 00:09:43.912 13:43:57 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:44.169 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:44.169 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:44.170 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:44.170 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:44.170 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:44.170 13:43:57 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:09:44.170 13:43:57 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:09:44.170 13:43:57 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 
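[Editor's note] A few entries back, the harness gated its lcov flags on `lt 1.15 2` — the component-wise version comparison from scripts/common.sh whose xtrace is interleaved above (split on `.`, `-`, or `:`, then compare field by field). Roughly reconstructed, simplified to a less-than test only (the real helper supports several operators):

```bash
# Sketch of the version gate: returns 0 if $1 is strictly older than $2.
cmp_versions_lt() {
  local -a ver1 ver2
  local v
  IFS='.-:' read -ra ver1 <<< "$1"
  IFS='.-:' read -ra ver2 <<< "$2"
  for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
    # Missing fields count as 0, e.g. "2" vs "1.15" compares 2 vs 1, then 0 vs 15.
    (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
    (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
  done
  return 1  # equal is not less-than
}

cmp_versions_lt 1.15 2 && echo "lcov is older than 2.x, enable the extra --rc flags"
```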
00:09:44.170 13:43:57 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:09:44.170 13:43:57 sw_hotplug -- scripts/common.sh@312 -- # local bdf bdfs 00:09:44.170 13:43:57 sw_hotplug -- scripts/common.sh@313 -- # local nvmes 00:09:44.170 13:43:57 sw_hotplug -- scripts/common.sh@315 -- # [[ -n '' ]] 00:09:44.170 13:43:57 sw_hotplug -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:09:44.170 13:43:57 sw_hotplug -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:09:44.170 13:43:57 sw_hotplug -- scripts/common.sh@298 -- # local bdf= 00:09:44.170 13:43:57 sw_hotplug -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:09:44.170 13:43:57 sw_hotplug -- scripts/common.sh@233 -- # local class 00:09:44.170 13:43:57 sw_hotplug -- scripts/common.sh@234 -- # local subclass 00:09:44.170 13:43:57 sw_hotplug -- scripts/common.sh@235 -- # local progif 00:09:44.170 13:43:57 sw_hotplug -- scripts/common.sh@236 -- # printf %02x 1 00:09:44.170 13:43:57 sw_hotplug -- scripts/common.sh@236 -- # class=01 00:09:44.170 13:43:57 sw_hotplug -- scripts/common.sh@237 -- # printf %02x 8 00:09:44.170 13:43:57 sw_hotplug -- scripts/common.sh@237 -- # subclass=08 00:09:44.170 13:43:57 sw_hotplug -- scripts/common.sh@238 -- # printf %02x 2 00:09:44.170 13:43:57 sw_hotplug -- scripts/common.sh@238 -- # progif=02 00:09:44.170 13:43:57 sw_hotplug -- scripts/common.sh@240 -- # hash lspci 00:09:44.170 13:43:57 sw_hotplug -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:09:44.170 13:43:57 sw_hotplug -- scripts/common.sh@245 -- # tr -d '"' 00:09:44.170 13:43:57 sw_hotplug -- scripts/common.sh@242 -- # lspci -mm -n -D 00:09:44.170 13:43:57 sw_hotplug -- scripts/common.sh@243 -- # grep -i -- -p02 00:09:44.170 13:43:57 sw_hotplug -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:09:44.170 13:43:57 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:44.170 13:43:57 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:09:44.170 13:43:57 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:44.170 13:43:57 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:09:44.170 13:43:57 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:44.170 13:43:57 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:44.170 13:43:57 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:09:44.170 13:43:57 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:44.170 13:43:57 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:09:44.170 13:43:57 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:44.170 13:43:57 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:44.170 13:43:57 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:44.170 13:43:57 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:44.170 13:43:57 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:09:44.170 13:43:57 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:44.170 13:43:57 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:12.0 00:09:44.170 13:43:57 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:44.170 13:43:57 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:09:44.170 13:43:57 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:44.170 13:43:57 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:44.170 13:43:57 sw_hotplug -- 
scripts/common.sh@302 -- # echo 0000:00:12.0 00:09:44.170 13:43:57 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:44.170 13:43:57 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:13.0 00:09:44.170 13:43:57 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:44.170 13:43:57 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:09:44.170 13:43:57 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:44.170 13:43:57 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:44.170 13:43:57 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:13.0 00:09:44.170 13:43:57 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:44.170 13:43:57 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:09:44.170 13:43:57 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:44.170 13:43:57 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:44.170 13:43:57 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:44.170 13:43:57 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:44.170 13:43:57 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:09:44.170 13:43:57 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:44.170 13:43:57 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:44.170 13:43:57 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:44.170 13:43:57 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:44.170 13:43:57 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:09:44.170 13:43:57 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:44.170 13:43:57 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:44.170 13:43:57 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:44.170 13:43:57 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:44.170 13:43:57 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:09:44.170 13:43:57 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:44.170 13:43:57 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:44.170 13:43:57 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:44.170 13:43:57 sw_hotplug -- scripts/common.sh@328 -- # (( 4 )) 00:09:44.170 13:43:57 sw_hotplug -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:44.170 13:43:57 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:09:44.170 13:43:57 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:09:44.170 13:43:57 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:44.735 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:44.735 Waiting for block devices as requested 00:09:44.735 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:44.735 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:44.994 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:44.994 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:50.256 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:50.256 13:44:03 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:09:50.256 13:44:03 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # 
/home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:50.514 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:09:50.514 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:50.514 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:09:50.772 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:09:51.030 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:51.030 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:51.288 13:44:04 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:09:51.288 13:44:04 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:09:51.288 13:44:04 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:09:51.288 13:44:04 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:09:51.288 13:44:04 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=66696 00:09:51.288 13:44:04 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:09:51.288 13:44:04 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:09:51.288 13:44:04 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:09:51.288 13:44:04 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:09:51.288 13:44:04 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:09:51.288 13:44:04 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:09:51.288 13:44:04 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:09:51.288 13:44:04 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:09:51.288 13:44:04 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 false 00:09:51.288 13:44:04 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:09:51.288 13:44:04 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:09:51.288 13:44:04 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:09:51.288 13:44:04 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:09:51.288 13:44:04 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:09:51.546 Initializing NVMe Controllers 00:09:51.546 Attaching to 0000:00:10.0 00:09:51.546 Attaching to 0000:00:11.0 00:09:51.546 Attached to 0000:00:10.0 00:09:51.546 Attached to 0000:00:11.0 00:09:51.546 Initialization complete. Starting I/O... 
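[Editor's note] The four controllers this run drives came out of the nvme_in_userspace scan traced earlier (scripts/common.sh@233-329): every PCI function with class 01, subclass 08, prog-if 02 — i.e. NVMe — filtered through the PCI_ALLOWED list, which is why 12.0 and 13.0 were "skipped denied controllers" after PCI_ALLOWED was narrowed to 10.0/11.0. Boiled down to its core as a sketch (not the real helper):

```bash
# List NVMe PCI functions (class 01/08, prog-if 02), honoring an allow-list.
PCI_ALLOWED=${PCI_ALLOWED:-}   # space-separated BDFs; empty allows all

nvme_bdfs() {
  local bdf
  # -mm: machine-readable, -n: numeric IDs, -D: keep the PCI domain in the BDF.
  # grep keeps the prog-if 02 entries; awk matches the "0108" class field.
  lspci -mm -n -D | grep -i -- -p02 \
    | awk -F ' ' '{ if ($2 ~ /0108/) print $1 }' \
    | while read -r bdf; do
        if [[ -z $PCI_ALLOWED || " $PCI_ALLOWED " == *" $bdf "* ]]; then
          echo "$bdf"   # denied controllers are skipped
        fi
      done
}

nvme_bdfs   # e.g. 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0
```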
00:09:51.546 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:09:51.546 QEMU NVMe Ctrl (12341 ): 0 I/Os completed (+0) 00:09:51.546 00:09:52.479 QEMU NVMe Ctrl (12340 ): 2472 I/Os completed (+2472) 00:09:52.479 QEMU NVMe Ctrl (12341 ): 2472 I/Os completed (+2472) 00:09:52.479 00:09:53.413 QEMU NVMe Ctrl (12340 ): 5451 I/Os completed (+2979) 00:09:53.413 QEMU NVMe Ctrl (12341 ): 5607 I/Os completed (+3135) 00:09:53.413 00:09:54.794 QEMU NVMe Ctrl (12340 ): 8505 I/Os completed (+3054) 00:09:54.794 QEMU NVMe Ctrl (12341 ): 8998 I/Os completed (+3391) 00:09:54.794 00:09:55.363 QEMU NVMe Ctrl (12340 ): 11573 I/Os completed (+3068) 00:09:55.363 QEMU NVMe Ctrl (12341 ): 12320 I/Os completed (+3322) 00:09:55.363 00:09:56.735 QEMU NVMe Ctrl (12340 ): 14473 I/Os completed (+2900) 00:09:56.735 QEMU NVMe Ctrl (12341 ): 15468 I/Os completed (+3148) 00:09:56.735 00:09:57.302 13:44:10 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:09:57.302 13:44:10 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:57.302 13:44:10 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:57.302 [2024-10-15 13:44:10.952919] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:09:57.302 Controller removed: QEMU NVMe Ctrl (12340 ) 00:09:57.302 [2024-10-15 13:44:10.954148] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:57.302 [2024-10-15 13:44:10.954205] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:57.302 [2024-10-15 13:44:10.954235] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:57.302 [2024-10-15 13:44:10.954255] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:57.302 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:09:57.302 [2024-10-15 13:44:10.956548] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:57.302 [2024-10-15 13:44:10.956599] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:57.302 [2024-10-15 13:44:10.956614] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:57.302 [2024-10-15 13:44:10.956632] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:57.302 13:44:10 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:57.302 13:44:10 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:57.302 [2024-10-15 13:44:10.977954] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
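[Editor's note] Those abort/"Controller removed" messages are the first surprise-removal landing on the hotplug example app launched above; the whole exercise runs under timing_cmd, whose xtrace (local time=0 TIMEFORMAT=%2R, then exec) appears just before the launch. The timing pattern, condensed into a sketch (the real wrapper does more bookkeeping):

```bash
# Time a command with bash's builtin `time`, capturing only the elapsed
# seconds while the command's own stdout/stderr pass through untouched.
timing_cmd() {
  local time=0 TIMEFORMAT=%2R
  # fds 3/4 must already point at the real stdout/stderr (see exec below).
  time=$( { time "$@" 1>&3 2>&4; } 2>&1 )
  printf '%s took %ss to complete\n' "$1" "$time" >&2
}

exec 3>&1 4>&2        # save the real streams once, as the trace above does
timing_cmd sleep 0.3  # -> "sleep took 0.30s to complete"
```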
00:09:57.302 Controller removed: QEMU NVMe Ctrl (12341 ) 00:09:57.302 [2024-10-15 13:44:10.979066] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:57.302 [2024-10-15 13:44:10.979113] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:57.302 [2024-10-15 13:44:10.979136] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:57.302 [2024-10-15 13:44:10.979155] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:57.302 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:09:57.302 [2024-10-15 13:44:10.983005] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:57.302 [2024-10-15 13:44:10.983044] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:57.302 [2024-10-15 13:44:10.983063] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:57.302 [2024-10-15 13:44:10.983078] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:57.302 13:44:10 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:09:57.302 13:44:10 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:09:57.302 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:09:57.302 EAL: Scan for (pci) bus failed. 00:09:57.560 13:44:11 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:57.560 13:44:11 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:57.560 13:44:11 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:09:57.560 00:09:57.560 13:44:11 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:09:57.560 13:44:11 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:57.560 13:44:11 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:57.560 13:44:11 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:57.560 13:44:11 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:09:57.560 Attaching to 0000:00:10.0 00:09:57.560 Attached to 0000:00:10.0 00:09:57.560 13:44:11 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:09:57.560 13:44:11 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:57.560 13:44:11 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:09:57.560 Attaching to 0000:00:11.0 00:09:57.560 Attached to 0000:00:11.0 00:09:58.493 QEMU NVMe Ctrl (12340 ): 2695 I/Os completed (+2695) 00:09:58.493 QEMU NVMe Ctrl (12341 ): 2591 I/Os completed (+2591) 00:09:58.493 00:09:59.427 QEMU NVMe Ctrl (12340 ): 5551 I/Os completed (+2856) 00:09:59.427 QEMU NVMe Ctrl (12341 ): 5752 I/Os completed (+3161) 00:09:59.427 00:10:00.366 QEMU NVMe Ctrl (12340 ): 8416 I/Os completed (+2865) 00:10:00.366 QEMU NVMe Ctrl (12341 ): 8637 I/Os completed (+2885) 00:10:00.366 00:10:01.747 QEMU NVMe Ctrl (12340 ): 11105 I/Os completed (+2689) 00:10:01.747 QEMU NVMe Ctrl (12341 ): 11345 I/Os completed (+2708) 00:10:01.747 00:10:02.682 QEMU NVMe Ctrl (12340 ): 14086 I/Os completed (+2981) 00:10:02.682 QEMU NVMe Ctrl (12341 ): 14446 I/Os completed (+3101) 00:10:02.682 00:10:03.622 QEMU NVMe Ctrl (12340 ): 17028 I/Os completed (+2942) 00:10:03.622 QEMU NVMe Ctrl (12341 ): 17480 I/Os completed (+3034) 00:10:03.622 00:10:04.555 QEMU NVMe Ctrl (12340 ): 19922 I/Os completed (+2894) 00:10:04.555 QEMU NVMe Ctrl (12341 ): 20668 I/Os completed (+3188) 
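[Editor's note] Between the "Controller removed" errors and the "Attaching to" lines, the script is poking the standard Linux PCI sysfs ABI directly: sw_hotplug.sh@40's `echo 1` goes to the device's remove node, @56's `echo 1` to the bus rescan node, and @59-61 steer the rediscovered function back onto uio_pci_generic. Reconstructed as a standalone sketch (the exact nodes sw_hotplug.sh writes for the rebind may differ):

```bash
# One surprise-removal cycle through /sys for a single controller.
surprise_remove_and_reattach() {
  local bdf=$1 driver=uio_pci_generic
  echo 1 > "/sys/bus/pci/devices/$bdf/remove"   # device vanishes (sh@40)
  sleep 6                                       # hotplug_wait from above
  echo 1 > /sys/bus/pci/rescan                  # rediscover it (sh@56)
  # Hand the function to the userspace driver (sh@59-61).
  echo "$driver" > "/sys/bus/pci/devices/$bdf/driver_override"
  echo "$bdf" > "/sys/bus/pci/drivers/$driver/bind"
}

surprise_remove_and_reattach 0000:00:10.0
```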
00:10:04.555 00:10:05.486 QEMU NVMe Ctrl (12340 ): 22886 I/Os completed (+2964) 00:10:05.487 QEMU NVMe Ctrl (12341 ): 23809 I/Os completed (+3141) 00:10:05.487 00:10:06.430 QEMU NVMe Ctrl (12340 ): 25844 I/Os completed (+2958) 00:10:06.430 QEMU NVMe Ctrl (12341 ): 27039 I/Os completed (+3230) 00:10:06.430 00:10:07.361 QEMU NVMe Ctrl (12340 ): 28875 I/Os completed (+3031) 00:10:07.361 QEMU NVMe Ctrl (12341 ): 30585 I/Os completed (+3546) 00:10:07.361 00:10:08.768 QEMU NVMe Ctrl (12340 ): 31881 I/Os completed (+3006) 00:10:08.768 QEMU NVMe Ctrl (12341 ): 33879 I/Os completed (+3294) 00:10:08.768 00:10:09.702 QEMU NVMe Ctrl (12340 ): 34919 I/Os completed (+3038) 00:10:09.702 QEMU NVMe Ctrl (12341 ): 37062 I/Os completed (+3183) 00:10:09.702 00:10:09.702 13:44:23 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:09.702 13:44:23 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:09.702 13:44:23 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:09.702 13:44:23 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:09.702 [2024-10-15 13:44:23.322126] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:10:09.702 Controller removed: QEMU NVMe Ctrl (12340 ) 00:10:09.702 [2024-10-15 13:44:23.323174] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:09.702 [2024-10-15 13:44:23.323232] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:09.702 [2024-10-15 13:44:23.323249] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:09.702 [2024-10-15 13:44:23.323265] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:09.702 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:09.702 [2024-10-15 13:44:23.324949] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:09.702 [2024-10-15 13:44:23.324997] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:09.702 [2024-10-15 13:44:23.325009] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:09.702 [2024-10-15 13:44:23.325022] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:09.702 13:44:23 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:09.702 13:44:23 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:09.702 [2024-10-15 13:44:23.345022] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:10:09.702 Controller removed: QEMU NVMe Ctrl (12341 ) 00:10:09.702 [2024-10-15 13:44:23.345953] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:09.702 [2024-10-15 13:44:23.345992] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:09.702 [2024-10-15 13:44:23.346013] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:09.702 [2024-10-15 13:44:23.346027] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:09.702 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:09.702 [2024-10-15 13:44:23.347519] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:09.702 [2024-10-15 13:44:23.347553] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:09.702 [2024-10-15 13:44:23.347567] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:09.702 [2024-10-15 13:44:23.347581] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:09.702 13:44:23 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:10:09.702 13:44:23 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:09.702 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:10:09.702 EAL: Scan for (pci) bus failed. 00:10:09.702 13:44:23 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:09.702 13:44:23 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:09.702 13:44:23 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:09.960 13:44:23 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:09.960 13:44:23 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:09.960 13:44:23 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:09.960 13:44:23 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:09.960 13:44:23 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:09.960 Attaching to 0000:00:10.0 00:10:09.960 Attached to 0000:00:10.0 00:10:09.960 13:44:23 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:09.960 13:44:23 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:09.960 13:44:23 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:09.960 Attaching to 0000:00:11.0 00:10:09.960 Attached to 0000:00:11.0 00:10:10.525 QEMU NVMe Ctrl (12340 ): 1977 I/Os completed (+1977) 00:10:10.525 QEMU NVMe Ctrl (12341 ): 1614 I/Os completed (+1614) 00:10:10.525 00:10:11.465 QEMU NVMe Ctrl (12340 ): 5256 I/Os completed (+3279) 00:10:11.465 QEMU NVMe Ctrl (12341 ): 4958 I/Os completed (+3344) 00:10:11.465 00:10:12.396 QEMU NVMe Ctrl (12340 ): 8223 I/Os completed (+2967) 00:10:12.396 QEMU NVMe Ctrl (12341 ): 8151 I/Os completed (+3193) 00:10:12.396 00:10:13.796 QEMU NVMe Ctrl (12340 ): 11083 I/Os completed (+2860) 00:10:13.796 QEMU NVMe Ctrl (12341 ): 11070 I/Os completed (+2919) 00:10:13.796 00:10:14.362 QEMU NVMe Ctrl (12340 ): 14109 I/Os completed (+3026) 00:10:14.362 QEMU NVMe Ctrl (12341 ): 14139 I/Os completed (+3069) 00:10:14.362 00:10:15.735 QEMU NVMe Ctrl (12340 ): 17437 I/Os completed (+3328) 00:10:15.735 QEMU NVMe Ctrl (12341 ): 17495 I/Os completed (+3356) 00:10:15.735 00:10:16.667 QEMU NVMe Ctrl (12340 ): 20793 I/Os completed (+3356) 00:10:16.667 QEMU NVMe Ctrl (12341 ): 20882 I/Os completed (+3387) 00:10:16.667 
00:10:17.601 QEMU NVMe Ctrl (12340 ): 23806 I/Os completed (+3013) 00:10:17.601 QEMU NVMe Ctrl (12341 ): 23995 I/Os completed (+3113) 00:10:17.601 00:10:18.535 QEMU NVMe Ctrl (12340 ): 26498 I/Os completed (+2692) 00:10:18.535 QEMU NVMe Ctrl (12341 ): 26828 I/Os completed (+2833) 00:10:18.535 00:10:19.469 QEMU NVMe Ctrl (12340 ): 29783 I/Os completed (+3285) 00:10:19.469 QEMU NVMe Ctrl (12341 ): 30355 I/Os completed (+3527) 00:10:19.469 00:10:20.438 QEMU NVMe Ctrl (12340 ): 32709 I/Os completed (+2926) 00:10:20.438 QEMU NVMe Ctrl (12341 ): 33661 I/Os completed (+3306) 00:10:20.438 00:10:21.370 QEMU NVMe Ctrl (12340 ): 35803 I/Os completed (+3094) 00:10:21.370 QEMU NVMe Ctrl (12341 ): 37134 I/Os completed (+3473) 00:10:21.370 00:10:21.937 13:44:35 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:21.937 13:44:35 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:21.937 13:44:35 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:21.937 13:44:35 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:21.937 [2024-10-15 13:44:35.660589] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:10:21.937 Controller removed: QEMU NVMe Ctrl (12340 ) 00:10:21.937 [2024-10-15 13:44:35.661767] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:21.937 [2024-10-15 13:44:35.661821] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:21.937 [2024-10-15 13:44:35.661838] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:21.937 [2024-10-15 13:44:35.661856] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:21.937 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:21.937 [2024-10-15 13:44:35.663668] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:21.937 [2024-10-15 13:44:35.663721] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:21.937 [2024-10-15 13:44:35.663734] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:21.937 [2024-10-15 13:44:35.663749] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:21.937 13:44:35 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:21.937 13:44:35 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:21.937 [2024-10-15 13:44:35.685740] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:10:21.937 Controller removed: QEMU NVMe Ctrl (12341 ) 00:10:21.937 [2024-10-15 13:44:35.688867] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:21.937 [2024-10-15 13:44:35.688927] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:21.937 [2024-10-15 13:44:35.688946] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:21.937 [2024-10-15 13:44:35.688961] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:21.937 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:21.937 [2024-10-15 13:44:35.690485] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:21.937 [2024-10-15 13:44:35.690529] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:21.937 [2024-10-15 13:44:35.690545] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:21.937 [2024-10-15 13:44:35.690559] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:21.937 13:44:35 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:10:21.937 13:44:35 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:21.937 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:10:21.937 EAL: Scan for (pci) bus failed. 00:10:22.195 13:44:35 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:22.195 13:44:35 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:22.195 13:44:35 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:22.195 13:44:35 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:22.195 13:44:35 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:22.195 13:44:35 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:22.195 13:44:35 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:22.195 13:44:35 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:22.195 Attaching to 0000:00:10.0 00:10:22.195 Attached to 0000:00:10.0 00:10:22.195 13:44:35 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:22.454 13:44:35 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:22.454 13:44:35 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:22.454 Attaching to 0000:00:11.0 00:10:22.454 Attached to 0000:00:11.0 00:10:22.454 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:22.454 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:22.454 [2024-10-15 13:44:36.001366] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:10:34.648 13:44:47 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:34.648 13:44:47 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:34.648 13:44:47 sw_hotplug -- common/autotest_common.sh@717 -- # time=43.05 00:10:34.648 13:44:47 sw_hotplug -- common/autotest_common.sh@718 -- # echo 43.05 00:10:34.648 13:44:47 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:10:34.648 13:44:47 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=43.05 00:10:34.648 13:44:48 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 43.05 2 00:10:34.648 remove_attach_helper took 43.05s to complete (handling 2 nvme drive(s)) 13:44:48 sw_hotplug -- 
nvme/sw_hotplug.sh@91 -- # sleep 6 00:10:41.288 13:44:54 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 66696 00:10:41.288 /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (66696) - No such process 00:10:41.288 13:44:54 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 66696 00:10:41.288 13:44:54 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:10:41.288 13:44:54 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:10:41.288 13:44:54 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:10:41.288 13:44:54 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=67240 00:10:41.288 13:44:54 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:10:41.288 13:44:54 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 67240 00:10:41.288 13:44:54 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:41.288 13:44:54 sw_hotplug -- common/autotest_common.sh@831 -- # '[' -z 67240 ']' 00:10:41.288 13:44:54 sw_hotplug -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:41.288 13:44:54 sw_hotplug -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:41.288 13:44:54 sw_hotplug -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:41.288 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:41.288 13:44:54 sw_hotplug -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:41.288 13:44:54 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:41.288 [2024-10-15 13:44:54.078099] Starting SPDK v25.01-pre git sha1 5a8c76d99 / DPDK 24.03.0 initialization... 
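[Editor's note] With the standalone hotplug app done (the `kill -0` probe above confirms pid 66696 already exited), tgt_run_hotplug repeats the exercise against a full spdk_tgt: start the target, wait for its RPC socket, and let the bdev layer monitor hotplug events itself. The startup handshake, condensed into a sketch (the retry loop simplifies waitforlisten; paths follow the workspace layout seen in the log):

```bash
SPDK=/home/vagrant/spdk_repo/spdk
sock=/var/tmp/spdk.sock

"$SPDK/build/bin/spdk_tgt" &
spdk_tgt_pid=$!

# waitforlisten, boiled down: poll until any RPC succeeds on the socket.
until "$SPDK/scripts/rpc.py" -s "$sock" rpc_get_methods &> /dev/null; do
  kill -0 "$spdk_tgt_pid" || exit 1   # give up if the target died
  sleep 0.1
done

# sw_hotplug.sh@115: have bdev_nvme attach/detach controllers on its own.
"$SPDK/scripts/rpc.py" -s "$sock" bdev_nvme_set_hotplug -e
```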
00:10:41.288 [2024-10-15 13:44:54.078210] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67240 ] 00:10:41.288 [2024-10-15 13:44:54.222420] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:41.288 [2024-10-15 13:44:54.309685] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:41.288 13:44:54 sw_hotplug -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:41.288 13:44:54 sw_hotplug -- common/autotest_common.sh@864 -- # return 0 00:10:41.288 13:44:54 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:10:41.288 13:44:54 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.288 13:44:54 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:41.288 13:44:54 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.288 13:44:54 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:10:41.288 13:44:54 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:10:41.288 13:44:54 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:10:41.288 13:44:54 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:10:41.288 13:44:54 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:10:41.288 13:44:54 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:10:41.289 13:44:54 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:10:41.289 13:44:54 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 true 00:10:41.289 13:44:54 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:10:41.289 13:44:54 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:10:41.289 13:44:54 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:10:41.289 13:44:54 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:10:41.289 13:44:54 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:10:47.842 13:45:00 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:47.842 13:45:00 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:47.842 13:45:00 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:47.842 13:45:01 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:47.842 13:45:01 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:47.842 13:45:01 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:10:47.842 13:45:01 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:47.842 13:45:01 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:47.842 13:45:01 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:47.842 13:45:01 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:47.842 13:45:01 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:47.842 13:45:01 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.842 13:45:01 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:47.842 13:45:01 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.842 13:45:01 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:10:47.842 13:45:01 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:10:47.842 [2024-10-15 13:45:01.081569] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: 
[0000:00:10.0] in failed state. 00:10:47.842 [2024-10-15 13:45:01.083180] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:47.842 [2024-10-15 13:45:01.083234] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:47.842 [2024-10-15 13:45:01.083248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:47.842 [2024-10-15 13:45:01.083273] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:47.842 [2024-10-15 13:45:01.083282] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:47.842 [2024-10-15 13:45:01.083291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:47.842 [2024-10-15 13:45:01.083299] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:47.842 [2024-10-15 13:45:01.083308] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:47.842 [2024-10-15 13:45:01.083314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:47.842 [2024-10-15 13:45:01.083327] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:47.842 [2024-10-15 13:45:01.083334] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:47.842 [2024-10-15 13:45:01.083342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:47.842 13:45:01 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:10:47.842 13:45:01 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:47.842 13:45:01 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:47.842 13:45:01 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:47.842 13:45:01 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:47.842 13:45:01 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.842 13:45:01 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:47.842 13:45:01 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:47.842 13:45:01 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.842 [2024-10-15 13:45:01.581565] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
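[Editor's note] In target mode there is no "Attached to" console output to watch, so the test polls the RPC instead: bdev_bdfs (traced above) asks bdev_get_bdevs which PCI addresses still back an NVMe bdev. Its core is exactly the jq filter from the log; the wait loop around it is sketched from the "Still waiting for ... to be gone" printfs, reusing the $SPDK/$sock names from the previous sketch:

```bash
# Which PCI functions currently back an NVMe bdev, according to the target.
bdev_bdfs() {
  "$SPDK/scripts/rpc.py" -s "$sock" bdev_get_bdevs \
    | jq -r '.[].driver_specific.nvme[].pci_address' \
    | sort -u
}

# After writing the sysfs remove nodes, spin until both controllers drop out.
for bdf in 0000:00:10.0 0000:00:11.0; do
  while bdev_bdfs | grep -q "$bdf"; do
    printf 'Still waiting for %s to be gone\n' "$bdf"
    sleep 0.5
  done
done
```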
00:10:47.842 [2024-10-15 13:45:01.583140] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:47.842 13:45:01 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:10:47.842 [2024-10-15 13:45:01.583176] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:47.842 [2024-10-15 13:45:01.583191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:47.842 [2024-10-15 13:45:01.583210] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:47.842 [2024-10-15 13:45:01.583231] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:47.842 [2024-10-15 13:45:01.583239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:47.842 [2024-10-15 13:45:01.583249] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:47.842 [2024-10-15 13:45:01.583257] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:47.842 [2024-10-15 13:45:01.583265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:47.842 [2024-10-15 13:45:01.583273] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:47.842 [2024-10-15 13:45:01.583282] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:47.842 [2024-10-15 13:45:01.583289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:47.842 13:45:01 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:10:48.408 13:45:02 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:10:48.408 13:45:02 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:48.408 13:45:02 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:48.408 13:45:02 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:48.408 13:45:02 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:48.408 13:45:02 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:48.408 13:45:02 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.408 13:45:02 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:48.408 13:45:02 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.408 13:45:02 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:10:48.408 13:45:02 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:48.698 13:45:02 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:48.698 13:45:02 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:48.698 13:45:02 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:48.698 13:45:02 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:48.698 13:45:02 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:48.698 13:45:02 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:48.698 13:45:02 
sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:48.698 13:45:02 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:48.698 13:45:02 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:48.698 13:45:02 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:48.698 13:45:02 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:00.915 13:45:14 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:00.915 13:45:14 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:00.915 13:45:14 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:00.915 13:45:14 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:00.915 13:45:14 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:00.915 13:45:14 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:00.915 13:45:14 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.915 13:45:14 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:00.915 13:45:14 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.915 13:45:14 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:00.915 13:45:14 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:00.915 13:45:14 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:00.915 13:45:14 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:00.915 13:45:14 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:00.915 13:45:14 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:00.915 13:45:14 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:00.915 13:45:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:00.915 13:45:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:00.915 13:45:14 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:00.915 13:45:14 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:00.915 13:45:14 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:00.915 13:45:14 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.915 13:45:14 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:00.915 13:45:14 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.915 13:45:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:00.915 13:45:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:00.915 [2024-10-15 13:45:14.481750] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
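[Editor's note] The @71 check above is why the xtrace shows a wall of backslashes: the right-hand side of `[[ ... == ... ]]` is a glob pattern, so the `set -x` trace escapes every character of the expanded expected-BDF string to show it is being matched literally. The same assertion without the trace noise (a sketch; `found` stands in for the rejoined bdev_bdfs output):

```bash
expected='0000:00:10.0 0000:00:11.0'
found='0000:00:10.0 0000:00:11.0'   # stand-in for "$(bdev_bdfs | xargs)"

# Quoting the right-hand side disables globbing, forcing a literal
# comparison -- equivalent to the backslash-escaped form in the trace.
if [[ $found == "$expected" ]]; then
  echo 'both controllers re-registered after the hotplug cycle'
fi
```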
00:11:00.915 [2024-10-15 13:45:14.483362] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:00.915 [2024-10-15 13:45:14.483401] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:00.915 [2024-10-15 13:45:14.483415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:00.915 [2024-10-15 13:45:14.483437] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:00.915 [2024-10-15 13:45:14.483445] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:00.915 [2024-10-15 13:45:14.483454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:00.915 [2024-10-15 13:45:14.483463] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:00.915 [2024-10-15 13:45:14.483473] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:00.915 [2024-10-15 13:45:14.483480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:00.915 [2024-10-15 13:45:14.483489] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:00.915 [2024-10-15 13:45:14.483496] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:00.915 [2024-10-15 13:45:14.483504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:01.481 13:45:14 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:01.481 13:45:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:01.481 13:45:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:01.481 13:45:14 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:01.481 13:45:14 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:01.481 13:45:14 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:01.481 13:45:14 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.481 13:45:14 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:01.481 13:45:15 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.481 13:45:15 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:11:01.481 13:45:15 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:01.481 [2024-10-15 13:45:15.081754] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:11:01.481 [2024-10-15 13:45:15.083498] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:01.481 [2024-10-15 13:45:15.083544] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:01.481 [2024-10-15 13:45:15.083563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:01.481 [2024-10-15 13:45:15.083583] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:01.481 [2024-10-15 13:45:15.083594] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:01.481 [2024-10-15 13:45:15.083604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:01.481 [2024-10-15 13:45:15.083615] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:01.481 [2024-10-15 13:45:15.083624] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:01.481 [2024-10-15 13:45:15.083634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:01.481 [2024-10-15 13:45:15.083643] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:01.481 [2024-10-15 13:45:15.083653] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:01.481 [2024-10-15 13:45:15.083661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:01.740 13:45:15 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:11:01.740 13:45:15 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:01.740 13:45:15 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:01.740 13:45:15 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:01.740 13:45:15 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:01.740 13:45:15 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:01.740 13:45:15 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.740 13:45:15 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:01.997 13:45:15 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.997 13:45:15 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:01.997 13:45:15 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:01.997 13:45:15 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:01.997 13:45:15 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:01.997 13:45:15 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:01.997 13:45:15 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:01.997 13:45:15 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:01.997 13:45:15 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:01.997 13:45:15 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:01.997 13:45:15 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
00:11:02.255 13:45:15 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:02.255 13:45:15 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:02.256 13:45:15 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:14.457 13:45:27 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:14.457 13:45:27 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:14.457 13:45:27 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:14.457 13:45:27 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:14.457 13:45:27 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:14.457 13:45:27 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:14.457 13:45:27 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.457 13:45:27 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:14.457 13:45:27 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.457 13:45:27 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:14.457 13:45:27 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:14.457 13:45:27 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:14.457 13:45:27 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:14.457 13:45:27 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:14.457 13:45:27 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:14.457 13:45:27 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:14.457 13:45:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:14.457 13:45:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:14.457 13:45:27 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:14.457 13:45:27 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:14.457 13:45:27 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:14.457 13:45:27 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.457 13:45:27 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:14.457 13:45:27 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.457 13:45:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:14.457 13:45:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:14.457 [2024-10-15 13:45:27.981991] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
00:11:14.457 [2024-10-15 13:45:27.983603] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:14.457 [2024-10-15 13:45:27.983646] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:14.457 [2024-10-15 13:45:27.983661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:14.457 [2024-10-15 13:45:27.983686] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:14.457 [2024-10-15 13:45:27.983696] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:14.457 [2024-10-15 13:45:27.983708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:14.457 [2024-10-15 13:45:27.983718] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:14.457 [2024-10-15 13:45:27.983729] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:14.457 [2024-10-15 13:45:27.983737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:14.457 [2024-10-15 13:45:27.983749] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:14.457 [2024-10-15 13:45:27.983757] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:14.457 [2024-10-15 13:45:27.983768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:14.717 13:45:28 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:14.717 13:45:28 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:14.717 13:45:28 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:14.717 13:45:28 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:14.717 13:45:28 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:14.717 13:45:28 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:14.717 13:45:28 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.717 13:45:28 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:14.717 [2024-10-15 13:45:28.481994] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:11:14.717 [2024-10-15 13:45:28.483584] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:14.717 [2024-10-15 13:45:28.483623] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:14.717 [2024-10-15 13:45:28.483638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:14.717 [2024-10-15 13:45:28.483660] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:14.717 [2024-10-15 13:45:28.483672] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:14.717 [2024-10-15 13:45:28.483681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:14.717 [2024-10-15 13:45:28.483692] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:14.717 [2024-10-15 13:45:28.483700] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:14.717 [2024-10-15 13:45:28.483713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:14.717 [2024-10-15 13:45:28.483722] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:14.717 [2024-10-15 13:45:28.483732] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:14.717 [2024-10-15 13:45:28.483740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:14.717 13:45:28 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.977 13:45:28 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:11:14.977 13:45:28 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:15.236 13:45:29 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:11:15.236 13:45:29 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:15.236 13:45:29 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:15.236 13:45:29 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:15.236 13:45:29 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:15.236 13:45:29 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:15.236 13:45:29 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.236 13:45:29 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:15.236 13:45:29 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.497 13:45:29 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:15.497 13:45:29 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:15.497 13:45:29 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:15.497 13:45:29 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:15.497 13:45:29 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:15.497 13:45:29 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:15.497 13:45:29 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:15.497 13:45:29 
sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:15.497 13:45:29 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:15.497 13:45:29 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:15.497 13:45:29 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:15.497 13:45:29 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:15.497 13:45:29 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:27.709 13:45:41 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:27.709 13:45:41 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:27.709 13:45:41 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:27.709 13:45:41 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:27.709 13:45:41 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:27.709 13:45:41 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:27.709 13:45:41 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.709 13:45:41 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:27.709 13:45:41 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.709 13:45:41 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:27.709 13:45:41 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:27.709 13:45:41 sw_hotplug -- common/autotest_common.sh@717 -- # time=46.32 00:11:27.709 13:45:41 sw_hotplug -- common/autotest_common.sh@718 -- # echo 46.32 00:11:27.709 13:45:41 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:11:27.709 13:45:41 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=46.32 00:11:27.709 13:45:41 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 46.32 2 00:11:27.709 remove_attach_helper took 46.32s to complete (handling 2 nvme drive(s)) 13:45:41 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:11:27.709 13:45:41 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.709 13:45:41 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:27.709 13:45:41 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.709 13:45:41 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:11:27.709 13:45:41 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.709 13:45:41 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:27.709 13:45:41 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.709 13:45:41 sw_hotplug -- nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true 00:11:27.709 13:45:41 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:11:27.709 13:45:41 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:11:27.709 13:45:41 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:11:27.709 13:45:41 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:11:27.709 13:45:41 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:11:27.710 13:45:41 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:11:27.710 13:45:41 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 true 00:11:27.710 13:45:41 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:11:27.710 13:45:41 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # 
local hotplug_wait=6 00:11:27.710 13:45:41 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:11:27.710 13:45:41 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:11:27.710 13:45:41 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:11:34.271 13:45:47 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:34.271 13:45:47 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:34.271 13:45:47 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:34.271 13:45:47 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:34.271 13:45:47 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:34.271 13:45:47 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:34.271 13:45:47 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:34.271 13:45:47 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:34.271 13:45:47 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:34.271 13:45:47 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:34.272 13:45:47 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:34.272 13:45:47 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.272 13:45:47 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:34.272 13:45:47 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.272 13:45:47 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:34.272 13:45:47 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:34.272 [2024-10-15 13:45:47.424019] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:11:34.272 [2024-10-15 13:45:47.425120] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:34.272 [2024-10-15 13:45:47.425160] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:34.272 [2024-10-15 13:45:47.425172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:34.272 [2024-10-15 13:45:47.425194] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:34.272 [2024-10-15 13:45:47.425202] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:34.272 [2024-10-15 13:45:47.425212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:34.272 [2024-10-15 13:45:47.425229] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:34.272 [2024-10-15 13:45:47.425239] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:34.272 [2024-10-15 13:45:47.425246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:34.272 [2024-10-15 13:45:47.425255] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:34.272 [2024-10-15 13:45:47.425262] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:34.272 [2024-10-15 13:45:47.425276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY 
REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:34.272 [2024-10-15 13:45:47.824036] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 00:11:34.272 [2024-10-15 13:45:47.825147] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:34.273 [2024-10-15 13:45:47.825180] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:34.273 [2024-10-15 13:45:47.825195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:34.273 [2024-10-15 13:45:47.825215] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:34.273 [2024-10-15 13:45:47.825233] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:34.273 [2024-10-15 13:45:47.825241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:34.273 [2024-10-15 13:45:47.825251] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:34.273 [2024-10-15 13:45:47.825259] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:34.273 [2024-10-15 13:45:47.825268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:34.273 [2024-10-15 13:45:47.825275] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:34.273 [2024-10-15 13:45:47.825284] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:34.273 [2024-10-15 13:45:47.825291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:34.273 13:45:47 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:34.273 13:45:47 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:34.273 13:45:47 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:34.273 13:45:47 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:34.273 13:45:47 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:34.273 13:45:47 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:34.273 13:45:47 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.273 13:45:47 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:34.274 13:45:47 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.274 13:45:47 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:34.274 13:45:47 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:34.274 13:45:48 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:34.274 13:45:48 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:34.274 13:45:48 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:34.538 13:45:48 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:34.538 13:45:48 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:34.538 13:45:48 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for 
dev in "${nvmes[@]}" 00:11:34.538 13:45:48 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:34.538 13:45:48 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:34.538 13:45:48 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:34.538 13:45:48 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:34.538 13:45:48 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:46.742 13:46:00 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:46.742 13:46:00 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:46.742 13:46:00 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:46.742 13:46:00 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:46.742 13:46:00 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:46.742 13:46:00 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:46.742 13:46:00 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.742 13:46:00 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:46.742 13:46:00 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.742 13:46:00 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:46.742 13:46:00 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:46.742 13:46:00 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:46.742 13:46:00 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:46.742 13:46:00 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:46.742 13:46:00 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:46.742 13:46:00 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:46.742 13:46:00 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:46.742 13:46:00 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:46.742 13:46:00 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:46.742 13:46:00 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:46.742 13:46:00 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:46.742 13:46:00 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.742 13:46:00 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:46.742 13:46:00 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.742 13:46:00 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:46.742 13:46:00 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:46.742 [2024-10-15 13:46:00.324285] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
00:11:46.742 [2024-10-15 13:46:00.325444] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:46.742 [2024-10-15 13:46:00.325478] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:46.742 [2024-10-15 13:46:00.325489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:46.742 [2024-10-15 13:46:00.325510] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:46.742 [2024-10-15 13:46:00.325518] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:46.742 [2024-10-15 13:46:00.325527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:46.742 [2024-10-15 13:46:00.325534] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:46.742 [2024-10-15 13:46:00.325542] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:46.742 [2024-10-15 13:46:00.325550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:46.742 [2024-10-15 13:46:00.325558] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:46.742 [2024-10-15 13:46:00.325565] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:46.742 [2024-10-15 13:46:00.325573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:47.001 [2024-10-15 13:46:00.724293] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:11:47.001 [2024-10-15 13:46:00.725514] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:47.001 [2024-10-15 13:46:00.725547] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:47.001 [2024-10-15 13:46:00.725560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:47.001 [2024-10-15 13:46:00.725579] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:47.001 [2024-10-15 13:46:00.725592] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:47.001 [2024-10-15 13:46:00.725599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:47.001 [2024-10-15 13:46:00.725609] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:47.001 [2024-10-15 13:46:00.725616] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:47.001 [2024-10-15 13:46:00.725625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:47.001 [2024-10-15 13:46:00.725632] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:47.001 [2024-10-15 13:46:00.725640] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:47.001 [2024-10-15 13:46:00.725647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:47.258 13:46:00 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:47.258 13:46:00 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:47.258 13:46:00 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:47.258 13:46:00 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:47.258 13:46:00 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:47.258 13:46:00 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:47.258 13:46:00 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.258 13:46:00 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:47.258 13:46:00 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.258 13:46:00 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:47.258 13:46:00 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:47.258 13:46:00 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:47.258 13:46:00 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:47.258 13:46:00 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:47.258 13:46:01 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:47.258 13:46:01 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:47.258 13:46:01 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:47.258 13:46:01 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:47.258 13:46:01 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:11:47.516 13:46:01 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:47.516 13:46:01 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:47.516 13:46:01 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:59.714 13:46:13 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:59.714 13:46:13 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:59.714 13:46:13 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:59.714 13:46:13 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:59.714 13:46:13 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:59.714 13:46:13 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:59.714 13:46:13 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.714 13:46:13 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:59.714 13:46:13 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.714 13:46:13 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:59.714 13:46:13 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:59.714 13:46:13 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:59.714 13:46:13 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:59.714 13:46:13 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:59.714 13:46:13 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:59.714 13:46:13 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:59.714 13:46:13 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:59.714 13:46:13 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:59.714 13:46:13 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:59.714 13:46:13 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.714 13:46:13 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:59.714 13:46:13 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:59.714 13:46:13 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:59.714 13:46:13 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.714 13:46:13 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:59.714 13:46:13 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:59.715 [2024-10-15 13:46:13.224490] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
00:11:59.715 [2024-10-15 13:46:13.225757] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:59.715 [2024-10-15 13:46:13.225794] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:59.715 [2024-10-15 13:46:13.225806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:59.715 [2024-10-15 13:46:13.225827] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:59.715 [2024-10-15 13:46:13.225835] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:59.715 [2024-10-15 13:46:13.225843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:59.715 [2024-10-15 13:46:13.225852] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:59.715 [2024-10-15 13:46:13.225864] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:59.715 [2024-10-15 13:46:13.225870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:59.715 [2024-10-15 13:46:13.225881] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:59.715 [2024-10-15 13:46:13.225888] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:59.715 [2024-10-15 13:46:13.225896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:59.972 13:46:13 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:59.972 13:46:13 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:59.972 13:46:13 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:59.972 13:46:13 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:59.972 13:46:13 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:59.972 13:46:13 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:59.972 13:46:13 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.972 13:46:13 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:59.972 13:46:13 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.972 13:46:13 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:11:59.972 13:46:13 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:00.230 [2024-10-15 13:46:13.824492] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:12:00.230 [2024-10-15 13:46:13.825442] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:00.230 [2024-10-15 13:46:13.825581] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:00.230 [2024-10-15 13:46:13.825599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:00.230 [2024-10-15 13:46:13.825615] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:00.230 [2024-10-15 13:46:13.825625] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:00.230 [2024-10-15 13:46:13.825632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:00.230 [2024-10-15 13:46:13.825641] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:00.230 [2024-10-15 13:46:13.825648] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:00.230 [2024-10-15 13:46:13.825656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:00.230 [2024-10-15 13:46:13.825665] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:00.230 [2024-10-15 13:46:13.825675] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:00.230 [2024-10-15 13:46:13.825682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:00.488 13:46:14 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:12:00.488 13:46:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:00.488 13:46:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:00.488 13:46:14 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:00.488 13:46:14 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:00.488 13:46:14 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:00.488 13:46:14 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.488 13:46:14 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:00.488 13:46:14 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.746 13:46:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:00.746 13:46:14 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:00.746 13:46:14 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:00.746 13:46:14 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:00.746 13:46:14 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:00.746 13:46:14 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:00.746 13:46:14 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:00.746 13:46:14 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:00.746 13:46:14 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:00.746 13:46:14 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
00:12:01.004 13:46:14 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:01.004 13:46:14 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:01.004 13:46:14 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:13.210 13:46:26 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:13.210 13:46:26 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:13.210 13:46:26 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:13.210 13:46:26 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:13.210 13:46:26 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.210 13:46:26 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:13.210 13:46:26 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:13.210 13:46:26 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:13.210 13:46:26 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.210 13:46:26 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:13.210 13:46:26 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:13.210 13:46:26 sw_hotplug -- common/autotest_common.sh@717 -- # time=45.26 00:12:13.210 13:46:26 sw_hotplug -- common/autotest_common.sh@718 -- # echo 45.26 00:12:13.210 13:46:26 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:12:13.210 13:46:26 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.26 00:12:13.210 13:46:26 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.26 2 00:12:13.210 remove_attach_helper took 45.26s to complete (handling 2 nvme drive(s)) 13:46:26 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:12:13.210 13:46:26 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 67240 00:12:13.210 13:46:26 sw_hotplug -- common/autotest_common.sh@950 -- # '[' -z 67240 ']' 00:12:13.210 13:46:26 sw_hotplug -- common/autotest_common.sh@954 -- # kill -0 67240 00:12:13.210 13:46:26 sw_hotplug -- common/autotest_common.sh@955 -- # uname 00:12:13.210 13:46:26 sw_hotplug -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:13.210 13:46:26 sw_hotplug -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 67240 00:12:13.210 13:46:26 sw_hotplug -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:13.210 13:46:26 sw_hotplug -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:13.210 13:46:26 sw_hotplug -- common/autotest_common.sh@968 -- # echo 'killing process with pid 67240' 00:12:13.210 killing process with pid 67240 00:12:13.210 13:46:26 sw_hotplug -- common/autotest_common.sh@969 -- # kill 67240 00:12:13.210 13:46:26 sw_hotplug -- common/autotest_common.sh@974 -- # wait 67240 00:12:14.146 13:46:27 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:12:14.403 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:14.969 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:14.969 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:14.969 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:12:14.969 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:12:14.969 00:12:14.969 real 2m31.404s 00:12:14.969 user 1m53.614s 00:12:14.969 sys 0m16.610s 00:12:14.969 13:46:28 sw_hotplug -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:12:14.969 ************************************ 00:12:14.969 END TEST sw_hotplug 00:12:14.969 ************************************ 00:12:14.969 13:46:28 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:14.969 13:46:28 -- spdk/autotest.sh@243 -- # [[ 1 -eq 1 ]] 00:12:14.969 13:46:28 -- spdk/autotest.sh@244 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:12:14.969 13:46:28 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:12:14.969 13:46:28 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:14.969 13:46:28 -- common/autotest_common.sh@10 -- # set +x 00:12:14.969 ************************************ 00:12:14.969 START TEST nvme_xnvme 00:12:14.969 ************************************ 00:12:14.969 13:46:28 nvme_xnvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:12:15.228 * Looking for test storage... 00:12:15.228 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:15.228 13:46:28 nvme_xnvme -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:15.228 13:46:28 nvme_xnvme -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:15.228 13:46:28 nvme_xnvme -- common/autotest_common.sh@1691 -- # lcov --version 00:12:15.228 13:46:28 nvme_xnvme -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:15.228 13:46:28 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:15.228 13:46:28 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:15.228 13:46:28 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:15.228 13:46:28 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:12:15.228 13:46:28 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:12:15.228 13:46:28 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:12:15.228 13:46:28 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:12:15.228 13:46:28 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:12:15.228 13:46:28 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:12:15.228 13:46:28 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:12:15.228 13:46:28 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:15.228 13:46:28 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:12:15.228 13:46:28 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:12:15.228 13:46:28 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:15.228 13:46:28 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:15.228 13:46:28 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:12:15.228 13:46:28 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:12:15.228 13:46:28 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:15.228 13:46:28 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:12:15.228 13:46:28 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:12:15.228 13:46:28 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:12:15.228 13:46:28 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:12:15.228 13:46:28 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:15.228 13:46:28 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:12:15.228 13:46:28 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:12:15.228 13:46:28 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:15.228 13:46:28 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:15.228 13:46:28 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:12:15.228 13:46:28 nvme_xnvme -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:15.228 13:46:28 nvme_xnvme -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:15.228 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:15.228 --rc genhtml_branch_coverage=1 00:12:15.228 --rc genhtml_function_coverage=1 00:12:15.228 --rc genhtml_legend=1 00:12:15.228 --rc geninfo_all_blocks=1 00:12:15.228 --rc geninfo_unexecuted_blocks=1 00:12:15.228 00:12:15.228 ' 00:12:15.228 13:46:28 nvme_xnvme -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:15.228 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:15.228 --rc genhtml_branch_coverage=1 00:12:15.228 --rc genhtml_function_coverage=1 00:12:15.228 --rc genhtml_legend=1 00:12:15.228 --rc geninfo_all_blocks=1 00:12:15.228 --rc geninfo_unexecuted_blocks=1 00:12:15.228 00:12:15.228 ' 00:12:15.228 13:46:28 nvme_xnvme -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:15.228 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:15.228 --rc genhtml_branch_coverage=1 00:12:15.228 --rc genhtml_function_coverage=1 00:12:15.228 --rc genhtml_legend=1 00:12:15.228 --rc geninfo_all_blocks=1 00:12:15.228 --rc geninfo_unexecuted_blocks=1 00:12:15.228 00:12:15.228 ' 00:12:15.228 13:46:28 nvme_xnvme -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:15.228 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:15.228 --rc genhtml_branch_coverage=1 00:12:15.228 --rc genhtml_function_coverage=1 00:12:15.228 --rc genhtml_legend=1 00:12:15.228 --rc geninfo_all_blocks=1 00:12:15.228 --rc geninfo_unexecuted_blocks=1 00:12:15.228 00:12:15.228 ' 00:12:15.228 13:46:28 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:15.228 13:46:28 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:12:15.229 13:46:28 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:15.229 13:46:28 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:15.229 13:46:28 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:15.229 13:46:28 nvme_xnvme -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:15.229 13:46:28 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:15.229 13:46:28 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:15.229 13:46:28 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:12:15.229 13:46:28 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:15.229 13:46:28 nvme_xnvme -- xnvme/xnvme.sh@85 -- # run_test xnvme_to_malloc_dd_copy malloc_to_xnvme_copy 00:12:15.229 13:46:28 nvme_xnvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:12:15.229 13:46:28 nvme_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:15.229 13:46:28 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:15.229 ************************************ 00:12:15.229 START TEST xnvme_to_malloc_dd_copy 00:12:15.229 ************************************ 00:12:15.229 13:46:28 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1125 -- # malloc_to_xnvme_copy 00:12:15.229 13:46:28 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@14 -- # init_null_blk gb=1 00:12:15.229 13:46:28 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@186 -- # [[ -e /sys/module/null_blk ]] 00:12:15.229 13:46:28 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@186 -- # modprobe null_blk gb=1 00:12:15.229 13:46:28 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@187 -- # return 00:12:15.229 13:46:28 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@16 -- # local mbdev0=malloc0 mbdev0_bs=512 00:12:15.229 13:46:28 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # xnvme_io=() 00:12:15.229 13:46:28 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:12:15.229 13:46:28 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@18 -- # local io 00:12:15.229 13:46:28 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@20 -- # xnvme_io+=(libaio) 00:12:15.229 13:46:28 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@21 -- # xnvme_io+=(io_uring) 00:12:15.229 13:46:28 
nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@25 -- # mbdev0_b=2097152 00:12:15.229 13:46:28 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@26 -- # xnvme0_dev=/dev/nullb0 00:12:15.229 13:46:28 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='2097152' ['block_size']='512') 00:12:15.229 13:46:28 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # local -A method_bdev_malloc_create_0 00:12:15.229 13:46:28 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # method_bdev_xnvme_create_0=() 00:12:15.229 13:46:28 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # local -A method_bdev_xnvme_create_0 00:12:15.229 13:46:28 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@35 -- # method_bdev_xnvme_create_0["name"]=null0 00:12:15.229 13:46:28 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@36 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:12:15.229 13:46:28 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:12:15.229 13:46:28 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:12:15.229 13:46:28 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:12:15.229 13:46:28 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:12:15.229 13:46:28 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:15.229 13:46:28 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:15.229 { 00:12:15.229 "subsystems": [ 00:12:15.229 { 00:12:15.229 "subsystem": "bdev", 00:12:15.229 "config": [ 00:12:15.229 { 00:12:15.229 "params": { 00:12:15.229 "block_size": 512, 00:12:15.229 "num_blocks": 2097152, 00:12:15.229 "name": "malloc0" 00:12:15.229 }, 00:12:15.229 "method": "bdev_malloc_create" 00:12:15.229 }, 00:12:15.229 { 00:12:15.229 "params": { 00:12:15.229 "io_mechanism": "libaio", 00:12:15.229 "filename": "/dev/nullb0", 00:12:15.229 "name": "null0" 00:12:15.229 }, 00:12:15.229 "method": "bdev_xnvme_create" 00:12:15.229 }, 00:12:15.229 { 00:12:15.229 "method": "bdev_wait_for_examine" 00:12:15.229 } 00:12:15.229 ] 00:12:15.229 } 00:12:15.229 ] 00:12:15.229 } 00:12:15.229 [2024-10-15 13:46:28.949333] Starting SPDK v25.01-pre git sha1 5a8c76d99 / DPDK 24.03.0 initialization... 
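The JSON printed above is generated by gen_conf on the fly and handed to spdk_dd over /dev/fd/62. An equivalent standalone run, with the same config written to a file (the file name is illustrative, not from the harness):

    cat > xnvme_copy.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            { "params": { "block_size": 512, "num_blocks": 2097152, "name": "malloc0" },
              "method": "bdev_malloc_create" },
            { "params": { "io_mechanism": "libaio", "filename": "/dev/nullb0", "name": "null0" },
              "method": "bdev_xnvme_create" },
            { "method": "bdev_wait_for_examine" }
          ]
        }
      ]
    }
    EOF
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json xnvme_copy.json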
00:12:15.229 [2024-10-15 13:46:28.949617] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68647 ] 00:12:15.488 [2024-10-15 13:46:29.098680] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:15.488 [2024-10-15 13:46:29.192303] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:17.385  [2024-10-15T13:46:32.154Z] Copying: 229/1024 [MB] (229 MBps) [2024-10-15T13:46:33.526Z] Copying: 445/1024 [MB] (215 MBps) [2024-10-15T13:46:34.458Z] Copying: 694/1024 [MB] (249 MBps) [2024-10-15T13:46:34.458Z] Copying: 990/1024 [MB] (295 MBps) [2024-10-15T13:46:36.357Z] Copying: 1024/1024 [MB] (average 248 MBps) 00:12:22.569 00:12:22.569 13:46:36 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:12:22.569 13:46:36 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:12:22.569 13:46:36 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:22.569 13:46:36 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:22.569 { 00:12:22.569 "subsystems": [ 00:12:22.569 { 00:12:22.569 "subsystem": "bdev", 00:12:22.569 "config": [ 00:12:22.569 { 00:12:22.569 "params": { 00:12:22.569 "block_size": 512, 00:12:22.569 "num_blocks": 2097152, 00:12:22.569 "name": "malloc0" 00:12:22.569 }, 00:12:22.569 "method": "bdev_malloc_create" 00:12:22.569 }, 00:12:22.569 { 00:12:22.569 "params": { 00:12:22.569 "io_mechanism": "libaio", 00:12:22.569 "filename": "/dev/nullb0", 00:12:22.569 "name": "null0" 00:12:22.569 }, 00:12:22.569 "method": "bdev_xnvme_create" 00:12:22.569 }, 00:12:22.569 { 00:12:22.569 "method": "bdev_wait_for_examine" 00:12:22.569 } 00:12:22.569 ] 00:12:22.569 } 00:12:22.569 ] 00:12:22.569 } 00:12:22.569 [2024-10-15 13:46:36.273788] Starting SPDK v25.01-pre git sha1 5a8c76d99 / DPDK 24.03.0 initialization... 
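Both directions of the copy target /dev/nullb0, which init_null_blk (dd/common.sh@186-187) provides via the kernel null_blk module and remove_null_blk later unloads. The visible commands amount to:

    # Create a 1 GiB /dev/nullb0 to back the xnvme bdev for the duration of the test.
    modprobe null_blk gb=1
    # ... run the copy tests ...
    modprobe -r null_blk    # cleanup, as done by remove_null_blk (dd/common.sh@191)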
00:12:22.569 [2024-10-15 13:46:36.273908] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68733 ] 00:12:22.827 [2024-10-15 13:46:36.422980] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:22.827 [2024-10-15 13:46:36.506021] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:24.728  [2024-10-15T13:46:39.451Z] Copying: 228/1024 [MB] (228 MBps) [2024-10-15T13:46:40.386Z] Copying: 512/1024 [MB] (284 MBps) [2024-10-15T13:46:41.318Z] Copying: 808/1024 [MB] (296 MBps) [2024-10-15T13:46:43.216Z] Copying: 1024/1024 [MB] (average 274 MBps) 00:12:29.428 00:12:29.428 13:46:43 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:12:29.428 13:46:43 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:12:29.428 13:46:43 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:12:29.428 13:46:43 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:12:29.428 13:46:43 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:29.428 13:46:43 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:29.428 { 00:12:29.428 "subsystems": [ 00:12:29.428 { 00:12:29.428 "subsystem": "bdev", 00:12:29.428 "config": [ 00:12:29.428 { 00:12:29.428 "params": { 00:12:29.428 "block_size": 512, 00:12:29.428 "num_blocks": 2097152, 00:12:29.428 "name": "malloc0" 00:12:29.428 }, 00:12:29.428 "method": "bdev_malloc_create" 00:12:29.428 }, 00:12:29.428 { 00:12:29.428 "params": { 00:12:29.428 "io_mechanism": "io_uring", 00:12:29.428 "filename": "/dev/nullb0", 00:12:29.428 "name": "null0" 00:12:29.428 }, 00:12:29.428 "method": "bdev_xnvme_create" 00:12:29.428 }, 00:12:29.428 { 00:12:29.428 "method": "bdev_wait_for_examine" 00:12:29.428 } 00:12:29.428 ] 00:12:29.428 } 00:12:29.428 ] 00:12:29.428 } 00:12:29.428 [2024-10-15 13:46:43.125624] Starting SPDK v25.01-pre git sha1 5a8c76d99 / DPDK 24.03.0 initialization... 
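The io_uring pass that follows only swaps the xnvme io mechanism: the loop at xnvme.sh@38-39 overwrites one key in the bdev_xnvme_create params before regenerating the config. In bash terms (array and key names taken from the trace; the loop body beyond the assignment is elided):

    xnvme_io=(libaio io_uring)
    for io in "${xnvme_io[@]}"; do
        method_bdev_xnvme_create_0["io_mechanism"]=$io
        # gen_conf then emits the JSON with the updated io_mechanism
    done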
00:12:29.428 [2024-10-15 13:46:43.125913] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68818 ] 00:12:29.685 [2024-10-15 13:46:43.277498] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:29.685 [2024-10-15 13:46:43.380833] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:31.583  [2024-10-15T13:46:46.301Z] Copying: 304/1024 [MB] (304 MBps) [2024-10-15T13:46:47.253Z] Copying: 607/1024 [MB] (303 MBps) [2024-10-15T13:46:47.819Z] Copying: 911/1024 [MB] (304 MBps) [2024-10-15T13:46:49.717Z] Copying: 1024/1024 [MB] (average 303 MBps) 00:12:35.929 00:12:35.929 13:46:49 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:12:35.929 13:46:49 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:12:35.929 13:46:49 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:35.929 13:46:49 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:35.929 { 00:12:35.929 "subsystems": [ 00:12:35.929 { 00:12:35.929 "subsystem": "bdev", 00:12:35.929 "config": [ 00:12:35.929 { 00:12:35.929 "params": { 00:12:35.929 "block_size": 512, 00:12:35.929 "num_blocks": 2097152, 00:12:35.929 "name": "malloc0" 00:12:35.929 }, 00:12:35.929 "method": "bdev_malloc_create" 00:12:35.929 }, 00:12:35.929 { 00:12:35.929 "params": { 00:12:35.929 "io_mechanism": "io_uring", 00:12:35.929 "filename": "/dev/nullb0", 00:12:35.929 "name": "null0" 00:12:35.929 }, 00:12:35.929 "method": "bdev_xnvme_create" 00:12:35.929 }, 00:12:35.929 { 00:12:35.929 "method": "bdev_wait_for_examine" 00:12:35.929 } 00:12:35.929 ] 00:12:35.929 } 00:12:35.929 ] 00:12:35.929 } 00:12:35.929 [2024-10-15 13:46:49.702962] Starting SPDK v25.01-pre git sha1 5a8c76d99 / DPDK 24.03.0 initialization... 
00:12:35.929 [2024-10-15 13:46:49.703086] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68894 ] 00:12:36.187 [2024-10-15 13:46:49.851122] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:36.187 [2024-10-15 13:46:49.946076] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:38.085  [2024-10-15T13:46:52.806Z] Copying: 306/1024 [MB] (306 MBps) [2024-10-15T13:46:54.179Z] Copying: 614/1024 [MB] (308 MBps) [2024-10-15T13:46:54.179Z] Copying: 921/1024 [MB] (306 MBps) [2024-10-15T13:46:56.707Z] Copying: 1024/1024 [MB] (average 307 MBps) 00:12:42.919 00:12:42.919 13:46:56 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@52 -- # remove_null_blk 00:12:42.919 13:46:56 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@191 -- # modprobe -r null_blk 00:12:42.919 00:12:42.919 real 0m27.307s 00:12:42.919 user 0m23.964s 00:12:42.919 sys 0m2.810s 00:12:42.919 13:46:56 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:42.919 ************************************ 00:12:42.919 13:46:56 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:42.919 END TEST xnvme_to_malloc_dd_copy 00:12:42.919 ************************************ 00:12:42.919 13:46:56 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:12:42.919 13:46:56 nvme_xnvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:12:42.919 13:46:56 nvme_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:42.919 13:46:56 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:42.919 ************************************ 00:12:42.919 START TEST xnvme_bdevperf 00:12:42.919 ************************************ 00:12:42.919 13:46:56 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1125 -- # xnvme_bdevperf 00:12:42.919 13:46:56 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@57 -- # init_null_blk gb=1 00:12:42.919 13:46:56 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@186 -- # [[ -e /sys/module/null_blk ]] 00:12:42.919 13:46:56 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@186 -- # modprobe null_blk gb=1 00:12:42.919 13:46:56 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@187 -- # return 00:12:42.919 13:46:56 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # xnvme_io=() 00:12:42.919 13:46:56 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:12:42.919 13:46:56 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@60 -- # local io 00:12:42.919 13:46:56 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@62 -- # xnvme_io+=(libaio) 00:12:42.919 13:46:56 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@63 -- # xnvme_io+=(io_uring) 00:12:42.919 13:46:56 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@65 -- # xnvme0_dev=/dev/nullb0 00:12:42.919 13:46:56 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # method_bdev_xnvme_create_0=() 00:12:42.919 13:46:56 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # local -A method_bdev_xnvme_create_0 00:12:42.919 13:46:56 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@68 -- # method_bdev_xnvme_create_0["name"]=null0 00:12:42.919 13:46:56 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@69 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:12:42.919 13:46:56 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:12:42.919 
13:46:56 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:12:42.919 13:46:56 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:12:42.919 13:46:56 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:12:42.919 13:46:56 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:42.919 13:46:56 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:42.919 { 00:12:42.919 "subsystems": [ 00:12:42.919 { 00:12:42.919 "subsystem": "bdev", 00:12:42.919 "config": [ 00:12:42.919 { 00:12:42.919 "params": { 00:12:42.919 "io_mechanism": "libaio", 00:12:42.919 "filename": "/dev/nullb0", 00:12:42.919 "name": "null0" 00:12:42.919 }, 00:12:42.919 "method": "bdev_xnvme_create" 00:12:42.919 }, 00:12:42.919 { 00:12:42.919 "method": "bdev_wait_for_examine" 00:12:42.919 } 00:12:42.919 ] 00:12:42.919 } 00:12:42.919 ] 00:12:42.919 } 00:12:42.919 [2024-10-15 13:46:56.289541] Starting SPDK v25.01-pre git sha1 5a8c76d99 / DPDK 24.03.0 initialization... 00:12:42.919 [2024-10-15 13:46:56.289653] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68999 ] 00:12:42.919 [2024-10-15 13:46:56.438100] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:42.919 [2024-10-15 13:46:56.551287] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:43.178 Running I/O for 5 seconds... 00:12:45.053 152896.00 IOPS, 597.25 MiB/s [2024-10-15T13:47:00.214Z] 166080.00 IOPS, 648.75 MiB/s [2024-10-15T13:47:01.181Z] 177173.33 IOPS, 692.08 MiB/s [2024-10-15T13:47:02.113Z] 182720.00 IOPS, 713.75 MiB/s 00:12:48.325 Latency(us) 00:12:48.325 [2024-10-15T13:47:02.113Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:48.325 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:12:48.325 null0 : 5.00 186328.67 727.85 0.00 0.00 340.98 112.64 2079.51 00:12:48.325 [2024-10-15T13:47:02.113Z] =================================================================================================================== 00:12:48.325 [2024-10-15T13:47:02.113Z] Total : 186328.67 727.85 0.00 0.00 340.98 112.64 2079.51 00:12:48.889 13:47:02 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:12:48.889 13:47:02 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:12:48.889 13:47:02 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:12:48.889 13:47:02 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:12:48.889 13:47:02 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:48.889 13:47:02 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:48.889 { 00:12:48.889 "subsystems": [ 00:12:48.889 { 00:12:48.889 "subsystem": "bdev", 00:12:48.889 "config": [ 00:12:48.889 { 00:12:48.889 "params": { 00:12:48.889 "io_mechanism": "io_uring", 00:12:48.890 "filename": "/dev/nullb0", 00:12:48.890 "name": "null0" 00:12:48.890 }, 00:12:48.890 "method": "bdev_xnvme_create" 00:12:48.890 }, 00:12:48.890 { 00:12:48.890 "method": 
"bdev_wait_for_examine" 00:12:48.890 } 00:12:48.890 ] 00:12:48.890 } 00:12:48.890 ] 00:12:48.890 } 00:12:48.890 [2024-10-15 13:47:02.502856] Starting SPDK v25.01-pre git sha1 5a8c76d99 / DPDK 24.03.0 initialization... 00:12:48.890 [2024-10-15 13:47:02.502991] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69073 ] 00:12:48.890 [2024-10-15 13:47:02.653061] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:49.146 [2024-10-15 13:47:02.754211] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:49.463 Running I/O for 5 seconds... 00:12:51.328 230848.00 IOPS, 901.75 MiB/s [2024-10-15T13:47:06.050Z] 231168.00 IOPS, 903.00 MiB/s [2024-10-15T13:47:06.984Z] 230954.67 IOPS, 902.17 MiB/s [2024-10-15T13:47:08.397Z] 230752.00 IOPS, 901.38 MiB/s [2024-10-15T13:47:08.397Z] 230489.60 IOPS, 900.35 MiB/s 00:12:54.609 Latency(us) 00:12:54.609 [2024-10-15T13:47:08.397Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:54.609 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:12:54.609 null0 : 5.00 230412.40 900.05 0.00 0.00 275.35 270.97 1556.48 00:12:54.609 [2024-10-15T13:47:08.397Z] =================================================================================================================== 00:12:54.609 [2024-10-15T13:47:08.397Z] Total : 230412.40 900.05 0.00 0.00 275.35 270.97 1556.48 00:12:54.868 13:47:08 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@82 -- # remove_null_blk 00:12:54.868 13:47:08 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@191 -- # modprobe -r null_blk 00:12:54.868 00:12:54.868 real 0m12.384s 00:12:54.868 user 0m9.903s 00:12:54.868 sys 0m2.249s 00:12:54.868 13:47:08 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:54.868 13:47:08 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:54.868 ************************************ 00:12:54.868 END TEST xnvme_bdevperf 00:12:54.868 ************************************ 00:12:54.868 00:12:54.868 real 0m39.906s 00:12:54.868 user 0m33.975s 00:12:54.868 sys 0m5.168s 00:12:54.868 13:47:08 nvme_xnvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:54.868 13:47:08 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:54.868 ************************************ 00:12:54.868 END TEST nvme_xnvme 00:12:54.868 ************************************ 00:12:54.868 13:47:08 -- spdk/autotest.sh@245 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:12:54.868 13:47:08 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:54.868 13:47:08 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:54.868 13:47:08 -- common/autotest_common.sh@10 -- # set +x 00:12:55.126 ************************************ 00:12:55.126 START TEST blockdev_xnvme 00:12:55.127 ************************************ 00:12:55.127 13:47:08 blockdev_xnvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:12:55.127 * Looking for test storage... 
00:12:55.127 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:12:55.127 13:47:08 blockdev_xnvme -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:55.127 13:47:08 blockdev_xnvme -- common/autotest_common.sh@1691 -- # lcov --version 00:12:55.127 13:47:08 blockdev_xnvme -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:55.127 13:47:08 blockdev_xnvme -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:55.127 13:47:08 blockdev_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:55.127 13:47:08 blockdev_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:55.127 13:47:08 blockdev_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:55.127 13:47:08 blockdev_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:12:55.127 13:47:08 blockdev_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:12:55.127 13:47:08 blockdev_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:12:55.127 13:47:08 blockdev_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:12:55.127 13:47:08 blockdev_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:12:55.127 13:47:08 blockdev_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:12:55.127 13:47:08 blockdev_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:12:55.127 13:47:08 blockdev_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:55.127 13:47:08 blockdev_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:12:55.127 13:47:08 blockdev_xnvme -- scripts/common.sh@345 -- # : 1 00:12:55.127 13:47:08 blockdev_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:55.127 13:47:08 blockdev_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:55.127 13:47:08 blockdev_xnvme -- scripts/common.sh@365 -- # decimal 1 00:12:55.127 13:47:08 blockdev_xnvme -- scripts/common.sh@353 -- # local d=1 00:12:55.127 13:47:08 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:55.127 13:47:08 blockdev_xnvme -- scripts/common.sh@355 -- # echo 1 00:12:55.127 13:47:08 blockdev_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:12:55.127 13:47:08 blockdev_xnvme -- scripts/common.sh@366 -- # decimal 2 00:12:55.127 13:47:08 blockdev_xnvme -- scripts/common.sh@353 -- # local d=2 00:12:55.127 13:47:08 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:55.127 13:47:08 blockdev_xnvme -- scripts/common.sh@355 -- # echo 2 00:12:55.127 13:47:08 blockdev_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:12:55.127 13:47:08 blockdev_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:55.127 13:47:08 blockdev_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:55.127 13:47:08 blockdev_xnvme -- scripts/common.sh@368 -- # return 0 00:12:55.127 13:47:08 blockdev_xnvme -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:55.127 13:47:08 blockdev_xnvme -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:55.127 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:55.127 --rc genhtml_branch_coverage=1 00:12:55.127 --rc genhtml_function_coverage=1 00:12:55.127 --rc genhtml_legend=1 00:12:55.127 --rc geninfo_all_blocks=1 00:12:55.127 --rc geninfo_unexecuted_blocks=1 00:12:55.127 00:12:55.127 ' 00:12:55.127 13:47:08 blockdev_xnvme -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:55.127 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:55.127 --rc genhtml_branch_coverage=1 00:12:55.127 --rc genhtml_function_coverage=1 00:12:55.127 --rc genhtml_legend=1 
00:12:55.127 --rc geninfo_all_blocks=1 00:12:55.127 --rc geninfo_unexecuted_blocks=1 00:12:55.127 00:12:55.127 ' 00:12:55.127 13:47:08 blockdev_xnvme -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:55.127 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:55.127 --rc genhtml_branch_coverage=1 00:12:55.127 --rc genhtml_function_coverage=1 00:12:55.127 --rc genhtml_legend=1 00:12:55.127 --rc geninfo_all_blocks=1 00:12:55.127 --rc geninfo_unexecuted_blocks=1 00:12:55.127 00:12:55.127 ' 00:12:55.127 13:47:08 blockdev_xnvme -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:55.127 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:55.127 --rc genhtml_branch_coverage=1 00:12:55.127 --rc genhtml_function_coverage=1 00:12:55.127 --rc genhtml_legend=1 00:12:55.127 --rc geninfo_all_blocks=1 00:12:55.127 --rc geninfo_unexecuted_blocks=1 00:12:55.127 00:12:55.127 ' 00:12:55.127 13:47:08 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:12:55.127 13:47:08 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:12:55.127 13:47:08 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:12:55.127 13:47:08 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:12:55.127 13:47:08 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:12:55.127 13:47:08 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:12:55.127 13:47:08 blockdev_xnvme -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:12:55.127 13:47:08 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:12:55.127 13:47:08 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:12:55.127 13:47:08 blockdev_xnvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:12:55.127 13:47:08 blockdev_xnvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:12:55.127 13:47:08 blockdev_xnvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:12:55.127 13:47:08 blockdev_xnvme -- bdev/blockdev.sh@673 -- # uname -s 00:12:55.127 13:47:08 blockdev_xnvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:12:55.127 13:47:08 blockdev_xnvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:12:55.127 13:47:08 blockdev_xnvme -- bdev/blockdev.sh@681 -- # test_type=xnvme 00:12:55.127 13:47:08 blockdev_xnvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:12:55.127 13:47:08 blockdev_xnvme -- bdev/blockdev.sh@683 -- # dek= 00:12:55.127 13:47:08 blockdev_xnvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:12:55.127 13:47:08 blockdev_xnvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:12:55.127 13:47:08 blockdev_xnvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:12:55.127 13:47:08 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == bdev ]] 00:12:55.127 13:47:08 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == crypto_* ]] 00:12:55.127 13:47:08 blockdev_xnvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:12:55.127 13:47:08 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=69215 00:12:55.127 13:47:08 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:12:55.127 13:47:08 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:12:55.127 13:47:08 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 69215 00:12:55.127 13:47:08 blockdev_xnvme -- common/autotest_common.sh@831 -- # 
'[' -z 69215 ']' 00:12:55.127 13:47:08 blockdev_xnvme -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:55.127 13:47:08 blockdev_xnvme -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:55.127 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:55.127 13:47:08 blockdev_xnvme -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:55.127 13:47:08 blockdev_xnvme -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:55.127 13:47:08 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:55.127 [2024-10-15 13:47:08.890572] Starting SPDK v25.01-pre git sha1 5a8c76d99 / DPDK 24.03.0 initialization... 00:12:55.127 [2024-10-15 13:47:08.890710] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69215 ] 00:12:55.386 [2024-10-15 13:47:09.036883] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:55.386 [2024-10-15 13:47:09.154347] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:56.320 13:47:09 blockdev_xnvme -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:56.320 13:47:09 blockdev_xnvme -- common/autotest_common.sh@864 -- # return 0 00:12:56.320 13:47:09 blockdev_xnvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:12:56.320 13:47:09 blockdev_xnvme -- bdev/blockdev.sh@728 -- # setup_xnvme_conf 00:12:56.320 13:47:09 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:12:56.320 13:47:09 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:12:56.320 13:47:09 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:12:56.578 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:56.578 Waiting for block devices as requested 00:12:56.578 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:12:56.835 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:12:56.835 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:12:56.835 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:13:02.094 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:13:02.094 13:47:15 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:13:02.094 13:47:15 blockdev_xnvme -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:13:02.094 13:47:15 blockdev_xnvme -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:13:02.094 13:47:15 blockdev_xnvme -- common/autotest_common.sh@1656 -- # local nvme bdf 00:13:02.094 13:47:15 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:13:02.094 13:47:15 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:13:02.094 13:47:15 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:13:02.094 13:47:15 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:13:02.094 13:47:15 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:13:02.094 13:47:15 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:13:02.094 13:47:15 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:13:02.094 
13:47:15 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:13:02.094 13:47:15 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:13:02.094 13:47:15 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:13:02.094 13:47:15 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:13:02.094 13:47:15 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n1 00:13:02.094 13:47:15 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:13:02.094 13:47:15 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:13:02.094 13:47:15 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:13:02.094 13:47:15 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:13:02.094 13:47:15 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n2 00:13:02.094 13:47:15 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme2n2 00:13:02.094 13:47:15 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:13:02.095 13:47:15 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:13:02.095 13:47:15 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:13:02.095 13:47:15 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n3 00:13:02.095 13:47:15 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme2n3 00:13:02.095 13:47:15 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:13:02.095 13:47:15 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:13:02.095 13:47:15 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:13:02.095 13:47:15 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3c3n1 00:13:02.095 13:47:15 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme3c3n1 00:13:02.095 13:47:15 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:13:02.095 13:47:15 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:13:02.095 13:47:15 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:13:02.095 13:47:15 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n1 00:13:02.095 13:47:15 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme3n1 00:13:02.095 13:47:15 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:13:02.095 13:47:15 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:13:02.095 13:47:15 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:13:02.095 13:47:15 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:13:02.095 13:47:15 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:13:02.095 13:47:15 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:02.095 13:47:15 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:13:02.095 13:47:15 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:13:02.095 13:47:15 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:13:02.095 13:47:15 blockdev_xnvme -- bdev/blockdev.sh@96 
-- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:02.095 13:47:15 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:13:02.095 13:47:15 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:13:02.095 13:47:15 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:13:02.095 13:47:15 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:02.095 13:47:15 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:13:02.095 13:47:15 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n2 ]] 00:13:02.095 13:47:15 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:13:02.095 13:47:15 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:02.095 13:47:15 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:13:02.095 13:47:15 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n3 ]] 00:13:02.095 13:47:15 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:13:02.095 13:47:15 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:02.095 13:47:15 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:13:02.095 13:47:15 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:13:02.095 13:47:15 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:13:02.095 13:47:15 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:02.095 13:47:15 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:13:02.095 13:47:15 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:13:02.095 13:47:15 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.095 13:47:15 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:02.095 13:47:15 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring' 'bdev_xnvme_create /dev/nvme2n2 nvme2n2 io_uring' 'bdev_xnvme_create /dev/nvme2n3 nvme2n3 io_uring' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring' 00:13:02.095 nvme0n1 00:13:02.095 nvme1n1 00:13:02.095 nvme2n1 00:13:02.095 nvme2n2 00:13:02.095 nvme2n3 00:13:02.095 nvme3n1 00:13:02.095 13:47:15 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.095 13:47:15 blockdev_xnvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:13:02.095 13:47:15 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.095 13:47:15 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:02.095 13:47:15 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.095 13:47:15 blockdev_xnvme -- bdev/blockdev.sh@739 -- # cat 00:13:02.095 13:47:15 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:13:02.095 13:47:15 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.095 13:47:15 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:02.095 13:47:15 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.095 13:47:15 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:13:02.095 13:47:15 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.095 13:47:15 blockdev_xnvme -- common/autotest_common.sh@10 
-- # set +x 00:13:02.095 13:47:15 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.095 13:47:15 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:13:02.095 13:47:15 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.095 13:47:15 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:02.095 13:47:15 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.095 13:47:15 blockdev_xnvme -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:13:02.095 13:47:15 blockdev_xnvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:13:02.095 13:47:15 blockdev_xnvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:13:02.095 13:47:15 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.095 13:47:15 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:02.095 13:47:15 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.095 13:47:15 blockdev_xnvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:13:02.095 13:47:15 blockdev_xnvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:13:02.095 13:47:15 blockdev_xnvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "cebb5f80-bfd2-4c48-be64-349e870d4b50"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "cebb5f80-bfd2-4c48-be64-349e870d4b50",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "a8a6e0b4-f991-41a7-b789-6d67fc265a3a"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "a8a6e0b4-f991-41a7-b789-6d67fc265a3a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "c30b600f-f60d-4e55-9be6-c45366400e7e"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "c30b600f-f60d-4e55-9be6-c45366400e7e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": 
false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "52b4a116-66ab-48a4-9c85-0d2cceb01f6e"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "52b4a116-66ab-48a4-9c85-0d2cceb01f6e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "c28f16b1-e842-4292-a4bb-8f1c4e10d7c3"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "c28f16b1-e842-4292-a4bb-8f1c4e10d7c3",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "678e76ad-7e6a-4f1a-ba6f-a98207554d4a"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "678e76ad-7e6a-4f1a-ba6f-a98207554d4a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:13:02.095 13:47:15 blockdev_xnvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:13:02.095 13:47:15 blockdev_xnvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=nvme0n1 00:13:02.095 13:47:15 blockdev_xnvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:13:02.095 13:47:15 blockdev_xnvme -- bdev/blockdev.sh@753 -- # killprocess 69215 00:13:02.095 13:47:15 blockdev_xnvme -- 
common/autotest_common.sh@950 -- # '[' -z 69215 ']' 00:13:02.095 13:47:15 blockdev_xnvme -- common/autotest_common.sh@954 -- # kill -0 69215 00:13:02.095 13:47:15 blockdev_xnvme -- common/autotest_common.sh@955 -- # uname 00:13:02.095 13:47:15 blockdev_xnvme -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:02.096 13:47:15 blockdev_xnvme -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69215 00:13:02.096 13:47:15 blockdev_xnvme -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:02.096 13:47:15 blockdev_xnvme -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:02.096 killing process with pid 69215 00:13:02.096 13:47:15 blockdev_xnvme -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69215' 00:13:02.096 13:47:15 blockdev_xnvme -- common/autotest_common.sh@969 -- # kill 69215 00:13:02.096 13:47:15 blockdev_xnvme -- common/autotest_common.sh@974 -- # wait 69215 00:13:03.469 13:47:17 blockdev_xnvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:13:03.469 13:47:17 blockdev_xnvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:13:03.469 13:47:17 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:13:03.469 13:47:17 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:03.469 13:47:17 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:03.469 ************************************ 00:13:03.469 START TEST bdev_hello_world 00:13:03.469 ************************************ 00:13:03.469 13:47:17 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:13:03.469 [2024-10-15 13:47:17.152480] Starting SPDK v25.01-pre git sha1 5a8c76d99 / DPDK 24.03.0 initialization... 00:13:03.469 [2024-10-15 13:47:17.152644] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69568 ] 00:13:03.726 [2024-10-15 13:47:17.303195] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:03.726 [2024-10-15 13:47:17.403702] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:04.021 [2024-10-15 13:47:17.713416] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:13:04.021 [2024-10-15 13:47:17.713461] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:13:04.021 [2024-10-15 13:47:17.713477] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:13:04.021 [2024-10-15 13:47:17.715049] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:13:04.021 [2024-10-15 13:47:17.715248] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:13:04.021 [2024-10-15 13:47:17.715262] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:13:04.021 [2024-10-15 13:47:17.715411] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
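The bdev_hello_world pass above drives the stock hello_bdev example against nvme0n1, the xnvme bdev created from /dev/nvme0n1 with io_uring earlier in the log. Run by hand it reduces to the single command below; the bdev.json entry shown in the comment is reconstructed from the printed 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring' arguments, and its exact file layout is an assumption.

# Hand-run equivalent of the hello_world test above (command verbatim from
# the trace). One plausible bdev.json entry, reconstructed from the printed
# bdev_xnvme_create arguments (layout is an assumption):
#   { "method": "bdev_xnvme_create",
#     "params": { "filename": "/dev/nvme0n1", "name": "nvme0n1",
#                 "io_mechanism": "io_uring" } }
/home/vagrant/spdk_repo/spdk/build/examples/hello_bdev \
  --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1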
00:13:04.021 00:13:04.021 [2024-10-15 13:47:17.715430] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:13:04.600 00:13:04.600 real 0m1.220s 00:13:04.600 user 0m0.925s 00:13:04.600 sys 0m0.184s 00:13:04.600 13:47:18 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:04.600 13:47:18 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:13:04.600 ************************************ 00:13:04.600 END TEST bdev_hello_world 00:13:04.600 ************************************ 00:13:04.600 13:47:18 blockdev_xnvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:13:04.600 13:47:18 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:04.600 13:47:18 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:04.600 13:47:18 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:04.600 ************************************ 00:13:04.600 START TEST bdev_bounds 00:13:04.600 ************************************ 00:13:04.600 13:47:18 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:13:04.600 13:47:18 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=69600 00:13:04.601 13:47:18 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:13:04.601 Process bdevio pid: 69600 00:13:04.601 13:47:18 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 69600' 00:13:04.601 13:47:18 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 69600 00:13:04.601 13:47:18 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 69600 ']' 00:13:04.601 13:47:18 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:04.601 13:47:18 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:04.601 13:47:18 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:13:04.601 13:47:18 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:04.601 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:04.601 13:47:18 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:04.601 13:47:18 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:13:04.859 [2024-10-15 13:47:18.410556] Starting SPDK v25.01-pre git sha1 5a8c76d99 / DPDK 24.03.0 initialization... 
00:13:04.859 [2024-10-15 13:47:18.410674] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69600 ] 00:13:04.859 [2024-10-15 13:47:18.559748] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:05.117 [2024-10-15 13:47:18.666126] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:05.117 [2024-10-15 13:47:18.666379] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:05.117 [2024-10-15 13:47:18.666486] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:05.449 13:47:19 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:05.449 13:47:19 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:13:05.449 13:47:19 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:13:05.709 I/O targets: 00:13:05.709 nvme0n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:13:05.709 nvme1n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:13:05.709 nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:13:05.709 nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:13:05.709 nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:13:05.709 nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:13:05.709 00:13:05.709 00:13:05.709 CUnit - A unit testing framework for C - Version 2.1-3 00:13:05.709 http://cunit.sourceforge.net/ 00:13:05.709 00:13:05.709 00:13:05.709 Suite: bdevio tests on: nvme3n1 00:13:05.709 Test: blockdev write read block ...passed 00:13:05.709 Test: blockdev write zeroes read block ...passed 00:13:05.709 Test: blockdev write zeroes read no split ...passed 00:13:05.709 Test: blockdev write zeroes read split ...passed 00:13:05.709 Test: blockdev write zeroes read split partial ...passed 00:13:05.709 Test: blockdev reset ...passed 00:13:05.709 Test: blockdev write read 8 blocks ...passed 00:13:05.709 Test: blockdev write read size > 128k ...passed 00:13:05.709 Test: blockdev write read invalid size ...passed 00:13:05.709 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:05.709 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:05.709 Test: blockdev write read max offset ...passed 00:13:05.709 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:05.709 Test: blockdev writev readv 8 blocks ...passed 00:13:05.709 Test: blockdev writev readv 30 x 1block ...passed 00:13:05.709 Test: blockdev writev readv block ...passed 00:13:05.709 Test: blockdev writev readv size > 128k ...passed 00:13:05.709 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:05.709 Test: blockdev comparev and writev ...passed 00:13:05.709 Test: blockdev nvme passthru rw ...passed 00:13:05.709 Test: blockdev nvme passthru vendor specific ...passed 00:13:05.709 Test: blockdev nvme admin passthru ...passed 00:13:05.709 Test: blockdev copy ...passed 00:13:05.709 Suite: bdevio tests on: nvme2n3 00:13:05.709 Test: blockdev write read block ...passed 00:13:05.709 Test: blockdev write zeroes read block ...passed 00:13:05.709 Test: blockdev write zeroes read no split ...passed 00:13:05.709 Test: blockdev write zeroes read split ...passed 00:13:05.709 Test: blockdev write zeroes read split partial ...passed 00:13:05.709 Test: blockdev reset ...passed 
00:13:05.709 Test: blockdev write read 8 blocks ...passed 00:13:05.709 Test: blockdev write read size > 128k ...passed 00:13:05.709 Test: blockdev write read invalid size ...passed 00:13:05.709 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:05.709 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:05.709 Test: blockdev write read max offset ...passed 00:13:05.709 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:05.709 Test: blockdev writev readv 8 blocks ...passed 00:13:05.709 Test: blockdev writev readv 30 x 1block ...passed 00:13:05.709 Test: blockdev writev readv block ...passed 00:13:05.709 Test: blockdev writev readv size > 128k ...passed 00:13:05.709 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:05.709 Test: blockdev comparev and writev ...passed 00:13:05.709 Test: blockdev nvme passthru rw ...passed 00:13:05.709 Test: blockdev nvme passthru vendor specific ...passed 00:13:05.709 Test: blockdev nvme admin passthru ...passed 00:13:05.709 Test: blockdev copy ...passed 00:13:05.709 Suite: bdevio tests on: nvme2n2 00:13:05.709 Test: blockdev write read block ...passed 00:13:05.709 Test: blockdev write zeroes read block ...passed 00:13:05.709 Test: blockdev write zeroes read no split ...passed 00:13:05.709 Test: blockdev write zeroes read split ...passed 00:13:05.709 Test: blockdev write zeroes read split partial ...passed 00:13:05.709 Test: blockdev reset ...passed 00:13:05.709 Test: blockdev write read 8 blocks ...passed 00:13:05.709 Test: blockdev write read size > 128k ...passed 00:13:05.709 Test: blockdev write read invalid size ...passed 00:13:05.709 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:05.709 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:05.709 Test: blockdev write read max offset ...passed 00:13:05.709 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:05.709 Test: blockdev writev readv 8 blocks ...passed 00:13:05.709 Test: blockdev writev readv 30 x 1block ...passed 00:13:05.709 Test: blockdev writev readv block ...passed 00:13:05.709 Test: blockdev writev readv size > 128k ...passed 00:13:05.709 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:05.709 Test: blockdev comparev and writev ...passed 00:13:05.709 Test: blockdev nvme passthru rw ...passed 00:13:05.709 Test: blockdev nvme passthru vendor specific ...passed 00:13:05.709 Test: blockdev nvme admin passthru ...passed 00:13:05.709 Test: blockdev copy ...passed 00:13:05.709 Suite: bdevio tests on: nvme2n1 00:13:05.709 Test: blockdev write read block ...passed 00:13:05.709 Test: blockdev write zeroes read block ...passed 00:13:05.709 Test: blockdev write zeroes read no split ...passed 00:13:05.709 Test: blockdev write zeroes read split ...passed 00:13:05.967 Test: blockdev write zeroes read split partial ...passed 00:13:05.967 Test: blockdev reset ...passed 00:13:05.967 Test: blockdev write read 8 blocks ...passed 00:13:05.967 Test: blockdev write read size > 128k ...passed 00:13:05.967 Test: blockdev write read invalid size ...passed 00:13:05.967 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:05.967 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:05.967 Test: blockdev write read max offset ...passed 00:13:05.967 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:05.967 Test: blockdev writev readv 8 blocks 
...passed 00:13:05.967 Test: blockdev writev readv 30 x 1block ...passed 00:13:05.967 Test: blockdev writev readv block ...passed 00:13:05.967 Test: blockdev writev readv size > 128k ...passed 00:13:05.967 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:05.967 Test: blockdev comparev and writev ...passed 00:13:05.967 Test: blockdev nvme passthru rw ...passed 00:13:05.967 Test: blockdev nvme passthru vendor specific ...passed 00:13:05.967 Test: blockdev nvme admin passthru ...passed 00:13:05.967 Test: blockdev copy ...passed 00:13:05.967 Suite: bdevio tests on: nvme1n1 00:13:05.967 Test: blockdev write read block ...passed 00:13:05.967 Test: blockdev write zeroes read block ...passed 00:13:05.967 Test: blockdev write zeroes read no split ...passed 00:13:05.967 Test: blockdev write zeroes read split ...passed 00:13:05.967 Test: blockdev write zeroes read split partial ...passed 00:13:05.967 Test: blockdev reset ...passed 00:13:05.967 Test: blockdev write read 8 blocks ...passed 00:13:05.967 Test: blockdev write read size > 128k ...passed 00:13:05.967 Test: blockdev write read invalid size ...passed 00:13:05.967 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:05.967 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:05.967 Test: blockdev write read max offset ...passed 00:13:05.967 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:05.967 Test: blockdev writev readv 8 blocks ...passed 00:13:05.967 Test: blockdev writev readv 30 x 1block ...passed 00:13:05.967 Test: blockdev writev readv block ...passed 00:13:05.967 Test: blockdev writev readv size > 128k ...passed 00:13:05.967 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:05.967 Test: blockdev comparev and writev ...passed 00:13:05.967 Test: blockdev nvme passthru rw ...passed 00:13:05.967 Test: blockdev nvme passthru vendor specific ...passed 00:13:05.967 Test: blockdev nvme admin passthru ...passed 00:13:05.967 Test: blockdev copy ...passed 00:13:05.967 Suite: bdevio tests on: nvme0n1 00:13:05.967 Test: blockdev write read block ...passed 00:13:05.967 Test: blockdev write zeroes read block ...passed 00:13:05.967 Test: blockdev write zeroes read no split ...passed 00:13:05.967 Test: blockdev write zeroes read split ...passed 00:13:05.967 Test: blockdev write zeroes read split partial ...passed 00:13:05.967 Test: blockdev reset ...passed 00:13:05.967 Test: blockdev write read 8 blocks ...passed 00:13:05.967 Test: blockdev write read size > 128k ...passed 00:13:05.967 Test: blockdev write read invalid size ...passed 00:13:05.967 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:05.967 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:05.967 Test: blockdev write read max offset ...passed 00:13:05.967 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:05.967 Test: blockdev writev readv 8 blocks ...passed 00:13:05.967 Test: blockdev writev readv 30 x 1block ...passed 00:13:05.967 Test: blockdev writev readv block ...passed 00:13:05.967 Test: blockdev writev readv size > 128k ...passed 00:13:05.967 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:05.967 Test: blockdev comparev and writev ...passed 00:13:05.967 Test: blockdev nvme passthru rw ...passed 00:13:05.967 Test: blockdev nvme passthru vendor specific ...passed 00:13:05.967 Test: blockdev nvme admin passthru ...passed 00:13:05.967 Test: blockdev copy ...passed 
00:13:05.967 00:13:05.967 Run Summary: Type Total Ran Passed Failed Inactive 00:13:05.967 suites 6 6 n/a 0 0 00:13:05.967 tests 138 138 138 0 0 00:13:05.967 asserts 780 780 780 0 n/a 00:13:05.967 00:13:05.967 Elapsed time = 0.900 seconds 00:13:05.967 0 00:13:05.968 13:47:19 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 69600 00:13:05.968 13:47:19 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 69600 ']' 00:13:05.968 13:47:19 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 69600 00:13:05.968 13:47:19 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:13:05.968 13:47:19 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:05.968 13:47:19 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69600 00:13:05.968 13:47:19 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:05.968 13:47:19 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:05.968 13:47:19 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69600' 00:13:05.968 killing process with pid 69600 00:13:05.968 13:47:19 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@969 -- # kill 69600 00:13:05.968 13:47:19 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@974 -- # wait 69600 00:13:06.533 13:47:20 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:13:06.533 00:13:06.533 real 0m1.960s 00:13:06.533 user 0m4.868s 00:13:06.533 sys 0m0.292s 00:13:06.533 13:47:20 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:06.533 13:47:20 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:13:06.533 ************************************ 00:13:06.533 END TEST bdev_bounds 00:13:06.533 ************************************ 00:13:06.791 13:47:20 blockdev_xnvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '' 00:13:06.791 13:47:20 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:13:06.791 13:47:20 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:06.791 13:47:20 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:06.791 ************************************ 00:13:06.791 START TEST bdev_nbd 00:13:06.791 ************************************ 00:13:06.791 13:47:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '' 00:13:06.791 13:47:20 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:13:06.791 13:47:20 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:13:06.791 13:47:20 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:06.791 13:47:20 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:13:06.791 13:47:20 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:13:06.791 13:47:20 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:13:06.791 13:47:20 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 
00:13:06.791 13:47:20 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:13:06.791 13:47:20 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:13:06.791 13:47:20 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:13:06.791 13:47:20 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:13:06.791 13:47:20 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:06.791 13:47:20 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:13:06.791 13:47:20 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:13:06.791 13:47:20 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:13:06.791 13:47:20 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=69653 00:13:06.791 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:13:06.791 13:47:20 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:13:06.791 13:47:20 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 69653 /var/tmp/spdk-nbd.sock 00:13:06.791 13:47:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 69653 ']' 00:13:06.791 13:47:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:13:06.791 13:47:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:06.791 13:47:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:13:06.791 13:47:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:06.791 13:47:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:13:06.791 13:47:20 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:13:06.791 [2024-10-15 13:47:20.423903] Starting SPDK v25.01-pre git sha1 5a8c76d99 / DPDK 24.03.0 initialization... 
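The nbd_function_test that follows exports each of the six xnvme bdevs as a /dev/nbdX device through the nbd_start_disk RPC, then probes it with a single 4 KiB O_DIRECT read, as the dd lines below show. Condensed into a loop (the loop form is a simplification; the RPC socket, paths, and dd flags are verbatim from the trace):

# Export-and-verify step, condensed: nbd_start_disk prints the assigned
# /dev/nbdX path, which a one-block direct read then exercises.
for bdev in nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1; do
  nbd=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock \
        nbd_start_disk "$bdev")
  dd if="$nbd" of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest \
     bs=4096 count=1 iflag=direct
done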
00:13:06.791 [2024-10-15 13:47:20.424157] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:06.791 [2024-10-15 13:47:20.572185] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:07.065 [2024-10-15 13:47:20.675807] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:07.659 13:47:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:07.659 13:47:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:13:07.659 13:47:21 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:13:07.659 13:47:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:07.659 13:47:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:13:07.659 13:47:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:13:07.659 13:47:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:13:07.659 13:47:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:07.659 13:47:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:13:07.659 13:47:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:13:07.659 13:47:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:13:07.659 13:47:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:13:07.659 13:47:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:13:07.659 13:47:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:07.659 13:47:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:13:07.918 13:47:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:13:07.918 13:47:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:13:07.918 13:47:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:13:07.918 13:47:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:13:07.918 13:47:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:07.918 13:47:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:07.918 13:47:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:07.918 13:47:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:13:07.918 13:47:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:07.918 13:47:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:07.918 13:47:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:07.918 13:47:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:07.918 
1+0 records in 00:13:07.918 1+0 records out 00:13:07.918 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000369121 s, 11.1 MB/s 00:13:07.918 13:47:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:07.918 13:47:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:07.918 13:47:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:07.918 13:47:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:07.918 13:47:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:07.918 13:47:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:07.918 13:47:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:07.918 13:47:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:13:08.175 13:47:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:13:08.175 13:47:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:13:08.175 13:47:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:13:08.175 13:47:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:13:08.175 13:47:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:08.176 13:47:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:08.176 13:47:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:08.176 13:47:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:13:08.176 13:47:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:08.176 13:47:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:08.176 13:47:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:08.176 13:47:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:08.176 1+0 records in 00:13:08.176 1+0 records out 00:13:08.176 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000587111 s, 7.0 MB/s 00:13:08.176 13:47:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:08.176 13:47:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:08.176 13:47:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:08.176 13:47:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:08.176 13:47:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:08.176 13:47:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:08.176 13:47:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:08.176 13:47:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:13:08.434 13:47:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:13:08.434 13:47:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:13:08.434 13:47:21 
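Each device attach traced here follows the same bounded readiness probe: poll /proc/partitions until the nbd name shows up, then prove the device actually serves I/O with one 4 KiB O_DIRECT read whose size is checked. A minimal sketch of that helper, reconstructed from the xtrace above; the 20-try bounds, the 4096-byte read, and the size test are visible in the trace, while the sleep interval and the /tmp scratch path are assumptions:

    waitfornbd() {
        local nbd_name=$1 i
        # Phase 1: wait for the kernel to publish the device (the trace bounds this at 20 tries).
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1   # polling interval is not visible in the trace; assumed
        done
        # Phase 2: read one 4 KiB block with O_DIRECT and require a non-empty result,
        # again bounded at 20 attempts (the trace checks '[' 4096 '!=' 0 ']').
        for ((i = 1; i <= 20; i++)); do
            if dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct 2>/dev/null; then
                local size
                size=$(stat -c %s /tmp/nbdtest)
                rm -f /tmp/nbdtest
                [ "$size" != 0 ] && return 0
            fi
            sleep 0.1   # assumed
        done
        return 1
    }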
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:13:08.434 13:47:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd2 00:13:08.434 13:47:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:08.434 13:47:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:08.434 13:47:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:08.434 13:47:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd2 /proc/partitions 00:13:08.434 13:47:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:08.434 13:47:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:08.434 13:47:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:08.434 13:47:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:08.434 1+0 records in 00:13:08.434 1+0 records out 00:13:08.434 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000433772 s, 9.4 MB/s 00:13:08.434 13:47:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:08.434 13:47:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:08.434 13:47:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:08.434 13:47:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:08.434 13:47:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:08.434 13:47:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:08.434 13:47:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:08.434 13:47:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 00:13:08.434 13:47:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:13:08.434 13:47:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:13:08.434 13:47:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:13:08.434 13:47:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd3 00:13:08.434 13:47:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:08.434 13:47:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:08.434 13:47:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:08.434 13:47:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd3 /proc/partitions 00:13:08.434 13:47:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:08.434 13:47:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:08.434 13:47:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:08.434 13:47:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:08.434 1+0 records in 00:13:08.434 1+0 records out 00:13:08.434 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000411055 s, 10.0 MB/s 00:13:08.434 13:47:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:08.434 13:47:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:08.434 13:47:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:08.434 13:47:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:08.434 13:47:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:08.434 13:47:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:08.434 13:47:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:08.692 13:47:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 00:13:08.692 13:47:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:13:08.692 13:47:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:13:08.693 13:47:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:13:08.693 13:47:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd4 00:13:08.693 13:47:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:08.693 13:47:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:08.693 13:47:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:08.693 13:47:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd4 /proc/partitions 00:13:08.693 13:47:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:08.693 13:47:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:08.693 13:47:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:08.693 13:47:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:08.693 1+0 records in 00:13:08.693 1+0 records out 00:13:08.693 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000412141 s, 9.9 MB/s 00:13:08.693 13:47:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:08.693 13:47:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:08.693 13:47:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:08.693 13:47:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:08.693 13:47:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:08.693 13:47:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:08.693 13:47:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:08.693 13:47:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:13:08.950 13:47:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:13:08.950 13:47:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:13:08.950 13:47:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:13:08.950 13:47:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd5 00:13:08.950 13:47:22 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:08.950 13:47:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:08.950 13:47:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:08.950 13:47:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd5 /proc/partitions 00:13:08.950 13:47:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:08.950 13:47:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:08.950 13:47:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:08.950 13:47:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:08.950 1+0 records in 00:13:08.950 1+0 records out 00:13:08.950 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000375257 s, 10.9 MB/s 00:13:08.950 13:47:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:08.950 13:47:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:08.950 13:47:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:08.950 13:47:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:08.951 13:47:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:08.951 13:47:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:08.951 13:47:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:08.951 13:47:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:09.208 13:47:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:13:09.208 { 00:13:09.208 "nbd_device": "/dev/nbd0", 00:13:09.208 "bdev_name": "nvme0n1" 00:13:09.208 }, 00:13:09.208 { 00:13:09.208 "nbd_device": "/dev/nbd1", 00:13:09.208 "bdev_name": "nvme1n1" 00:13:09.208 }, 00:13:09.208 { 00:13:09.208 "nbd_device": "/dev/nbd2", 00:13:09.208 "bdev_name": "nvme2n1" 00:13:09.208 }, 00:13:09.208 { 00:13:09.208 "nbd_device": "/dev/nbd3", 00:13:09.208 "bdev_name": "nvme2n2" 00:13:09.208 }, 00:13:09.208 { 00:13:09.208 "nbd_device": "/dev/nbd4", 00:13:09.208 "bdev_name": "nvme2n3" 00:13:09.208 }, 00:13:09.208 { 00:13:09.208 "nbd_device": "/dev/nbd5", 00:13:09.208 "bdev_name": "nvme3n1" 00:13:09.208 } 00:13:09.208 ]' 00:13:09.208 13:47:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:13:09.208 13:47:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:13:09.208 13:47:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:13:09.208 { 00:13:09.208 "nbd_device": "/dev/nbd0", 00:13:09.208 "bdev_name": "nvme0n1" 00:13:09.208 }, 00:13:09.208 { 00:13:09.208 "nbd_device": "/dev/nbd1", 00:13:09.208 "bdev_name": "nvme1n1" 00:13:09.208 }, 00:13:09.208 { 00:13:09.208 "nbd_device": "/dev/nbd2", 00:13:09.208 "bdev_name": "nvme2n1" 00:13:09.208 }, 00:13:09.208 { 00:13:09.208 "nbd_device": "/dev/nbd3", 00:13:09.208 "bdev_name": "nvme2n2" 00:13:09.208 }, 00:13:09.208 { 00:13:09.208 "nbd_device": "/dev/nbd4", 00:13:09.208 "bdev_name": "nvme2n3" 00:13:09.208 }, 00:13:09.208 { 00:13:09.208 "nbd_device": 
"/dev/nbd5", 00:13:09.208 "bdev_name": "nvme3n1" 00:13:09.208 } 00:13:09.208 ]' 00:13:09.208 13:47:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:13:09.208 13:47:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:09.208 13:47:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:13:09.208 13:47:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:09.208 13:47:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:13:09.208 13:47:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:09.208 13:47:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:09.466 13:47:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:09.466 13:47:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:09.466 13:47:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:09.466 13:47:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:09.466 13:47:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:09.466 13:47:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:09.466 13:47:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:09.466 13:47:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:09.466 13:47:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:09.466 13:47:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:13:09.767 13:47:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:09.767 13:47:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:09.767 13:47:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:09.767 13:47:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:09.767 13:47:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:09.767 13:47:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:09.767 13:47:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:09.767 13:47:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:09.767 13:47:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:09.767 13:47:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:13:10.049 13:47:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:13:10.049 13:47:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:13:10.049 13:47:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:13:10.049 13:47:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:10.049 13:47:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:10.049 13:47:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd2 /proc/partitions 00:13:10.049 13:47:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:10.049 13:47:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:10.049 13:47:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:10.049 13:47:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:13:10.049 13:47:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:13:10.049 13:47:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:13:10.049 13:47:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:13:10.049 13:47:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:10.049 13:47:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:10.049 13:47:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:13:10.049 13:47:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:10.049 13:47:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:10.049 13:47:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:10.049 13:47:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:13:10.307 13:47:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:13:10.307 13:47:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:13:10.307 13:47:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:13:10.307 13:47:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:10.307 13:47:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:10.307 13:47:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:13:10.307 13:47:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:10.307 13:47:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:10.307 13:47:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:10.307 13:47:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:13:10.566 13:47:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:13:10.566 13:47:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:13:10.566 13:47:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:13:10.566 13:47:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:10.566 13:47:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:10.566 13:47:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:13:10.566 13:47:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:10.566 13:47:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:10.566 13:47:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:10.566 13:47:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:10.566 13:47:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:10.824 13:47:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:13:10.824 13:47:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:10.824 13:47:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:13:10.824 13:47:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:13:10.824 13:47:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:13:10.824 13:47:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:10.824 13:47:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:13:10.824 13:47:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:13:10.824 13:47:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:13:10.824 13:47:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:13:10.824 13:47:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:13:10.824 13:47:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:13:10.824 13:47:24 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:13:10.824 13:47:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:10.824 13:47:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:13:10.824 13:47:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:13:10.824 13:47:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:10.824 13:47:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:13:10.824 13:47:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:13:10.824 13:47:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:10.824 13:47:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:13:10.824 13:47:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:10.824 13:47:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:10.824 13:47:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:10.824 13:47:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:13:10.824 13:47:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:10.824 13:47:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:10.825 13:47:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:13:11.083 /dev/nbd0 00:13:11.083 13:47:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:11.083 13:47:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:11.083 13:47:24 blockdev_xnvme.bdev_nbd -- 
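With all six devices stopped, the harness asserts that nbd_get_disks now reports nothing: the JSON comes back as '[]', jq yields an empty name list, and the /dev/nbd count must be zero. A condensed sketch (the 'true' in the trace is the guard for grep -c exiting non-zero on a zero count):

    disks_json=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks)
    names=$(echo "$disks_json" | jq -r '.[] | .nbd_device')   # empty for '[]'
    count=$(echo "$names" | grep -c /dev/nbd || true)         # 0 when nothing is attached
    [ "$count" -ne 0 ] && { echo "leftover nbd devices"; exit 1; }

The data-verify pass that starts next re-exports the same bdevs, this time to explicit index-suffixed nodes (/dev/nbd10 through /dev/nbd13 for the nvme2/nvme3 bdevs).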
common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:13:11.083 13:47:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:11.083 13:47:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:11.083 13:47:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:11.083 13:47:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:13:11.083 13:47:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:11.083 13:47:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:11.083 13:47:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:11.083 13:47:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:11.083 1+0 records in 00:13:11.083 1+0 records out 00:13:11.083 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000457074 s, 9.0 MB/s 00:13:11.083 13:47:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:11.083 13:47:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:11.083 13:47:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:11.083 13:47:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:11.083 13:47:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:11.083 13:47:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:11.083 13:47:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:11.083 13:47:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd1 00:13:11.341 /dev/nbd1 00:13:11.341 13:47:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:11.341 13:47:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:11.341 13:47:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:13:11.341 13:47:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:11.341 13:47:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:11.341 13:47:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:11.341 13:47:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:13:11.341 13:47:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:11.341 13:47:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:11.341 13:47:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:11.341 13:47:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:11.341 1+0 records in 00:13:11.341 1+0 records out 00:13:11.341 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000689406 s, 5.9 MB/s 00:13:11.341 13:47:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:11.341 13:47:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:11.341 13:47:24 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:11.341 13:47:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:11.341 13:47:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:11.341 13:47:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:11.341 13:47:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:11.341 13:47:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd10 00:13:11.341 /dev/nbd10 00:13:11.341 13:47:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:13:11.341 13:47:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:13:11.341 13:47:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd10 00:13:11.341 13:47:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:11.341 13:47:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:11.341 13:47:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:11.341 13:47:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd10 /proc/partitions 00:13:11.341 13:47:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:11.341 13:47:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:11.341 13:47:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:11.341 13:47:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:11.599 1+0 records in 00:13:11.599 1+0 records out 00:13:11.599 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000382353 s, 10.7 MB/s 00:13:11.599 13:47:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:11.599 13:47:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:11.599 13:47:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:11.600 13:47:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:11.600 13:47:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:11.600 13:47:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:11.600 13:47:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:11.600 13:47:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 /dev/nbd11 00:13:11.600 /dev/nbd11 00:13:11.600 13:47:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:13:11.600 13:47:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:13:11.600 13:47:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd11 00:13:11.600 13:47:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:11.600 13:47:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:11.600 13:47:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:11.600 13:47:25 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd11 /proc/partitions 00:13:11.600 13:47:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:11.600 13:47:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:11.600 13:47:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:11.600 13:47:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:11.600 1+0 records in 00:13:11.600 1+0 records out 00:13:11.600 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00041592 s, 9.8 MB/s 00:13:11.600 13:47:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:11.600 13:47:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:11.600 13:47:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:11.600 13:47:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:11.600 13:47:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:11.600 13:47:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:11.600 13:47:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:11.600 13:47:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 /dev/nbd12 00:13:11.859 /dev/nbd12 00:13:11.859 13:47:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:13:11.859 13:47:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:13:11.859 13:47:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd12 00:13:11.859 13:47:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:11.859 13:47:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:11.859 13:47:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:11.859 13:47:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd12 /proc/partitions 00:13:11.859 13:47:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:11.859 13:47:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:11.859 13:47:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:11.859 13:47:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:11.859 1+0 records in 00:13:11.859 1+0 records out 00:13:11.859 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000444636 s, 9.2 MB/s 00:13:11.859 13:47:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:11.859 13:47:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:11.859 13:47:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:11.859 13:47:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:11.859 13:47:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:11.859 13:47:25 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:11.859 13:47:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:11.859 13:47:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:13:12.116 /dev/nbd13 00:13:12.117 13:47:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:13:12.117 13:47:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:13:12.117 13:47:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd13 00:13:12.117 13:47:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:12.117 13:47:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:12.117 13:47:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:12.117 13:47:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd13 /proc/partitions 00:13:12.117 13:47:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:12.117 13:47:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:12.117 13:47:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:12.117 13:47:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:12.117 1+0 records in 00:13:12.117 1+0 records out 00:13:12.117 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000527404 s, 7.8 MB/s 00:13:12.117 13:47:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:12.117 13:47:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:12.117 13:47:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:12.117 13:47:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:12.117 13:47:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:12.117 13:47:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:12.117 13:47:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:12.117 13:47:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:12.117 13:47:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:12.117 13:47:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:12.375 13:47:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:13:12.375 { 00:13:12.375 "nbd_device": "/dev/nbd0", 00:13:12.375 "bdev_name": "nvme0n1" 00:13:12.375 }, 00:13:12.375 { 00:13:12.375 "nbd_device": "/dev/nbd1", 00:13:12.375 "bdev_name": "nvme1n1" 00:13:12.375 }, 00:13:12.375 { 00:13:12.375 "nbd_device": "/dev/nbd10", 00:13:12.375 "bdev_name": "nvme2n1" 00:13:12.375 }, 00:13:12.375 { 00:13:12.375 "nbd_device": "/dev/nbd11", 00:13:12.375 "bdev_name": "nvme2n2" 00:13:12.375 }, 00:13:12.375 { 00:13:12.375 "nbd_device": "/dev/nbd12", 00:13:12.375 "bdev_name": "nvme2n3" 00:13:12.375 }, 00:13:12.375 { 00:13:12.375 "nbd_device": "/dev/nbd13", 00:13:12.375 "bdev_name": "nvme3n1" 00:13:12.375 } 00:13:12.375 ]' 00:13:12.375 13:47:26 
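This second nbd_get_disks listing must contain exactly the six devices just attached. The jq pipeline that follows splits the JSON into a bash array and counts the /dev/nbd entries, failing the test on any mismatch; approximately:

    nbd_disks_name=($(echo "$nbd_disks_json" | jq -r '.[] | .nbd_device'))
    count=$(printf '%s\n' "${nbd_disks_name[@]}" | grep -c /dev/nbd)
    [ "$count" -ne 6 ] && { echo "expected 6 nbd devices, got $count"; exit 1; }

(printf is used here for clarity; the traced code echoes the newline-separated list into grep -c, which is equivalent.)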
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:12.375 13:47:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:13:12.375 { 00:13:12.375 "nbd_device": "/dev/nbd0", 00:13:12.375 "bdev_name": "nvme0n1" 00:13:12.375 }, 00:13:12.375 { 00:13:12.375 "nbd_device": "/dev/nbd1", 00:13:12.375 "bdev_name": "nvme1n1" 00:13:12.375 }, 00:13:12.375 { 00:13:12.375 "nbd_device": "/dev/nbd10", 00:13:12.375 "bdev_name": "nvme2n1" 00:13:12.375 }, 00:13:12.375 { 00:13:12.375 "nbd_device": "/dev/nbd11", 00:13:12.375 "bdev_name": "nvme2n2" 00:13:12.375 }, 00:13:12.375 { 00:13:12.375 "nbd_device": "/dev/nbd12", 00:13:12.375 "bdev_name": "nvme2n3" 00:13:12.375 }, 00:13:12.375 { 00:13:12.375 "nbd_device": "/dev/nbd13", 00:13:12.375 "bdev_name": "nvme3n1" 00:13:12.375 } 00:13:12.375 ]' 00:13:12.375 13:47:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:13:12.375 /dev/nbd1 00:13:12.375 /dev/nbd10 00:13:12.375 /dev/nbd11 00:13:12.375 /dev/nbd12 00:13:12.375 /dev/nbd13' 00:13:12.375 13:47:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:13:12.375 /dev/nbd1 00:13:12.375 /dev/nbd10 00:13:12.375 /dev/nbd11 00:13:12.375 /dev/nbd12 00:13:12.375 /dev/nbd13' 00:13:12.375 13:47:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:12.375 13:47:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:13:12.375 13:47:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:13:12.375 13:47:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:13:12.375 13:47:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:13:12.375 13:47:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:13:12.375 13:47:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:12.375 13:47:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:13:12.375 13:47:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:13:12.375 13:47:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:13:12.375 13:47:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:13:12.375 13:47:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:13:12.375 256+0 records in 00:13:12.375 256+0 records out 00:13:12.375 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00631118 s, 166 MB/s 00:13:12.375 13:47:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:12.375 13:47:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:13:12.375 256+0 records in 00:13:12.375 256+0 records out 00:13:12.375 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0620093 s, 16.9 MB/s 00:13:12.375 13:47:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:12.375 13:47:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:13:12.633 256+0 records in 00:13:12.633 256+0 records out 00:13:12.633 1048576 bytes (1.0 MB, 1.0 
MiB) copied, 0.0783764 s, 13.4 MB/s 00:13:12.633 13:47:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:12.633 13:47:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:13:12.633 256+0 records in 00:13:12.633 256+0 records out 00:13:12.633 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0778968 s, 13.5 MB/s 00:13:12.633 13:47:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:12.633 13:47:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:13:12.633 256+0 records in 00:13:12.633 256+0 records out 00:13:12.633 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0629995 s, 16.6 MB/s 00:13:12.633 13:47:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:12.633 13:47:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:13:12.891 256+0 records in 00:13:12.891 256+0 records out 00:13:12.891 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0708191 s, 14.8 MB/s 00:13:12.891 13:47:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:12.891 13:47:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:13:12.891 256+0 records in 00:13:12.891 256+0 records out 00:13:12.891 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0635646 s, 16.5 MB/s 00:13:12.891 13:47:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:13:12.891 13:47:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:12.891 13:47:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:13:12.891 13:47:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:13:12.891 13:47:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:13:12.891 13:47:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:13:12.891 13:47:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:13:12.891 13:47:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:12.891 13:47:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:13:12.891 13:47:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:12.891 13:47:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:13:12.891 13:47:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:12.891 13:47:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:13:12.891 13:47:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:12.891 13:47:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:13:12.891 13:47:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:12.891 13:47:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:13:12.891 13:47:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:12.891 13:47:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:13:12.891 13:47:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:13:12.891 13:47:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:13:12.891 13:47:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:12.891 13:47:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:12.891 13:47:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:12.891 13:47:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:13:12.891 13:47:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:12.891 13:47:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:13.150 13:47:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:13.150 13:47:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:13.150 13:47:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:13.150 13:47:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:13.150 13:47:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:13.150 13:47:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:13.150 13:47:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:13.150 13:47:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:13.150 13:47:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:13.150 13:47:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:13:13.408 13:47:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:13.408 13:47:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:13.408 13:47:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:13.408 13:47:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:13.408 13:47:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:13.408 13:47:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:13.408 13:47:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:13.408 13:47:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:13.408 13:47:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:13.408 13:47:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
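The data path itself is exercised with a 1 MiB random pattern: a single urandom file is written through every device with O_DIRECT, then compared byte-for-byte against each device over the first 1 MiB, and finally removed. The write/verify pass just traced reduces to roughly the following (pattern path shortened here; the traced file is test/bdev/nbdrandtest):

    pattern=/tmp/nbdrandtest
    dd if=/dev/urandom of="$pattern" bs=4096 count=256              # 1 MiB of random data
    for dev in "${nbd_disks_name[@]}"; do
        dd if="$pattern" of="$dev" bs=4096 count=256 oflag=direct   # write phase
    done
    for dev in "${nbd_disks_name[@]}"; do
        cmp -b -n 1M "$pattern" "$dev"                              # verify phase; exits non-zero on mismatch
    done
    rm "$pattern"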
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:13:13.408 13:47:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:13:13.408 13:47:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:13:13.408 13:47:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:13:13.408 13:47:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:13.408 13:47:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:13.408 13:47:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:13:13.408 13:47:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:13.408 13:47:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:13.408 13:47:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:13.408 13:47:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:13:13.666 13:47:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:13:13.666 13:47:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:13:13.666 13:47:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:13:13.666 13:47:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:13.666 13:47:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:13.666 13:47:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:13:13.666 13:47:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:13.666 13:47:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:13.666 13:47:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:13.666 13:47:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:13:13.923 13:47:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:13:13.923 13:47:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:13:13.923 13:47:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:13:13.923 13:47:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:13.923 13:47:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:13.923 13:47:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:13:13.923 13:47:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:13.923 13:47:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:13.923 13:47:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:13.923 13:47:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:13:14.181 13:47:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:13:14.181 13:47:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:13:14.181 13:47:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:13:14.181 13:47:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:14.181 13:47:27 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:14.181 13:47:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:13:14.181 13:47:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:14.181 13:47:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:14.181 13:47:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:14.181 13:47:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:14.181 13:47:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:14.439 13:47:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:13:14.439 13:47:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:13:14.439 13:47:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:14.439 13:47:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:13:14.439 13:47:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:13:14.439 13:47:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:14.439 13:47:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:13:14.439 13:47:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:13:14.439 13:47:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:13:14.439 13:47:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:13:14.439 13:47:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:13:14.439 13:47:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:13:14.439 13:47:28 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:13:14.439 13:47:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:14.439 13:47:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:13:14.439 13:47:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:13:14.697 malloc_lvol_verify 00:13:14.697 13:47:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:13:14.955 e04ab6f9-cd3a-4377-9121-fe6a067bda43 00:13:14.955 13:47:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:13:15.227 9b1dd967-803a-4ba9-85a5-f32298fb1d7b 00:13:15.227 13:47:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:13:15.227 /dev/nbd0 00:13:15.227 13:47:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:13:15.227 13:47:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:13:15.227 13:47:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:13:15.227 13:47:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:13:15.227 13:47:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 
00:13:15.227 mke2fs 1.47.0 (5-Feb-2023) 00:13:15.227 Discarding device blocks: 0/4096 done 00:13:15.227 Creating filesystem with 4096 1k blocks and 1024 inodes 00:13:15.227 00:13:15.227 Allocating group tables: 0/1 done 00:13:15.227 Writing inode tables: 0/1 done 00:13:15.227 Creating journal (1024 blocks): done 00:13:15.227 Writing superblocks and filesystem accounting information: 0/1 done 00:13:15.227 00:13:15.227 13:47:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:13:15.227 13:47:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:15.227 13:47:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:15.227 13:47:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:15.227 13:47:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:13:15.227 13:47:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:15.227 13:47:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:15.486 13:47:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:15.486 13:47:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:15.486 13:47:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:15.486 13:47:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:15.486 13:47:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:15.486 13:47:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:15.486 13:47:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:15.486 13:47:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:15.486 13:47:29 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 69653 00:13:15.486 13:47:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 69653 ']' 00:13:15.486 13:47:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 69653 00:13:15.486 13:47:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:13:15.486 13:47:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:15.486 13:47:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69653 00:13:15.486 13:47:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:15.486 killing process with pid 69653 00:13:15.486 13:47:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:15.486 13:47:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69653' 00:13:15.486 13:47:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@969 -- # kill 69653 00:13:15.486 13:47:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@974 -- # wait 69653 00:13:16.420 13:47:29 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:13:16.420 00:13:16.420 real 0m9.552s 00:13:16.420 user 0m13.609s 00:13:16.420 sys 0m3.229s 00:13:16.420 13:47:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:16.420 ************************************ 00:13:16.420 END TEST bdev_nbd 00:13:16.420 13:47:29 blockdev_xnvme.bdev_nbd -- 
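The last check before shutdown (traced above) layers a logical volume over NBD: a 16 MB malloc bdev backs an lvstore, a 4 MB lvol from it is exported as /dev/nbd0, and mkfs.ext4 must succeed on it, which covers capacity reporting (/sys/block/nbd0/size) as well as the write path. The RPC sequence reduces to approximately:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock
    $rpc -s $sock bdev_malloc_create -b malloc_lvol_verify 16 512   # 16 MB bdev, 512 B blocks
    $rpc -s $sock bdev_lvol_create_lvstore malloc_lvol_verify lvs
    $rpc -s $sock bdev_lvol_create lvol 4 -l lvs                    # 4 MB lvol -> the 4096 1k-block fs above
    $rpc -s $sock nbd_start_disk lvs/lvol /dev/nbd0
    (( $(cat /sys/block/nbd0/size) != 0 ))                          # capacity visible (8192 sectors in the trace)
    mkfs.ext4 /dev/nbd0
    $rpc -s $sock nbd_stop_disk /dev/nbd0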
common/autotest_common.sh@10 -- # set +x 00:13:16.420 ************************************ 00:13:16.420 13:47:29 blockdev_xnvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:13:16.420 13:47:29 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = nvme ']' 00:13:16.420 13:47:29 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = gpt ']' 00:13:16.420 13:47:29 blockdev_xnvme -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:13:16.420 13:47:29 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:16.420 13:47:29 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:16.420 13:47:29 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:16.420 ************************************ 00:13:16.420 START TEST bdev_fio 00:13:16.420 ************************************ 00:13:16.420 13:47:29 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1125 -- # fio_test_suite '' 00:13:16.420 13:47:29 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:13:16.420 13:47:29 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:13:16.420 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:13:16.420 13:47:29 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:13:16.420 13:47:29 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:13:16.420 13:47:29 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:13:16.421 13:47:29 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:13:16.421 13:47:29 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:13:16.421 13:47:29 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:16.421 13:47:29 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=verify 00:13:16.421 13:47:29 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type=AIO 00:13:16.421 13:47:29 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:13:16.421 13:47:29 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:13:16.421 13:47:29 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:13:16.421 13:47:29 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z verify ']' 00:13:16.421 13:47:29 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:13:16.421 13:47:29 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:16.421 13:47:29 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:13:16.421 13:47:29 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1313 -- # '[' verify == verify ']' 00:13:16.421 13:47:29 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1314 -- # cat 00:13:16.421 13:47:29 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1323 -- # '[' AIO == AIO ']' 00:13:16.421 13:47:29 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1324 -- # /usr/src/fio/fio --version 00:13:16.421 13:47:29 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1324 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:13:16.421 13:47:29 blockdev_xnvme.bdev_fio -- 
common/autotest_common.sh@1325 -- # echo serialize_overlap=1 00:13:16.421 13:47:29 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:16.421 13:47:29 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]' 00:13:16.421 13:47:29 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1 00:13:16.421 13:47:29 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:16.421 13:47:29 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]' 00:13:16.421 13:47:29 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1 00:13:16.421 13:47:29 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:16.421 13:47:29 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]' 00:13:16.421 13:47:29 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1 00:13:16.421 13:47:29 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:16.421 13:47:29 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n2]' 00:13:16.421 13:47:29 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n2 00:13:16.421 13:47:29 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:16.421 13:47:29 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n3]' 00:13:16.421 13:47:29 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n3 00:13:16.421 13:47:30 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:16.421 13:47:30 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]' 00:13:16.421 13:47:30 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1 00:13:16.421 13:47:30 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:13:16.421 13:47:30 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:16.421 13:47:30 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:13:16.421 13:47:30 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:16.421 13:47:30 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:13:16.421 ************************************ 00:13:16.421 START TEST bdev_fio_rw_verify 00:13:16.421 ************************************ 00:13:16.421 13:47:30 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1125 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:16.421 13:47:30 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 
--verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:16.421 13:47:30 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:13:16.421 13:47:30 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:16.421 13:47:30 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local sanitizers 00:13:16.421 13:47:30 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:16.421 13:47:30 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # shift 00:13:16.421 13:47:30 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local asan_lib= 00:13:16.421 13:47:30 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:13:16.421 13:47:30 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libasan 00:13:16.421 13:47:30 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:16.421 13:47:30 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:13:16.421 13:47:30 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:16.421 13:47:30 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:16.421 13:47:30 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # break 00:13:16.421 13:47:30 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:16.421 13:47:30 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:16.421 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:16.421 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:16.421 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:16.421 job_nvme2n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:16.421 job_nvme2n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:16.421 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:16.421 fio-3.35 00:13:16.421 Starting 6 threads 00:13:28.638 00:13:28.638 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=70055: Tue Oct 15 13:47:40 2024 00:13:28.638 read: IOPS=36.7k, BW=143MiB/s (150MB/s)(1434MiB/10002msec) 00:13:28.638 slat (usec): min=2, max=1756, avg= 4.94, stdev= 7.07 00:13:28.638 clat (usec): min=82, 
max=162509, avg=477.10, stdev=829.29 00:13:28.638 lat (usec): min=86, max=162518, avg=482.03, stdev=829.52 00:13:28.638 clat percentiles (usec): 00:13:28.638 | 50.000th=[ 400], 99.000th=[ 1827], 99.900th=[ 3425], 00:13:28.638 | 99.990th=[ 6063], 99.999th=[162530] 00:13:28.638 write: IOPS=37.0k, BW=144MiB/s (151MB/s)(1445MiB/10002msec); 0 zone resets 00:13:28.638 slat (usec): min=10, max=5093, avg=23.81, stdev=51.78 00:13:28.638 clat (usec): min=64, max=8424, avg=597.49, stdev=383.43 00:13:28.638 lat (usec): min=86, max=8470, avg=621.30, stdev=390.72 00:13:28.638 clat percentiles (usec): 00:13:28.638 | 50.000th=[ 515], 99.000th=[ 2180], 99.900th=[ 3621], 99.990th=[ 5080], 00:13:28.638 | 99.999th=[ 7308] 00:13:28.638 bw ( KiB/s): min=95792, max=185669, per=100.00%, avg=152097.95, stdev=3918.75, samples=114 00:13:28.638 iops : min=23948, max=46417, avg=38024.00, stdev=979.65, samples=114 00:13:28.638 lat (usec) : 100=0.02%, 250=14.72%, 500=42.18%, 750=26.38%, 1000=9.41% 00:13:28.638 lat (msec) : 2=6.23%, 4=1.02%, 10=0.04%, 250=0.01% 00:13:28.638 cpu : usr=51.54%, sys=29.89%, ctx=8837, majf=0, minf=29953 00:13:28.638 IO depths : 1=12.2%, 2=24.7%, 4=50.3%, 8=12.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:28.638 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:28.638 complete : 0=0.0%, 4=89.0%, 8=11.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:28.638 issued rwts: total=367042,369924,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:28.638 latency : target=0, window=0, percentile=100.00%, depth=8 00:13:28.638 00:13:28.638 Run status group 0 (all jobs): 00:13:28.638 READ: bw=143MiB/s (150MB/s), 143MiB/s-143MiB/s (150MB/s-150MB/s), io=1434MiB (1503MB), run=10002-10002msec 00:13:28.638 WRITE: bw=144MiB/s (151MB/s), 144MiB/s-144MiB/s (151MB/s-151MB/s), io=1445MiB (1515MB), run=10002-10002msec 00:13:28.638 ----------------------------------------------------- 00:13:28.638 Suppressions used: 00:13:28.638 count bytes template 00:13:28.638 6 48 /usr/src/fio/parse.c 00:13:28.638 2617 251232 /usr/src/fio/iolog.c 00:13:28.638 1 8 libtcmalloc_minimal.so 00:13:28.638 1 904 libcrypto.so 00:13:28.638 ----------------------------------------------------- 00:13:28.638 00:13:28.638 00:13:28.638 real 0m11.914s 00:13:28.638 user 0m32.407s 00:13:28.638 sys 0m18.228s 00:13:28.638 13:47:41 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:28.638 ************************************ 00:13:28.638 END TEST bdev_fio_rw_verify 00:13:28.638 ************************************ 00:13:28.638 13:47:41 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:13:28.639 13:47:41 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:13:28.639 13:47:41 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:28.639 13:47:41 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:13:28.639 13:47:41 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:28.639 13:47:41 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=trim 00:13:28.639 13:47:41 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type= 00:13:28.639 13:47:41 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:13:28.639 13:47:41 blockdev_xnvme.bdev_fio -- 
common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:13:28.639 13:47:41 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:13:28.639 13:47:41 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z trim ']' 00:13:28.639 13:47:41 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:13:28.639 13:47:41 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:28.639 13:47:41 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:13:28.639 13:47:41 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1313 -- # '[' trim == verify ']' 00:13:28.639 13:47:41 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # '[' trim == trim ']' 00:13:28.639 13:47:41 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1329 -- # echo rw=trimwrite 00:13:28.639 13:47:41 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:13:28.639 13:47:41 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "cebb5f80-bfd2-4c48-be64-349e870d4b50"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "cebb5f80-bfd2-4c48-be64-349e870d4b50",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "a8a6e0b4-f991-41a7-b789-6d67fc265a3a"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "a8a6e0b4-f991-41a7-b789-6d67fc265a3a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "c30b600f-f60d-4e55-9be6-c45366400e7e"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "c30b600f-f60d-4e55-9be6-c45366400e7e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' 
"nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "52b4a116-66ab-48a4-9c85-0d2cceb01f6e"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "52b4a116-66ab-48a4-9c85-0d2cceb01f6e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "c28f16b1-e842-4292-a4bb-8f1c4e10d7c3"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "c28f16b1-e842-4292-a4bb-8f1c4e10d7c3",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "678e76ad-7e6a-4f1a-ba6f-a98207554d4a"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "678e76ad-7e6a-4f1a-ba6f-a98207554d4a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:13:28.639 13:47:42 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:13:28.639 13:47:42 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:28.639 /home/vagrant/spdk_repo/spdk 00:13:28.639 13:47:42 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:13:28.639 13:47:42 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:13:28.639 13:47:42 blockdev_xnvme.bdev_fio -- 
bdev/blockdev.sh@363 -- # return 0 00:13:28.639 ************************************ 00:13:28.639 END TEST bdev_fio 00:13:28.639 ************************************ 00:13:28.639 00:13:28.639 real 0m12.082s 00:13:28.639 user 0m32.493s 00:13:28.639 sys 0m18.297s 00:13:28.639 13:47:42 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:28.639 13:47:42 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:13:28.639 13:47:42 blockdev_xnvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:13:28.639 13:47:42 blockdev_xnvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:13:28.639 13:47:42 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:13:28.639 13:47:42 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:28.639 13:47:42 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:28.639 ************************************ 00:13:28.639 START TEST bdev_verify 00:13:28.639 ************************************ 00:13:28.639 13:47:42 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:13:28.639 [2024-10-15 13:47:42.158997] Starting SPDK v25.01-pre git sha1 5a8c76d99 / DPDK 24.03.0 initialization... 00:13:28.639 [2024-10-15 13:47:42.159123] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70226 ] 00:13:28.639 [2024-10-15 13:47:42.307142] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:28.640 [2024-10-15 13:47:42.420432] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:28.640 [2024-10-15 13:47:42.420586] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:29.209 Running I/O for 5 seconds... 
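The bdev_fio suite that finished above runs stock fio against SPDK bdevs through the spdk_bdev external ioengine rather than a kernel block device; on this ASAN build the fio plugin and libasan are LD_PRELOADed together, exactly as detected in the trace. A sketch of the verify invocation, with every path and flag copied from the trace:

  LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' \
  /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 \
      /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio \
      --verify_state_save=0 \
      --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
      --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output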
00:13:31.536 23360.00 IOPS, 91.25 MiB/s [2024-10-15T13:47:46.264Z] 22000.00 IOPS, 85.94 MiB/s [2024-10-15T13:47:47.204Z] 22048.00 IOPS, 86.12 MiB/s [2024-10-15T13:47:48.153Z] 21936.00 IOPS, 85.69 MiB/s [2024-10-15T13:47:48.153Z] 21804.80 IOPS, 85.17 MiB/s 00:13:34.365 Latency(us) 00:13:34.365 [2024-10-15T13:47:48.153Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:34.365 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:34.365 Verification LBA range: start 0x0 length 0xa0000 00:13:34.365 nvme0n1 : 5.03 1602.29 6.26 0.00 0.00 79699.41 8721.33 78239.90 00:13:34.365 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:34.365 Verification LBA range: start 0xa0000 length 0xa0000 00:13:34.365 nvme0n1 : 5.07 1792.71 7.00 0.00 0.00 71254.18 6704.84 64931.05 00:13:34.365 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:34.365 Verification LBA range: start 0x0 length 0xbd0bd 00:13:34.365 nvme1n1 : 5.07 2070.40 8.09 0.00 0.00 61457.61 6276.33 68560.74 00:13:34.365 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:34.365 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:13:34.366 nvme1n1 : 5.06 2394.89 9.36 0.00 0.00 53146.20 5620.97 67350.84 00:13:34.366 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:34.366 Verification LBA range: start 0x0 length 0x80000 00:13:34.366 nvme2n1 : 5.08 1663.68 6.50 0.00 0.00 76377.65 7108.14 69367.34 00:13:34.366 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:34.366 Verification LBA range: start 0x80000 length 0x80000 00:13:34.366 nvme2n1 : 5.05 1824.76 7.13 0.00 0.00 69724.54 10435.35 70980.53 00:13:34.366 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:34.366 Verification LBA range: start 0x0 length 0x80000 00:13:34.366 nvme2n2 : 5.06 1594.98 6.23 0.00 0.00 79473.66 16837.71 79449.80 00:13:34.366 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:34.366 Verification LBA range: start 0x80000 length 0x80000 00:13:34.366 nvme2n2 : 5.07 1793.82 7.01 0.00 0.00 70795.11 11746.07 70173.93 00:13:34.366 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:34.366 Verification LBA range: start 0x0 length 0x80000 00:13:34.366 nvme2n3 : 5.07 1589.11 6.21 0.00 0.00 79570.22 7965.14 78239.90 00:13:34.366 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:34.366 Verification LBA range: start 0x80000 length 0x80000 00:13:34.366 nvme2n3 : 5.06 1795.85 7.02 0.00 0.00 70582.14 12451.84 70173.93 00:13:34.366 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:34.366 Verification LBA range: start 0x0 length 0x20000 00:13:34.366 nvme3n1 : 5.08 1611.34 6.29 0.00 0.00 78392.72 4310.25 77433.30 00:13:34.366 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:34.366 Verification LBA range: start 0x20000 length 0x20000 00:13:34.366 nvme3n1 : 5.07 1793.29 7.01 0.00 0.00 70546.14 6175.51 68560.74 00:13:34.366 [2024-10-15T13:47:48.154Z] =================================================================================================================== 00:13:34.366 [2024-10-15T13:47:48.154Z] Total : 21527.11 84.09 0.00 0.00 70795.17 4310.25 79449.80 00:13:35.306 00:13:35.306 real 0m6.692s 00:13:35.306 user 0m11.013s 00:13:35.306 sys 0m1.264s 00:13:35.306 13:47:48 blockdev_xnvme.bdev_verify -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:13:35.306 ************************************ 00:13:35.306 END TEST bdev_verify 00:13:35.306 ************************************ 00:13:35.306 13:47:48 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:13:35.306 13:47:48 blockdev_xnvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:13:35.306 13:47:48 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:13:35.306 13:47:48 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:35.306 13:47:48 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:35.306 ************************************ 00:13:35.306 START TEST bdev_verify_big_io 00:13:35.306 ************************************ 00:13:35.306 13:47:48 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:13:35.306 [2024-10-15 13:47:48.949466] Starting SPDK v25.01-pre git sha1 5a8c76d99 / DPDK 24.03.0 initialization... 00:13:35.306 [2024-10-15 13:47:48.949626] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70322 ] 00:13:35.565 [2024-10-15 13:47:49.105063] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:35.565 [2024-10-15 13:47:49.265768] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:35.565 [2024-10-15 13:47:49.265867] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:36.132 Running I/O for 5 seconds... 
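bdev_verify above and bdev_verify_big_io now running both drive the bdevperf example app against the same bdev.json; only the I/O size differs. A sketch of the invocation with the flags as passed in the trace (flag meanings in the comments are the usual bdevperf ones; the per-core duplication of each job in the results is the effect of -C, which lets every core submit I/O to every bdev):

  # -q 128: queue depth    -o: I/O size in bytes (4096 here, 65536 for big_io)
  # -w verify: write, read back, and compare    -t 5: run for 5 seconds
  # -m 0x3: reactors on cores 0 and 1
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
      -q 128 -o 4096 -w verify -t 5 -C -m 0x3

As a consistency check on the bdev_verify totals above: 21527.11 IOPS x 4096 B = 21527.11/256 MiB/s, which is the reported 84.09 MiB/s, so the MiB/s column is simply IOPS times block size.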
00:13:42.209 2336.00 IOPS, 146.00 MiB/s [2024-10-15T13:47:55.997Z] 3456.50 IOPS, 216.03 MiB/s 00:13:42.209 Latency(us) 00:13:42.209 [2024-10-15T13:47:55.997Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:42.210 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:42.210 Verification LBA range: start 0x0 length 0xa000 00:13:42.210 nvme0n1 : 5.96 100.59 6.29 0.00 0.00 1238003.94 141154.46 2348810.24 00:13:42.210 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:42.210 Verification LBA range: start 0xa000 length 0xa000 00:13:42.210 nvme0n1 : 5.95 121.02 7.56 0.00 0.00 1008927.30 5646.18 1045349.61 00:13:42.210 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:42.210 Verification LBA range: start 0x0 length 0xbd0b 00:13:42.210 nvme1n1 : 5.97 160.89 10.06 0.00 0.00 751790.76 93968.54 832408.02 00:13:42.210 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:42.210 Verification LBA range: start 0xbd0b length 0xbd0b 00:13:42.210 nvme1n1 : 5.82 122.18 7.64 0.00 0.00 968935.05 159706.19 1871304.86 00:13:42.210 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:42.210 Verification LBA range: start 0x0 length 0x8000 00:13:42.210 nvme2n1 : 5.97 115.21 7.20 0.00 0.00 1018185.00 137121.48 942105.21 00:13:42.210 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:42.210 Verification LBA range: start 0x8000 length 0x8000 00:13:42.210 nvme2n1 : 5.97 120.66 7.54 0.00 0.00 977174.32 15526.99 1161499.57 00:13:42.210 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:42.210 Verification LBA range: start 0x0 length 0x8000 00:13:42.210 nvme2n2 : 5.98 83.01 5.19 0.00 0.00 1363638.29 127442.31 2477865.75 00:13:42.210 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:42.210 Verification LBA range: start 0x8000 length 0x8000 00:13:42.210 nvme2n2 : 5.97 78.00 4.88 0.00 0.00 1468624.00 9477.51 2671449.01 00:13:42.210 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:42.210 Verification LBA range: start 0x0 length 0x8000 00:13:42.210 nvme2n3 : 5.98 111.05 6.94 0.00 0.00 985227.32 32062.23 1664816.05 00:13:42.210 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:42.210 Verification LBA range: start 0x8000 length 0x8000 00:13:42.210 nvme2n3 : 5.98 149.93 9.37 0.00 0.00 741042.69 9527.93 1109877.37 00:13:42.210 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:42.210 Verification LBA range: start 0x0 length 0x2000 00:13:42.210 nvme3n1 : 5.99 128.23 8.01 0.00 0.00 833591.70 2243.35 2129415.88 00:13:42.210 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:42.210 Verification LBA range: start 0x2000 length 0x2000 00:13:42.210 nvme3n1 : 5.97 146.03 9.13 0.00 0.00 735371.21 10536.17 1090519.04 00:13:42.210 [2024-10-15T13:47:55.998Z] =================================================================================================================== 00:13:42.210 [2024-10-15T13:47:55.998Z] Total : 1436.81 89.80 0.00 0.00 963005.43 2243.35 2671449.01 00:13:43.142 00:13:43.142 real 0m7.937s 00:13:43.142 user 0m14.424s 00:13:43.142 sys 0m0.541s 00:13:43.142 13:47:56 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:43.142 ************************************ 00:13:43.142 END TEST bdev_verify_big_io 
00:13:43.142 ************************************ 00:13:43.142 13:47:56 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:13:43.142 13:47:56 blockdev_xnvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:43.142 13:47:56 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:13:43.142 13:47:56 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:43.142 13:47:56 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:43.142 ************************************ 00:13:43.142 START TEST bdev_write_zeroes 00:13:43.142 ************************************ 00:13:43.142 13:47:56 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:43.142 [2024-10-15 13:47:56.917168] Starting SPDK v25.01-pre git sha1 5a8c76d99 / DPDK 24.03.0 initialization... 00:13:43.142 [2024-10-15 13:47:56.917739] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70437 ] 00:13:43.400 [2024-10-15 13:47:57.069546] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:43.657 [2024-10-15 13:47:57.188846] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:43.961 Running I/O for 1 seconds... 00:13:44.893 67936.00 IOPS, 265.38 MiB/s 00:13:44.893 Latency(us) 00:13:44.893 [2024-10-15T13:47:58.681Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:44.893 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:44.893 nvme0n1 : 1.02 9935.26 38.81 0.00 0.00 12872.96 7410.61 25508.63 00:13:44.893 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:44.893 nvme1n1 : 1.02 17838.30 69.68 0.00 0.00 7161.04 4889.99 19963.27 00:13:44.893 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:44.893 nvme2n1 : 1.03 9985.53 39.01 0.00 0.00 12783.20 5595.77 25710.28 00:13:44.893 Job: nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:44.893 nvme2n2 : 1.02 9906.69 38.70 0.00 0.00 12813.47 5318.50 24702.03 00:13:44.893 Job: nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:44.893 nvme2n3 : 1.02 9895.41 38.65 0.00 0.00 12819.11 5444.53 24702.03 00:13:44.893 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:44.893 nvme3n1 : 1.02 9883.35 38.61 0.00 0.00 12823.85 5545.35 25105.33 00:13:44.893 [2024-10-15T13:47:58.681Z] =================================================================================================================== 00:13:44.893 [2024-10-15T13:47:58.681Z] Total : 67444.54 263.46 0.00 0.00 11322.35 4889.99 25710.28 00:13:45.826 00:13:45.826 real 0m2.521s 00:13:45.826 user 0m1.788s 00:13:45.826 sys 0m0.586s 00:13:45.826 13:47:59 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:45.826 ************************************ 00:13:45.826 END TEST bdev_write_zeroes 00:13:45.826 ************************************ 00:13:45.826 13:47:59 
blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:13:45.826 13:47:59 blockdev_xnvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:45.826 13:47:59 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:13:45.826 13:47:59 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:45.826 13:47:59 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:45.826 ************************************ 00:13:45.826 START TEST bdev_json_nonenclosed 00:13:45.826 ************************************ 00:13:45.826 13:47:59 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:45.826 [2024-10-15 13:47:59.504645] Starting SPDK v25.01-pre git sha1 5a8c76d99 / DPDK 24.03.0 initialization... 00:13:45.826 [2024-10-15 13:47:59.504779] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70485 ] 00:13:46.084 [2024-10-15 13:47:59.657460] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:46.084 [2024-10-15 13:47:59.775409] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:46.084 [2024-10-15 13:47:59.775507] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:13:46.084 [2024-10-15 13:47:59.775525] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:13:46.084 [2024-10-15 13:47:59.775539] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:46.354 00:13:46.354 real 0m0.529s 00:13:46.354 user 0m0.320s 00:13:46.354 sys 0m0.104s 00:13:46.354 13:47:59 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:46.354 13:47:59 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:13:46.354 ************************************ 00:13:46.354 END TEST bdev_json_nonenclosed 00:13:46.354 ************************************ 00:13:46.354 13:47:59 blockdev_xnvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:46.354 13:47:59 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:13:46.354 13:47:59 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:46.354 13:47:59 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:46.354 ************************************ 00:13:46.354 START TEST bdev_json_nonarray 00:13:46.354 ************************************ 00:13:46.354 13:48:00 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:46.354 [2024-10-15 13:48:00.072671] Starting SPDK v25.01-pre git sha1 5a8c76d99 / DPDK 24.03.0 initialization... 
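bdev_json_nonenclosed and bdev_json_nonarray are negative tests: bdevperf is pointed at a deliberately malformed --json config and must exit cleanly through spdk_app_stop (hence the "spdk_app_stop'd on non-zero" warnings) rather than crash. The actual contents of nonenclosed.json and nonarray.json are not shown in this log; going only by the two error messages, minimal hypothetical stand-ins could look like:

  # nonenclosed.json (hypothetical): a valid JSON fragment, but not enclosed in {}
  "subsystems": []
  # nonarray.json (hypothetical): enclosed, but 'subsystems' is not an array
  { "subsystems": {} }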
00:13:46.354 [2024-10-15 13:48:00.072809] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70506 ] 00:13:46.611 [2024-10-15 13:48:00.219097] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:46.611 [2024-10-15 13:48:00.323809] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:46.611 [2024-10-15 13:48:00.323921] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:13:46.611 [2024-10-15 13:48:00.323955] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:13:46.611 [2024-10-15 13:48:00.323969] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:46.869 00:13:46.869 real 0m0.475s 00:13:46.869 user 0m0.276s 00:13:46.869 sys 0m0.096s 00:13:46.869 13:48:00 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:46.869 13:48:00 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:13:46.869 ************************************ 00:13:46.869 END TEST bdev_json_nonarray 00:13:46.869 ************************************ 00:13:46.869 13:48:00 blockdev_xnvme -- bdev/blockdev.sh@786 -- # [[ xnvme == bdev ]] 00:13:46.869 13:48:00 blockdev_xnvme -- bdev/blockdev.sh@793 -- # [[ xnvme == gpt ]] 00:13:46.869 13:48:00 blockdev_xnvme -- bdev/blockdev.sh@797 -- # [[ xnvme == crypto_sw ]] 00:13:46.869 13:48:00 blockdev_xnvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:13:46.869 13:48:00 blockdev_xnvme -- bdev/blockdev.sh@810 -- # cleanup 00:13:46.869 13:48:00 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:13:46.869 13:48:00 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:13:46.869 13:48:00 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:13:46.869 13:48:00 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:13:46.869 13:48:00 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:13:46.869 13:48:00 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:13:46.869 13:48:00 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:13:47.434 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:13.996 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:14:13.996 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:15:00.685 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:15:00.685 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:15:00.685 00:15:00.685 real 2m3.427s 00:15:00.685 user 1m31.765s 00:15:00.685 sys 4m15.703s 00:15:00.685 13:49:12 blockdev_xnvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:00.685 ************************************ 00:15:00.685 END TEST blockdev_xnvme 00:15:00.685 13:49:12 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:00.685 ************************************ 00:15:00.685 13:49:12 -- spdk/autotest.sh@247 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:15:00.685 13:49:12 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:15:00.685 13:49:12 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:00.685 13:49:12 -- 
common/autotest_common.sh@10 -- # set +x 00:15:00.685 ************************************ 00:15:00.685 START TEST ublk 00:15:00.685 ************************************ 00:15:00.685 13:49:12 ublk -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:15:00.685 * Looking for test storage... 00:15:00.685 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:15:00.685 13:49:12 ublk -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:15:00.685 13:49:12 ublk -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:15:00.685 13:49:12 ublk -- common/autotest_common.sh@1691 -- # lcov --version 00:15:00.685 13:49:12 ublk -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:15:00.685 13:49:12 ublk -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:00.685 13:49:12 ublk -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:00.685 13:49:12 ublk -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:00.685 13:49:12 ublk -- scripts/common.sh@336 -- # IFS=.-: 00:15:00.685 13:49:12 ublk -- scripts/common.sh@336 -- # read -ra ver1 00:15:00.685 13:49:12 ublk -- scripts/common.sh@337 -- # IFS=.-: 00:15:00.685 13:49:12 ublk -- scripts/common.sh@337 -- # read -ra ver2 00:15:00.685 13:49:12 ublk -- scripts/common.sh@338 -- # local 'op=<' 00:15:00.685 13:49:12 ublk -- scripts/common.sh@340 -- # ver1_l=2 00:15:00.685 13:49:12 ublk -- scripts/common.sh@341 -- # ver2_l=1 00:15:00.685 13:49:12 ublk -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:00.685 13:49:12 ublk -- scripts/common.sh@344 -- # case "$op" in 00:15:00.685 13:49:12 ublk -- scripts/common.sh@345 -- # : 1 00:15:00.685 13:49:12 ublk -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:00.685 13:49:12 ublk -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:00.685 13:49:12 ublk -- scripts/common.sh@365 -- # decimal 1 00:15:00.685 13:49:12 ublk -- scripts/common.sh@353 -- # local d=1 00:15:00.685 13:49:12 ublk -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:00.685 13:49:12 ublk -- scripts/common.sh@355 -- # echo 1 00:15:00.685 13:49:12 ublk -- scripts/common.sh@365 -- # ver1[v]=1 00:15:00.685 13:49:12 ublk -- scripts/common.sh@366 -- # decimal 2 00:15:00.685 13:49:12 ublk -- scripts/common.sh@353 -- # local d=2 00:15:00.685 13:49:12 ublk -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:00.685 13:49:12 ublk -- scripts/common.sh@355 -- # echo 2 00:15:00.685 13:49:12 ublk -- scripts/common.sh@366 -- # ver2[v]=2 00:15:00.685 13:49:12 ublk -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:00.685 13:49:12 ublk -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:00.685 13:49:12 ublk -- scripts/common.sh@368 -- # return 0 00:15:00.685 13:49:12 ublk -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:00.685 13:49:12 ublk -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:15:00.685 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:00.685 --rc genhtml_branch_coverage=1 00:15:00.685 --rc genhtml_function_coverage=1 00:15:00.685 --rc genhtml_legend=1 00:15:00.685 --rc geninfo_all_blocks=1 00:15:00.685 --rc geninfo_unexecuted_blocks=1 00:15:00.685 00:15:00.685 ' 00:15:00.685 13:49:12 ublk -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:15:00.685 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:00.685 --rc genhtml_branch_coverage=1 00:15:00.685 --rc genhtml_function_coverage=1 00:15:00.685 --rc genhtml_legend=1 00:15:00.685 --rc geninfo_all_blocks=1 00:15:00.685 --rc geninfo_unexecuted_blocks=1 00:15:00.685 00:15:00.685 ' 00:15:00.685 13:49:12 ublk -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:15:00.685 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:00.685 --rc genhtml_branch_coverage=1 00:15:00.685 --rc genhtml_function_coverage=1 00:15:00.685 --rc genhtml_legend=1 00:15:00.685 --rc geninfo_all_blocks=1 00:15:00.685 --rc geninfo_unexecuted_blocks=1 00:15:00.685 00:15:00.685 ' 00:15:00.685 13:49:12 ublk -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:15:00.685 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:00.685 --rc genhtml_branch_coverage=1 00:15:00.685 --rc genhtml_function_coverage=1 00:15:00.685 --rc genhtml_legend=1 00:15:00.685 --rc geninfo_all_blocks=1 00:15:00.685 --rc geninfo_unexecuted_blocks=1 00:15:00.685 00:15:00.685 ' 00:15:00.685 13:49:12 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:15:00.685 13:49:12 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:15:00.685 13:49:12 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:15:00.685 13:49:12 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:15:00.685 13:49:12 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:15:00.685 13:49:12 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:15:00.685 13:49:12 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:15:00.685 13:49:12 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:15:00.685 13:49:12 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:15:00.685 13:49:12 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:15:00.685 13:49:12 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:15:00.685 13:49:12 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:15:00.685 13:49:12 ublk 
-- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:15:00.685 13:49:12 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:15:00.685 13:49:12 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:15:00.685 13:49:12 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:15:00.685 13:49:12 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:15:00.685 13:49:12 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:15:00.685 13:49:12 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:15:00.685 13:49:12 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:15:00.685 13:49:12 ublk -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:15:00.685 13:49:12 ublk -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:00.685 13:49:12 ublk -- common/autotest_common.sh@10 -- # set +x 00:15:00.685 ************************************ 00:15:00.685 START TEST test_save_ublk_config 00:15:00.685 ************************************ 00:15:00.685 13:49:12 ublk.test_save_ublk_config -- common/autotest_common.sh@1125 -- # test_save_config 00:15:00.685 13:49:12 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:15:00.685 13:49:12 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=70815 00:15:00.685 13:49:12 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:15:00.685 13:49:12 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 70815 00:15:00.685 13:49:12 ublk.test_save_ublk_config -- common/autotest_common.sh@831 -- # '[' -z 70815 ']' 00:15:00.685 13:49:12 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:15:00.685 13:49:12 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:00.685 13:49:12 ublk.test_save_ublk_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:00.685 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:00.685 13:49:12 ublk.test_save_ublk_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:00.685 13:49:12 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:00.685 13:49:12 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:15:00.685 [2024-10-15 13:49:12.372595] Starting SPDK v25.01-pre git sha1 5a8c76d99 / DPDK 24.03.0 initialization... 
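test_save_ublk_config boots a fresh spdk_tgt with ublk debug logging, creates a ublk target and a ublk disk backed by a malloc bdev over RPC, and then snapshots the live state with save_config (the JSON dump that follows). The exact CLI flag spellings for the ublk RPCs are not visible in this trace, so the sketch below leans on the method names and parameters recorded in the saved config (cpumask "1"; malloc0 with 8192 x 4096-byte blocks = 32 MiB; ublk_id 0, one queue of depth 128):

  # against the default /var/tmp/spdk.sock of the freshly started spdk_tgt -L ublk
  rpc.py ublk_create_target                      # cpumask "1" per the saved config
  rpc.py bdev_malloc_create -b malloc0 32 4096   # 8192 blocks x 4096 B = 32 MiB
  rpc.py ublk_start_disk malloc0 0               # exposed as /dev/ublkb0; queues/depth as saved below
  rpc.py save_config                             # emits the JSON shown below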
00:15:00.685 [2024-10-15 13:49:12.372717] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70815 ] 00:15:00.685 [2024-10-15 13:49:12.517446] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:00.685 [2024-10-15 13:49:12.634471] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:00.685 13:49:13 ublk.test_save_ublk_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:00.685 13:49:13 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # return 0 00:15:00.685 13:49:13 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:15:00.686 13:49:13 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:15:00.686 13:49:13 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.686 13:49:13 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:15:00.686 [2024-10-15 13:49:13.322249] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:15:00.686 [2024-10-15 13:49:13.323112] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:15:00.686 malloc0 00:15:00.686 [2024-10-15 13:49:13.386361] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:15:00.686 [2024-10-15 13:49:13.386453] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:15:00.686 [2024-10-15 13:49:13.386463] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:15:00.686 [2024-10-15 13:49:13.386471] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:15:00.686 [2024-10-15 13:49:13.395328] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:15:00.686 [2024-10-15 13:49:13.395349] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:15:00.686 [2024-10-15 13:49:13.402251] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:15:00.686 [2024-10-15 13:49:13.402365] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:15:00.686 [2024-10-15 13:49:13.419250] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:15:00.686 0 00:15:00.686 13:49:13 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.686 13:49:13 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:15:00.686 13:49:13 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.686 13:49:13 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:15:00.686 13:49:13 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.686 13:49:13 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:15:00.686 "subsystems": [ 00:15:00.686 { 00:15:00.686 "subsystem": "fsdev", 00:15:00.686 "config": [ 00:15:00.686 { 00:15:00.686 "method": "fsdev_set_opts", 00:15:00.686 "params": { 00:15:00.686 "fsdev_io_pool_size": 65535, 00:15:00.686 "fsdev_io_cache_size": 256 00:15:00.686 } 00:15:00.686 } 00:15:00.686 ] 00:15:00.686 }, 00:15:00.686 { 00:15:00.686 "subsystem": "keyring", 00:15:00.686 "config": [] 00:15:00.686 }, 00:15:00.686 { 00:15:00.686 "subsystem": "iobuf", 00:15:00.686 "config": [ 00:15:00.686 { 
00:15:00.686 "method": "iobuf_set_options", 00:15:00.686 "params": { 00:15:00.686 "small_pool_count": 8192, 00:15:00.686 "large_pool_count": 1024, 00:15:00.686 "small_bufsize": 8192, 00:15:00.686 "large_bufsize": 135168 00:15:00.686 } 00:15:00.686 } 00:15:00.686 ] 00:15:00.686 }, 00:15:00.686 { 00:15:00.686 "subsystem": "sock", 00:15:00.686 "config": [ 00:15:00.686 { 00:15:00.686 "method": "sock_set_default_impl", 00:15:00.686 "params": { 00:15:00.686 "impl_name": "posix" 00:15:00.686 } 00:15:00.686 }, 00:15:00.686 { 00:15:00.686 "method": "sock_impl_set_options", 00:15:00.686 "params": { 00:15:00.686 "impl_name": "ssl", 00:15:00.686 "recv_buf_size": 4096, 00:15:00.686 "send_buf_size": 4096, 00:15:00.686 "enable_recv_pipe": true, 00:15:00.686 "enable_quickack": false, 00:15:00.686 "enable_placement_id": 0, 00:15:00.686 "enable_zerocopy_send_server": true, 00:15:00.686 "enable_zerocopy_send_client": false, 00:15:00.686 "zerocopy_threshold": 0, 00:15:00.686 "tls_version": 0, 00:15:00.686 "enable_ktls": false 00:15:00.686 } 00:15:00.686 }, 00:15:00.686 { 00:15:00.686 "method": "sock_impl_set_options", 00:15:00.686 "params": { 00:15:00.686 "impl_name": "posix", 00:15:00.686 "recv_buf_size": 2097152, 00:15:00.686 "send_buf_size": 2097152, 00:15:00.686 "enable_recv_pipe": true, 00:15:00.686 "enable_quickack": false, 00:15:00.686 "enable_placement_id": 0, 00:15:00.686 "enable_zerocopy_send_server": true, 00:15:00.686 "enable_zerocopy_send_client": false, 00:15:00.686 "zerocopy_threshold": 0, 00:15:00.686 "tls_version": 0, 00:15:00.686 "enable_ktls": false 00:15:00.686 } 00:15:00.686 } 00:15:00.686 ] 00:15:00.686 }, 00:15:00.686 { 00:15:00.686 "subsystem": "vmd", 00:15:00.686 "config": [] 00:15:00.686 }, 00:15:00.686 { 00:15:00.686 "subsystem": "accel", 00:15:00.686 "config": [ 00:15:00.686 { 00:15:00.686 "method": "accel_set_options", 00:15:00.686 "params": { 00:15:00.686 "small_cache_size": 128, 00:15:00.686 "large_cache_size": 16, 00:15:00.686 "task_count": 2048, 00:15:00.686 "sequence_count": 2048, 00:15:00.686 "buf_count": 2048 00:15:00.686 } 00:15:00.686 } 00:15:00.686 ] 00:15:00.686 }, 00:15:00.686 { 00:15:00.686 "subsystem": "bdev", 00:15:00.686 "config": [ 00:15:00.686 { 00:15:00.686 "method": "bdev_set_options", 00:15:00.686 "params": { 00:15:00.686 "bdev_io_pool_size": 65535, 00:15:00.686 "bdev_io_cache_size": 256, 00:15:00.686 "bdev_auto_examine": true, 00:15:00.686 "iobuf_small_cache_size": 128, 00:15:00.686 "iobuf_large_cache_size": 16 00:15:00.686 } 00:15:00.686 }, 00:15:00.686 { 00:15:00.686 "method": "bdev_raid_set_options", 00:15:00.686 "params": { 00:15:00.686 "process_window_size_kb": 1024, 00:15:00.686 "process_max_bandwidth_mb_sec": 0 00:15:00.686 } 00:15:00.686 }, 00:15:00.686 { 00:15:00.686 "method": "bdev_iscsi_set_options", 00:15:00.686 "params": { 00:15:00.686 "timeout_sec": 30 00:15:00.686 } 00:15:00.686 }, 00:15:00.686 { 00:15:00.686 "method": "bdev_nvme_set_options", 00:15:00.686 "params": { 00:15:00.686 "action_on_timeout": "none", 00:15:00.686 "timeout_us": 0, 00:15:00.686 "timeout_admin_us": 0, 00:15:00.686 "keep_alive_timeout_ms": 10000, 00:15:00.686 "arbitration_burst": 0, 00:15:00.686 "low_priority_weight": 0, 00:15:00.686 "medium_priority_weight": 0, 00:15:00.686 "high_priority_weight": 0, 00:15:00.686 "nvme_adminq_poll_period_us": 10000, 00:15:00.686 "nvme_ioq_poll_period_us": 0, 00:15:00.686 "io_queue_requests": 0, 00:15:00.686 "delay_cmd_submit": true, 00:15:00.686 "transport_retry_count": 4, 00:15:00.686 "bdev_retry_count": 3, 00:15:00.686 
"transport_ack_timeout": 0, 00:15:00.686 "ctrlr_loss_timeout_sec": 0, 00:15:00.686 "reconnect_delay_sec": 0, 00:15:00.686 "fast_io_fail_timeout_sec": 0, 00:15:00.686 "disable_auto_failback": false, 00:15:00.686 "generate_uuids": false, 00:15:00.686 "transport_tos": 0, 00:15:00.686 "nvme_error_stat": false, 00:15:00.686 "rdma_srq_size": 0, 00:15:00.686 "io_path_stat": false, 00:15:00.686 "allow_accel_sequence": false, 00:15:00.686 "rdma_max_cq_size": 0, 00:15:00.686 "rdma_cm_event_timeout_ms": 0, 00:15:00.686 "dhchap_digests": [ 00:15:00.686 "sha256", 00:15:00.686 "sha384", 00:15:00.686 "sha512" 00:15:00.686 ], 00:15:00.686 "dhchap_dhgroups": [ 00:15:00.686 "null", 00:15:00.686 "ffdhe2048", 00:15:00.686 "ffdhe3072", 00:15:00.686 "ffdhe4096", 00:15:00.686 "ffdhe6144", 00:15:00.686 "ffdhe8192" 00:15:00.686 ] 00:15:00.686 } 00:15:00.686 }, 00:15:00.686 { 00:15:00.686 "method": "bdev_nvme_set_hotplug", 00:15:00.686 "params": { 00:15:00.686 "period_us": 100000, 00:15:00.686 "enable": false 00:15:00.686 } 00:15:00.686 }, 00:15:00.686 { 00:15:00.686 "method": "bdev_malloc_create", 00:15:00.686 "params": { 00:15:00.686 "name": "malloc0", 00:15:00.686 "num_blocks": 8192, 00:15:00.686 "block_size": 4096, 00:15:00.686 "physical_block_size": 4096, 00:15:00.686 "uuid": "d64d82ed-be6c-4460-bdc2-75e425f0f0c8", 00:15:00.686 "optimal_io_boundary": 0, 00:15:00.686 "md_size": 0, 00:15:00.686 "dif_type": 0, 00:15:00.686 "dif_is_head_of_md": false, 00:15:00.687 "dif_pi_format": 0 00:15:00.687 } 00:15:00.687 }, 00:15:00.687 { 00:15:00.687 "method": "bdev_wait_for_examine" 00:15:00.687 } 00:15:00.687 ] 00:15:00.687 }, 00:15:00.687 { 00:15:00.687 "subsystem": "scsi", 00:15:00.687 "config": null 00:15:00.687 }, 00:15:00.687 { 00:15:00.687 "subsystem": "scheduler", 00:15:00.687 "config": [ 00:15:00.687 { 00:15:00.687 "method": "framework_set_scheduler", 00:15:00.687 "params": { 00:15:00.687 "name": "static" 00:15:00.687 } 00:15:00.687 } 00:15:00.687 ] 00:15:00.687 }, 00:15:00.687 { 00:15:00.687 "subsystem": "vhost_scsi", 00:15:00.687 "config": [] 00:15:00.687 }, 00:15:00.687 { 00:15:00.687 "subsystem": "vhost_blk", 00:15:00.687 "config": [] 00:15:00.687 }, 00:15:00.687 { 00:15:00.687 "subsystem": "ublk", 00:15:00.687 "config": [ 00:15:00.687 { 00:15:00.687 "method": "ublk_create_target", 00:15:00.687 "params": { 00:15:00.687 "cpumask": "1" 00:15:00.687 } 00:15:00.687 }, 00:15:00.687 { 00:15:00.687 "method": "ublk_start_disk", 00:15:00.687 "params": { 00:15:00.687 "bdev_name": "malloc0", 00:15:00.687 "ublk_id": 0, 00:15:00.687 "num_queues": 1, 00:15:00.687 "queue_depth": 128 00:15:00.687 } 00:15:00.687 } 00:15:00.687 ] 00:15:00.687 }, 00:15:00.687 { 00:15:00.687 "subsystem": "nbd", 00:15:00.687 "config": [] 00:15:00.687 }, 00:15:00.687 { 00:15:00.687 "subsystem": "nvmf", 00:15:00.687 "config": [ 00:15:00.687 { 00:15:00.687 "method": "nvmf_set_config", 00:15:00.687 "params": { 00:15:00.687 "discovery_filter": "match_any", 00:15:00.687 "admin_cmd_passthru": { 00:15:00.687 "identify_ctrlr": false 00:15:00.687 }, 00:15:00.687 "dhchap_digests": [ 00:15:00.687 "sha256", 00:15:00.687 "sha384", 00:15:00.687 "sha512" 00:15:00.687 ], 00:15:00.687 "dhchap_dhgroups": [ 00:15:00.687 "null", 00:15:00.687 "ffdhe2048", 00:15:00.687 "ffdhe3072", 00:15:00.687 "ffdhe4096", 00:15:00.687 "ffdhe6144", 00:15:00.687 "ffdhe8192" 00:15:00.687 ] 00:15:00.687 } 00:15:00.687 }, 00:15:00.687 { 00:15:00.687 "method": "nvmf_set_max_subsystems", 00:15:00.687 "params": { 00:15:00.687 "max_subsystems": 1024 00:15:00.687 } 00:15:00.687 }, 00:15:00.687 
{ 00:15:00.687 "method": "nvmf_set_crdt", 00:15:00.687 "params": { 00:15:00.687 "crdt1": 0, 00:15:00.687 "crdt2": 0, 00:15:00.687 "crdt3": 0 00:15:00.687 } 00:15:00.687 } 00:15:00.687 ] 00:15:00.687 }, 00:15:00.687 { 00:15:00.687 "subsystem": "iscsi", 00:15:00.687 "config": [ 00:15:00.687 { 00:15:00.687 "method": "iscsi_set_options", 00:15:00.687 "params": { 00:15:00.687 "node_base": "iqn.2016-06.io.spdk", 00:15:00.687 "max_sessions": 128, 00:15:00.687 "max_connections_per_session": 2, 00:15:00.687 "max_queue_depth": 64, 00:15:00.687 "default_time2wait": 2, 00:15:00.687 "default_time2retain": 20, 00:15:00.687 "first_burst_length": 8192, 00:15:00.687 "immediate_data": true, 00:15:00.687 "allow_duplicated_isid": false, 00:15:00.687 "error_recovery_level": 0, 00:15:00.687 "nop_timeout": 60, 00:15:00.687 "nop_in_interval": 30, 00:15:00.687 "disable_chap": false, 00:15:00.687 "require_chap": false, 00:15:00.687 "mutual_chap": false, 00:15:00.687 "chap_group": 0, 00:15:00.687 "max_large_datain_per_connection": 64, 00:15:00.687 "max_r2t_per_connection": 4, 00:15:00.687 "pdu_pool_size": 36864, 00:15:00.687 "immediate_data_pool_size": 16384, 00:15:00.687 "data_out_pool_size": 2048 00:15:00.687 } 00:15:00.687 } 00:15:00.687 ] 00:15:00.687 } 00:15:00.687 ] 00:15:00.687 }' 00:15:00.687 13:49:13 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 70815 00:15:00.687 13:49:13 ublk.test_save_ublk_config -- common/autotest_common.sh@950 -- # '[' -z 70815 ']' 00:15:00.687 13:49:13 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # kill -0 70815 00:15:00.687 13:49:13 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # uname 00:15:00.687 13:49:13 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:00.687 13:49:13 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70815 00:15:00.687 13:49:13 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:00.687 killing process with pid 70815 00:15:00.687 13:49:13 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:00.687 13:49:13 ublk.test_save_ublk_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70815' 00:15:00.687 13:49:13 ublk.test_save_ublk_config -- common/autotest_common.sh@969 -- # kill 70815 00:15:00.687 13:49:13 ublk.test_save_ublk_config -- common/autotest_common.sh@974 -- # wait 70815 00:15:01.260 [2024-10-15 13:49:14.853339] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:15:01.260 [2024-10-15 13:49:14.881292] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:15:01.260 [2024-10-15 13:49:14.881488] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:15:01.260 [2024-10-15 13:49:14.889282] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:15:01.261 [2024-10-15 13:49:14.889355] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:15:01.261 [2024-10-15 13:49:14.889372] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:15:01.261 [2024-10-15 13:49:14.889405] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:15:01.261 [2024-10-15 13:49:14.889590] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:15:02.679 13:49:16 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=70872 00:15:02.679 13:49:16 ublk.test_save_ublk_config -- ublk/ublk.sh@121 -- # waitforlisten 70872 
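Note: the save/restore round trip above reduces to a handful of RPCs: start a target, expose a malloc bdev through ublk, capture the live configuration with save_config, kill the target, then boot a fresh spdk_tgt from the captured JSON. A minimal sketch, assuming rpc.py and spdk_tgt are on PATH and with sizes inferred from the saved config above (num_blocks=8192 at block_size=4096, one queue of depth 128); tgtpid is a hypothetical variable holding the first target's pid:

    # phase 1: build the device and capture its config
    rpc.py ublk_create_target
    rpc.py bdev_malloc_create -b malloc0 32 4096   # 32 MiB -> num_blocks=8192 @ 4096 (assumed sizes)
    rpc.py ublk_start_disk malloc0 0 -q 1 -d 128   # exposes /dev/ublkb0
    config=$(rpc.py save_config)                   # the JSON dumped above
    kill "$tgtpid" && wait "$tgtpid"               # stop the first target

    # phase 2: replay the config into a fresh target; bash process substitution
    # is what the -c /dev/fd/63 seen in this log corresponds to
    spdk_tgt -L ublk -c <(echo "$config") &
    tgtpid=$!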
00:15:02.679 13:49:16 ublk.test_save_ublk_config -- common/autotest_common.sh@831 -- # '[' -z 70872 ']' 00:15:02.679 13:49:16 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:02.679 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:02.679 13:49:16 ublk.test_save_ublk_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:02.679 13:49:16 ublk.test_save_ublk_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:02.679 13:49:16 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:02.679 13:49:16 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:15:02.679 13:49:16 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:15:02.679 13:49:16 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:15:02.679 "subsystems": [ 00:15:02.679 { 00:15:02.679 "subsystem": "fsdev", 00:15:02.679 "config": [ 00:15:02.679 { 00:15:02.679 "method": "fsdev_set_opts", 00:15:02.679 "params": { 00:15:02.679 "fsdev_io_pool_size": 65535, 00:15:02.679 "fsdev_io_cache_size": 256 00:15:02.679 } 00:15:02.679 } 00:15:02.679 ] 00:15:02.679 }, 00:15:02.679 { 00:15:02.679 "subsystem": "keyring", 00:15:02.679 "config": [] 00:15:02.679 }, 00:15:02.679 { 00:15:02.679 "subsystem": "iobuf", 00:15:02.679 "config": [ 00:15:02.679 { 00:15:02.679 "method": "iobuf_set_options", 00:15:02.679 "params": { 00:15:02.679 "small_pool_count": 8192, 00:15:02.679 "large_pool_count": 1024, 00:15:02.679 "small_bufsize": 8192, 00:15:02.679 "large_bufsize": 135168 00:15:02.679 } 00:15:02.679 } 00:15:02.679 ] 00:15:02.679 }, 00:15:02.679 { 00:15:02.679 "subsystem": "sock", 00:15:02.679 "config": [ 00:15:02.679 { 00:15:02.679 "method": "sock_set_default_impl", 00:15:02.679 "params": { 00:15:02.679 "impl_name": "posix" 00:15:02.679 } 00:15:02.679 }, 00:15:02.679 { 00:15:02.679 "method": "sock_impl_set_options", 00:15:02.679 "params": { 00:15:02.679 "impl_name": "ssl", 00:15:02.679 "recv_buf_size": 4096, 00:15:02.679 "send_buf_size": 4096, 00:15:02.679 "enable_recv_pipe": true, 00:15:02.679 "enable_quickack": false, 00:15:02.679 "enable_placement_id": 0, 00:15:02.679 "enable_zerocopy_send_server": true, 00:15:02.679 "enable_zerocopy_send_client": false, 00:15:02.679 "zerocopy_threshold": 0, 00:15:02.679 "tls_version": 0, 00:15:02.679 "enable_ktls": false 00:15:02.679 } 00:15:02.679 }, 00:15:02.679 { 00:15:02.679 "method": "sock_impl_set_options", 00:15:02.679 "params": { 00:15:02.679 "impl_name": "posix", 00:15:02.679 "recv_buf_size": 2097152, 00:15:02.679 "send_buf_size": 2097152, 00:15:02.679 "enable_recv_pipe": true, 00:15:02.679 "enable_quickack": false, 00:15:02.679 "enable_placement_id": 0, 00:15:02.679 "enable_zerocopy_send_server": true, 00:15:02.679 "enable_zerocopy_send_client": false, 00:15:02.679 "zerocopy_threshold": 0, 00:15:02.679 "tls_version": 0, 00:15:02.679 "enable_ktls": false 00:15:02.679 } 00:15:02.679 } 00:15:02.679 ] 00:15:02.679 }, 00:15:02.679 { 00:15:02.679 "subsystem": "vmd", 00:15:02.679 "config": [] 00:15:02.679 }, 00:15:02.679 { 00:15:02.679 "subsystem": "accel", 00:15:02.679 "config": [ 00:15:02.679 { 00:15:02.679 "method": "accel_set_options", 00:15:02.679 "params": { 00:15:02.679 "small_cache_size": 128, 00:15:02.679 "large_cache_size": 16, 00:15:02.679 "task_count": 2048, 00:15:02.679 
"sequence_count": 2048, 00:15:02.679 "buf_count": 2048 00:15:02.679 } 00:15:02.679 } 00:15:02.679 ] 00:15:02.679 }, 00:15:02.679 { 00:15:02.679 "subsystem": "bdev", 00:15:02.679 "config": [ 00:15:02.679 { 00:15:02.679 "method": "bdev_set_options", 00:15:02.679 "params": { 00:15:02.679 "bdev_io_pool_size": 65535, 00:15:02.679 "bdev_io_cache_size": 256, 00:15:02.679 "bdev_auto_examine": true, 00:15:02.679 "iobuf_small_cache_size": 128, 00:15:02.679 "iobuf_large_cache_size": 16 00:15:02.679 } 00:15:02.679 }, 00:15:02.679 { 00:15:02.679 "method": "bdev_raid_set_options", 00:15:02.679 "params": { 00:15:02.679 "process_window_size_kb": 1024, 00:15:02.679 "process_max_bandwidth_mb_sec": 0 00:15:02.679 } 00:15:02.679 }, 00:15:02.679 { 00:15:02.679 "method": "bdev_iscsi_set_options", 00:15:02.679 "params": { 00:15:02.679 "timeout_sec": 30 00:15:02.679 } 00:15:02.679 }, 00:15:02.679 { 00:15:02.679 "method": "bdev_nvme_set_options", 00:15:02.679 "params": { 00:15:02.679 "action_on_timeout": "none", 00:15:02.679 "timeout_us": 0, 00:15:02.679 "timeout_admin_us": 0, 00:15:02.679 "keep_alive_timeout_ms": 10000, 00:15:02.679 "arbitration_burst": 0, 00:15:02.679 "low_priority_weight": 0, 00:15:02.679 "medium_priority_weight": 0, 00:15:02.679 "high_priority_weight": 0, 00:15:02.679 "nvme_adminq_poll_period_us": 10000, 00:15:02.679 "nvme_ioq_poll_period_us": 0, 00:15:02.679 "io_queue_requests": 0, 00:15:02.679 "delay_cmd_submit": true, 00:15:02.679 "transport_retry_count": 4, 00:15:02.679 "bdev_retry_count": 3, 00:15:02.679 "transport_ack_timeout": 0, 00:15:02.679 "ctrlr_loss_timeout_sec": 0, 00:15:02.679 "reconnect_delay_sec": 0, 00:15:02.679 "fast_io_fail_timeout_sec": 0, 00:15:02.679 "disable_auto_failback": false, 00:15:02.679 "generate_uuids": false, 00:15:02.679 "transport_tos": 0, 00:15:02.679 "nvme_error_stat": false, 00:15:02.679 "rdma_srq_size": 0, 00:15:02.679 "io_path_stat": false, 00:15:02.679 "allow_accel_sequence": false, 00:15:02.680 "rdma_max_cq_size": 0, 00:15:02.680 "rdma_cm_event_timeout_ms": 0, 00:15:02.680 "dhchap_digests": [ 00:15:02.680 "sha256", 00:15:02.680 "sha384", 00:15:02.680 "sha512" 00:15:02.680 ], 00:15:02.680 "dhchap_dhgroups": [ 00:15:02.680 "null", 00:15:02.680 "ffdhe2048", 00:15:02.680 "ffdhe3072", 00:15:02.680 "ffdhe4096", 00:15:02.680 "ffdhe6144", 00:15:02.680 "ffdhe8192" 00:15:02.680 ] 00:15:02.680 } 00:15:02.680 }, 00:15:02.680 { 00:15:02.680 "method": "bdev_nvme_set_hotplug", 00:15:02.680 "params": { 00:15:02.680 "period_us": 100000, 00:15:02.680 "enable": false 00:15:02.680 } 00:15:02.680 }, 00:15:02.680 { 00:15:02.680 "method": "bdev_malloc_create", 00:15:02.680 "params": { 00:15:02.680 "name": "malloc0", 00:15:02.680 "num_blocks": 8192, 00:15:02.680 "block_size": 4096, 00:15:02.680 "physical_block_size": 4096, 00:15:02.680 "uuid": "d64d82ed-be6c-4460-bdc2-75e425f0f0c8", 00:15:02.680 "optimal_io_boundary": 0, 00:15:02.680 "md_size": 0, 00:15:02.680 "dif_type": 0, 00:15:02.680 "dif_is_head_of_md": false, 00:15:02.680 "dif_pi_format": 0 00:15:02.680 } 00:15:02.680 }, 00:15:02.680 { 00:15:02.680 "method": "bdev_wait_for_examine" 00:15:02.680 } 00:15:02.680 ] 00:15:02.680 }, 00:15:02.680 { 00:15:02.680 "subsystem": "scsi", 00:15:02.680 "config": null 00:15:02.680 }, 00:15:02.680 { 00:15:02.680 "subsystem": "scheduler", 00:15:02.680 "config": [ 00:15:02.680 { 00:15:02.680 "method": "framework_set_scheduler", 00:15:02.680 "params": { 00:15:02.680 "name": "static" 00:15:02.680 } 00:15:02.680 } 00:15:02.680 ] 00:15:02.680 }, 00:15:02.680 { 00:15:02.680 "subsystem": 
"vhost_scsi", 00:15:02.680 "config": [] 00:15:02.680 }, 00:15:02.680 { 00:15:02.680 "subsystem": "vhost_blk", 00:15:02.680 "config": [] 00:15:02.680 }, 00:15:02.680 { 00:15:02.680 "subsystem": "ublk", 00:15:02.680 "config": [ 00:15:02.680 { 00:15:02.680 "method": "ublk_create_target", 00:15:02.680 "params": { 00:15:02.680 "cpumask": "1" 00:15:02.680 } 00:15:02.680 }, 00:15:02.680 { 00:15:02.680 "method": "ublk_start_disk", 00:15:02.680 "params": { 00:15:02.680 "bdev_name": "malloc0", 00:15:02.680 "ublk_id": 0, 00:15:02.680 "num_queues": 1, 00:15:02.680 "queue_depth": 128 00:15:02.680 } 00:15:02.680 } 00:15:02.680 ] 00:15:02.680 }, 00:15:02.680 { 00:15:02.680 "subsystem": "nbd", 00:15:02.680 "config": [] 00:15:02.680 }, 00:15:02.680 { 00:15:02.680 "subsystem": "nvmf", 00:15:02.680 "config": [ 00:15:02.680 { 00:15:02.680 "method": "nvmf_set_config", 00:15:02.680 "params": { 00:15:02.680 "discovery_filter": "match_any", 00:15:02.680 "admin_cmd_passthru": { 00:15:02.680 "identify_ctrlr": false 00:15:02.680 }, 00:15:02.680 "dhchap_digests": [ 00:15:02.680 "sha256", 00:15:02.680 "sha384", 00:15:02.680 "sha512" 00:15:02.680 ], 00:15:02.680 "dhchap_dhgroups": [ 00:15:02.680 "null", 00:15:02.680 "ffdhe2048", 00:15:02.680 "ffdhe3072", 00:15:02.680 "ffdhe4096", 00:15:02.680 "ffdhe6144", 00:15:02.680 "ffdhe8192" 00:15:02.680 ] 00:15:02.680 } 00:15:02.680 }, 00:15:02.680 { 00:15:02.680 "method": "nvmf_set_max_subsystems", 00:15:02.680 "params": { 00:15:02.680 "max_subsystems": 1024 00:15:02.680 } 00:15:02.680 }, 00:15:02.680 { 00:15:02.680 "method": "nvmf_set_crdt", 00:15:02.680 "params": { 00:15:02.680 "crdt1": 0, 00:15:02.680 "crdt2": 0, 00:15:02.680 "crdt3": 0 00:15:02.680 } 00:15:02.680 } 00:15:02.680 ] 00:15:02.680 }, 00:15:02.680 { 00:15:02.680 "subsystem": "iscsi", 00:15:02.680 "config": [ 00:15:02.680 { 00:15:02.680 "method": "iscsi_set_options", 00:15:02.680 "params": { 00:15:02.680 "node_base": "iqn.2016-06.io.spdk", 00:15:02.680 "max_sessions": 128, 00:15:02.680 "max_connections_per_session": 2, 00:15:02.680 "max_queue_depth": 64, 00:15:02.680 "default_time2wait": 2, 00:15:02.680 "default_time2retain": 20, 00:15:02.680 "first_burst_length": 8192, 00:15:02.680 "immediate_data": true, 00:15:02.680 "allow_duplicated_isid": false, 00:15:02.680 "error_recovery_level": 0, 00:15:02.680 "nop_timeout": 60, 00:15:02.680 "nop_in_interval": 30, 00:15:02.680 "disable_chap": false, 00:15:02.680 "require_chap": false, 00:15:02.680 "mutual_chap": false, 00:15:02.680 "chap_group": 0, 00:15:02.680 "max_large_datain_per_connection": 64, 00:15:02.680 "max_r2t_per_connection": 4, 00:15:02.680 "pdu_pool_size": 36864, 00:15:02.680 "immediate_data_pool_size": 16384, 00:15:02.680 "data_out_pool_size": 2048 00:15:02.680 } 00:15:02.680 } 00:15:02.680 ] 00:15:02.680 } 00:15:02.680 ] 00:15:02.680 }' 00:15:02.680 [2024-10-15 13:49:16.406869] Starting SPDK v25.01-pre git sha1 5a8c76d99 / DPDK 24.03.0 initialization... 
00:15:02.680 [2024-10-15 13:49:16.407376] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70872 ] 00:15:02.940 [2024-10-15 13:49:16.560925] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:02.940 [2024-10-15 13:49:16.680593] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:03.884 [2024-10-15 13:49:17.511240] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:15:03.884 [2024-10-15 13:49:17.512116] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:15:03.884 [2024-10-15 13:49:17.519358] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:15:03.884 [2024-10-15 13:49:17.519435] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:15:03.884 [2024-10-15 13:49:17.519445] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:15:03.884 [2024-10-15 13:49:17.519453] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:15:03.884 [2024-10-15 13:49:17.528320] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:15:03.884 [2024-10-15 13:49:17.528342] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:15:03.884 [2024-10-15 13:49:17.535252] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:15:03.884 [2024-10-15 13:49:17.535348] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:15:03.884 [2024-10-15 13:49:17.552242] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:15:03.884 13:49:17 ublk.test_save_ublk_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:03.884 13:49:17 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # return 0 00:15:03.884 13:49:17 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:15:03.884 13:49:17 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:15:03.884 13:49:17 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.884 13:49:17 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:15:03.884 13:49:17 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.884 13:49:17 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:15:03.885 13:49:17 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:15:03.885 13:49:17 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 70872 00:15:03.885 13:49:17 ublk.test_save_ublk_config -- common/autotest_common.sh@950 -- # '[' -z 70872 ']' 00:15:03.885 13:49:17 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # kill -0 70872 00:15:03.885 13:49:17 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # uname 00:15:03.885 13:49:17 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:03.885 13:49:17 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70872 00:15:03.885 13:49:17 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:03.885 killing process with pid 70872 00:15:03.885 
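Note: the post-restore check that just ran (ublk.sh@122-123 above) confirms the device name survived the restart and that the block node really exists. The same probe, written out as a sketch:

    blkpath=$(rpc.py ublk_get_disks | jq -r '.[0].ublk_device')
    [[ $blkpath == /dev/ublkb0 ]]   # same name as before the restart
    [[ -b $blkpath ]]               # kernel actually created the block device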
13:49:17 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:03.885 13:49:17 ublk.test_save_ublk_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70872' 00:15:03.885 13:49:17 ublk.test_save_ublk_config -- common/autotest_common.sh@969 -- # kill 70872 00:15:03.885 13:49:17 ublk.test_save_ublk_config -- common/autotest_common.sh@974 -- # wait 70872 00:15:05.272 [2024-10-15 13:49:18.845158] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:15:05.272 [2024-10-15 13:49:18.889260] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:15:05.272 [2024-10-15 13:49:18.889401] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:15:05.272 [2024-10-15 13:49:18.900257] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:15:05.272 [2024-10-15 13:49:18.900308] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:15:05.272 [2024-10-15 13:49:18.900316] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:15:05.272 [2024-10-15 13:49:18.900347] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:15:05.272 [2024-10-15 13:49:18.900492] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:15:06.659 13:49:20 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:15:06.659 00:15:06.659 real 0m8.022s 00:15:06.659 user 0m5.723s 00:15:06.659 sys 0m2.918s 00:15:06.659 13:49:20 ublk.test_save_ublk_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:06.659 13:49:20 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:15:06.659 ************************************ 00:15:06.659 END TEST test_save_ublk_config 00:15:06.659 ************************************ 00:15:06.659 13:49:20 ublk -- ublk/ublk.sh@139 -- # spdk_pid=70951 00:15:06.659 13:49:20 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:06.659 13:49:20 ublk -- ublk/ublk.sh@141 -- # waitforlisten 70951 00:15:06.659 13:49:20 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:15:06.659 13:49:20 ublk -- common/autotest_common.sh@831 -- # '[' -z 70951 ']' 00:15:06.659 13:49:20 ublk -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:06.659 13:49:20 ublk -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:06.659 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:06.659 13:49:20 ublk -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:06.659 13:49:20 ublk -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:06.659 13:49:20 ublk -- common/autotest_common.sh@10 -- # set +x 00:15:06.659 [2024-10-15 13:49:20.431250] Starting SPDK v25.01-pre git sha1 5a8c76d99 / DPDK 24.03.0 initialization... 
00:15:06.659 [2024-10-15 13:49:20.431380] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70951 ] 00:15:06.920 [2024-10-15 13:49:20.582042] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:07.182 [2024-10-15 13:49:20.709331] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:07.182 [2024-10-15 13:49:20.709563] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:07.753 13:49:21 ublk -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:07.753 13:49:21 ublk -- common/autotest_common.sh@864 -- # return 0 00:15:07.753 13:49:21 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:15:07.753 13:49:21 ublk -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:15:07.753 13:49:21 ublk -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:07.753 13:49:21 ublk -- common/autotest_common.sh@10 -- # set +x 00:15:07.753 ************************************ 00:15:07.753 START TEST test_create_ublk 00:15:07.753 ************************************ 00:15:07.753 13:49:21 ublk.test_create_ublk -- common/autotest_common.sh@1125 -- # test_create_ublk 00:15:07.753 13:49:21 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:15:07.753 13:49:21 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.753 13:49:21 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:07.753 [2024-10-15 13:49:21.413248] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:15:07.753 [2024-10-15 13:49:21.415328] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:15:07.753 13:49:21 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.753 13:49:21 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:15:07.753 13:49:21 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:15:07.753 13:49:21 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.753 13:49:21 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:08.015 13:49:21 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.015 13:49:21 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:15:08.015 13:49:21 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:15:08.015 13:49:21 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.015 13:49:21 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:08.015 [2024-10-15 13:49:21.629448] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:15:08.015 [2024-10-15 13:49:21.629939] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:15:08.015 [2024-10-15 13:49:21.629960] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:15:08.015 [2024-10-15 13:49:21.629970] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:15:08.015 [2024-10-15 13:49:21.638490] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:15:08.015 [2024-10-15 13:49:21.638512] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:15:08.015 
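Note: stripped of the autotest wrappers, test_create_ublk's setup is three RPCs; the DEBUG lines around this point show the resulting ADD_DEV/SET_PARAMS/START_DEV control commands. A sketch, assuming rpc.py on PATH:

    rpc.py ublk_create_target                      # one-time target init
    rpc.py bdev_malloc_create 128 4096             # 128 MiB RAM bdev, prints "Malloc0"
    rpc.py ublk_start_disk Malloc0 0 -q 4 -d 512   # id 0, 4 queues, depth 512 -> /dev/ublkb0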
[2024-10-15 13:49:21.645258] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:15:08.015 [2024-10-15 13:49:21.656321] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:15:08.015 [2024-10-15 13:49:21.675262] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:15:08.015 13:49:21 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.015 13:49:21 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:15:08.015 13:49:21 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:15:08.015 13:49:21 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:15:08.015 13:49:21 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.015 13:49:21 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:08.015 13:49:21 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.015 13:49:21 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:15:08.015 { 00:15:08.015 "ublk_device": "/dev/ublkb0", 00:15:08.015 "id": 0, 00:15:08.015 "queue_depth": 512, 00:15:08.015 "num_queues": 4, 00:15:08.015 "bdev_name": "Malloc0" 00:15:08.015 } 00:15:08.015 ]' 00:15:08.015 13:49:21 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:15:08.015 13:49:21 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:15:08.015 13:49:21 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:15:08.015 13:49:21 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:15:08.015 13:49:21 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:15:08.015 13:49:21 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:15:08.015 13:49:21 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:15:08.304 13:49:21 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:15:08.304 13:49:21 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:15:08.304 13:49:21 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:15:08.304 13:49:21 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:15:08.304 13:49:21 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:15:08.304 13:49:21 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:15:08.304 13:49:21 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:15:08.304 13:49:21 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:15:08.304 13:49:21 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:15:08.304 13:49:21 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:15:08.304 13:49:21 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:15:08.304 13:49:21 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:15:08.304 13:49:21 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:15:08.304 13:49:21 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 
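Note: because the job below is --time_based with --runtime=10, fio spends the whole budget writing the 0xcc pattern and warns that the separate read-back verification phase never starts. A variant that lets the verify pass actually run would simply drop the time cap (sketch, same flags as the test otherwise):

    fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 \
        --rw=write --direct=1 --do_verify=1 --verify=pattern \
        --verify_pattern=0xcc --verify_state_save=0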
00:15:08.304 13:49:21 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0
00:15:08.304 fio: verification read phase will never start because write phase uses all of runtime
00:15:08.304 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1
00:15:08.304 fio-3.35
00:15:08.304 Starting 1 process
00:15:20.518
00:15:20.518 fio_test: (groupid=0, jobs=1): err= 0: pid=70995: Tue Oct 15 13:49:32 2024
00:15:20.518   write: IOPS=15.7k, BW=61.2MiB/s (64.1MB/s)(612MiB/10001msec); 0 zone resets
00:15:20.518     clat (usec): min=36, max=4097, avg=63.03, stdev=95.72
00:15:20.518     lat (usec): min=36, max=4098, avg=63.51, stdev=95.73
00:15:20.518     clat percentiles (usec):
00:15:20.518      | 1.00th=[ 44], 5.00th=[ 48], 10.00th=[ 51], 20.00th=[ 53],
00:15:20.518      | 30.00th=[ 55], 40.00th=[ 57], 50.00th=[ 59], 60.00th=[ 61],
00:15:20.518      | 70.00th=[ 62], 80.00th=[ 65], 90.00th=[ 70], 95.00th=[ 75],
00:15:20.518      | 99.00th=[ 89], 99.50th=[ 100], 99.90th=[ 1942], 99.95th=[ 2737],
00:15:20.518      | 99.99th=[ 3589]
00:15:20.518    bw ( KiB/s): min=53936, max=67320, per=99.28%, avg=62176.84, stdev=3969.65, samples=19
00:15:20.518    iops       : min=13484, max=16830, avg=15544.21, stdev=992.41, samples=19
00:15:20.518   lat (usec)  : 50=9.25%, 100=90.26%, 250=0.26%, 500=0.05%, 750=0.01%
00:15:20.518   lat (usec)  : 1000=0.02%
00:15:20.518   lat (msec)  : 2=0.06%, 4=0.09%, 10=0.01%
00:15:20.518   cpu         : usr=2.61%, sys=15.03%, ctx=156591, majf=0, minf=796
00:15:20.518   IO depths   : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:15:20.518      submit   : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:15:20.518      complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:15:20.518      issued rwts: total=0,156588,0,0 short=0,0,0,0 dropped=0,0,0,0
00:15:20.518      latency  : target=0, window=0, percentile=100.00%, depth=1
00:15:20.518
00:15:20.518 Run status group 0 (all jobs):
00:15:20.518   WRITE: bw=61.2MiB/s (64.1MB/s), 61.2MiB/s-61.2MiB/s (64.1MB/s-64.1MB/s), io=612MiB (641MB), run=10001-10001msec
00:15:20.518
00:15:20.518 Disk stats (read/write):
00:15:20.518   ublkb0: ios=0/154709, merge=0/0, ticks=0/8068, in_queue=8068, util=99.09%
13:49:32 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0
13:49:32 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable
13:49:32 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x
00:15:20.519 [2024-10-15 13:49:32.096423] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV
00:15:20.519 [2024-10-15 13:49:32.134744] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed
00:15:20.519 [2024-10-15 13:49:32.135629] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV
00:15:20.519 [2024-10-15 13:49:32.142278] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed
00:15:20.519 [2024-10-15 13:49:32.142525] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq
00:15:20.519 [2024-10-15 13:49:32.142541] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped
13:49:32 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
13:49:32 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd ublk_stop_disk
0 00:15:20.519 13:49:32 ublk.test_create_ublk -- common/autotest_common.sh@650 -- # local es=0 00:15:20.519 13:49:32 ublk.test_create_ublk -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:15:20.519 13:49:32 ublk.test_create_ublk -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:15:20.519 13:49:32 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:20.519 13:49:32 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:15:20.519 13:49:32 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:20.519 13:49:32 ublk.test_create_ublk -- common/autotest_common.sh@653 -- # rpc_cmd ublk_stop_disk 0 00:15:20.519 13:49:32 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.519 13:49:32 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:20.519 [2024-10-15 13:49:32.157315] ublk.c:1087:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:15:20.519 request: 00:15:20.519 { 00:15:20.519 "ublk_id": 0, 00:15:20.519 "method": "ublk_stop_disk", 00:15:20.519 "req_id": 1 00:15:20.519 } 00:15:20.519 Got JSON-RPC error response 00:15:20.519 response: 00:15:20.519 { 00:15:20.519 "code": -19, 00:15:20.519 "message": "No such device" 00:15:20.519 } 00:15:20.519 13:49:32 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:15:20.519 13:49:32 ublk.test_create_ublk -- common/autotest_common.sh@653 -- # es=1 00:15:20.519 13:49:32 ublk.test_create_ublk -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:20.519 13:49:32 ublk.test_create_ublk -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:20.519 13:49:32 ublk.test_create_ublk -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:20.519 13:49:32 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:15:20.519 13:49:32 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.519 13:49:32 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:20.519 [2024-10-15 13:49:32.173309] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:15:20.519 [2024-10-15 13:49:32.177143] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:15:20.519 [2024-10-15 13:49:32.177180] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:15:20.519 13:49:32 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.519 13:49:32 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:15:20.519 13:49:32 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.519 13:49:32 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:20.519 13:49:32 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.519 13:49:32 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:15:20.519 13:49:32 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:15:20.519 13:49:32 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.519 13:49:32 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:20.519 13:49:32 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.519 13:49:32 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:15:20.519 13:49:32 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:15:20.519 13:49:32 ublk.test_create_ublk -- 
lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:15:20.519 13:49:32 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:15:20.519 13:49:32 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.519 13:49:32 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:20.519 13:49:32 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.519 13:49:32 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:15:20.519 13:49:32 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:15:20.519 13:49:32 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:15:20.519 00:15:20.519 real 0m11.254s 00:15:20.519 user 0m0.544s 00:15:20.519 sys 0m1.597s 00:15:20.519 13:49:32 ublk.test_create_ublk -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:20.519 13:49:32 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:20.519 ************************************ 00:15:20.519 END TEST test_create_ublk 00:15:20.519 ************************************ 00:15:20.519 13:49:32 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:15:20.519 13:49:32 ublk -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:15:20.519 13:49:32 ublk -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:20.519 13:49:32 ublk -- common/autotest_common.sh@10 -- # set +x 00:15:20.519 ************************************ 00:15:20.519 START TEST test_create_multi_ublk 00:15:20.519 ************************************ 00:15:20.519 13:49:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@1125 -- # test_create_multi_ublk 00:15:20.519 13:49:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:15:20.519 13:49:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.519 13:49:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:20.519 [2024-10-15 13:49:32.713239] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:15:20.519 [2024-10-15 13:49:32.715021] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:15:20.519 13:49:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.519 13:49:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:15:20.519 13:49:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:15:20.519 13:49:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:20.519 13:49:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:15:20.519 13:49:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.519 13:49:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:20.519 13:49:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.519 13:49:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:15:20.519 13:49:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:15:20.519 13:49:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.519 13:49:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:20.519 [2024-10-15 13:49:32.968358] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 
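Note: test_create_multi_ublk repeats the same create sequence once per device id; the `for i in $(seq 0 $MAX_DEV_ID)` traces here correspond to a loop along these lines (MAX_DEV_ID is 3 in this run):

    for i in $(seq 0 3); do
      rpc.py bdev_malloc_create -b "Malloc$i" 128 4096      # one backing bdev per disk
      rpc.py ublk_start_disk "Malloc$i" "$i" -q 4 -d 512    # -> /dev/ublkb$i
    done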
00:15:20.519 [2024-10-15 13:49:32.968702] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:15:20.519 [2024-10-15 13:49:32.968716] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:15:20.519 [2024-10-15 13:49:32.968726] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:15:20.519 [2024-10-15 13:49:32.980251] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:15:20.519 [2024-10-15 13:49:32.980273] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:15:20.519 [2024-10-15 13:49:32.992238] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:15:20.519 [2024-10-15 13:49:32.992776] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:15:20.519 [2024-10-15 13:49:33.040243] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:15:20.519 13:49:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.519 13:49:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:15:20.519 13:49:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:20.519 13:49:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:15:20.519 13:49:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.519 13:49:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:20.519 13:49:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.519 13:49:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:15:20.519 13:49:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:15:20.519 13:49:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.519 13:49:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:20.519 [2024-10-15 13:49:33.264359] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:15:20.519 [2024-10-15 13:49:33.264700] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:15:20.519 [2024-10-15 13:49:33.264716] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:15:20.519 [2024-10-15 13:49:33.264723] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:15:20.519 [2024-10-15 13:49:33.272255] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:15:20.519 [2024-10-15 13:49:33.272272] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:15:20.519 [2024-10-15 13:49:33.280245] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:15:20.519 [2024-10-15 13:49:33.280780] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:15:20.519 [2024-10-15 13:49:33.284251] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:15:20.519 13:49:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.519 13:49:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:15:20.519 13:49:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:20.519 13:49:33 
ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:15:20.519 13:49:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.519 13:49:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:20.519 13:49:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.519 13:49:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:15:20.519 13:49:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:15:20.519 13:49:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.519 13:49:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:20.519 [2024-10-15 13:49:33.468364] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:15:20.519 [2024-10-15 13:49:33.468705] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:15:20.519 [2024-10-15 13:49:33.468717] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:15:20.519 [2024-10-15 13:49:33.468725] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:15:20.520 [2024-10-15 13:49:33.476262] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:15:20.520 [2024-10-15 13:49:33.476283] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:15:20.520 [2024-10-15 13:49:33.484246] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:15:20.520 [2024-10-15 13:49:33.484790] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:15:20.520 [2024-10-15 13:49:33.493269] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:15:20.520 13:49:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.520 13:49:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:15:20.520 13:49:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:20.520 13:49:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:15:20.520 13:49:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.520 13:49:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:20.520 13:49:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.520 13:49:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:15:20.520 13:49:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:15:20.520 13:49:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.520 13:49:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:20.520 [2024-10-15 13:49:33.676360] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:15:20.520 [2024-10-15 13:49:33.676698] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:15:20.520 [2024-10-15 13:49:33.676712] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:15:20.520 [2024-10-15 13:49:33.676718] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:15:20.520 [2024-10-15 
13:49:33.684257] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:15:20.520 [2024-10-15 13:49:33.684274] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:15:20.520 [2024-10-15 13:49:33.692247] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:15:20.520 [2024-10-15 13:49:33.692797] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:15:20.520 [2024-10-15 13:49:33.701272] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:15:20.520 13:49:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.520 13:49:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:15:20.520 13:49:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:15:20.520 13:49:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.520 13:49:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:20.520 13:49:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.520 13:49:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:15:20.520 { 00:15:20.520 "ublk_device": "/dev/ublkb0", 00:15:20.520 "id": 0, 00:15:20.520 "queue_depth": 512, 00:15:20.520 "num_queues": 4, 00:15:20.520 "bdev_name": "Malloc0" 00:15:20.520 }, 00:15:20.520 { 00:15:20.520 "ublk_device": "/dev/ublkb1", 00:15:20.520 "id": 1, 00:15:20.520 "queue_depth": 512, 00:15:20.520 "num_queues": 4, 00:15:20.520 "bdev_name": "Malloc1" 00:15:20.520 }, 00:15:20.520 { 00:15:20.520 "ublk_device": "/dev/ublkb2", 00:15:20.520 "id": 2, 00:15:20.520 "queue_depth": 512, 00:15:20.520 "num_queues": 4, 00:15:20.520 "bdev_name": "Malloc2" 00:15:20.520 }, 00:15:20.520 { 00:15:20.520 "ublk_device": "/dev/ublkb3", 00:15:20.520 "id": 3, 00:15:20.520 "queue_depth": 512, 00:15:20.520 "num_queues": 4, 00:15:20.520 "bdev_name": "Malloc3" 00:15:20.520 } 00:15:20.520 ]' 00:15:20.520 13:49:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:15:20.520 13:49:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:20.520 13:49:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:15:20.520 13:49:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:15:20.520 13:49:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:15:20.520 13:49:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:15:20.520 13:49:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:15:20.520 13:49:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:15:20.520 13:49:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:15:20.520 13:49:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:15:20.520 13:49:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:15:20.520 13:49:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:15:20.520 13:49:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:20.520 13:49:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:15:20.520 13:49:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = \/\d\e\v\/\u\b\l\k\b\1 ]] 
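Note: the jq probes above and below walk every entry of the ublk_get_disks array and compare it field by field against the expected values. The same check written as a loop (illustrative, not the test's literal code):

    disks=$(rpc.py ublk_get_disks)
    for i in $(seq 0 3); do
      [[ $(jq -r ".[$i].ublk_device" <<<"$disks") == "/dev/ublkb$i" ]]
      [[ $(jq -r ".[$i].queue_depth" <<<"$disks") == 512 ]]
      [[ $(jq -r ".[$i].bdev_name"   <<<"$disks") == "Malloc$i" ]]
    done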
00:15:20.520 13:49:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:15:20.520 13:49:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:15:20.520 13:49:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:15:20.520 13:49:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:15:20.520 13:49:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:15:20.520 13:49:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:15:20.520 13:49:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:15:20.520 13:49:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:15:20.520 13:49:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:20.520 13:49:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:15:20.520 13:49:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:15:20.520 13:49:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:15:20.520 13:49:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:15:20.520 13:49:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:15:20.520 13:49:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:15:20.520 13:49:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:15:20.520 13:49:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:15:20.520 13:49:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:15:20.520 13:49:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:15:20.520 13:49:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:20.520 13:49:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:15:20.520 13:49:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:15:20.520 13:49:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:15:20.520 13:49:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:15:20.520 13:49:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:15:20.814 13:49:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:15:20.814 13:49:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:15:20.814 13:49:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:15:20.814 13:49:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:15:20.814 13:49:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:15:20.814 13:49:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:15:20.814 13:49:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:15:20.814 13:49:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:20.814 13:49:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:15:20.814 13:49:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.814 13:49:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:20.814 [2024-10-15 13:49:34.396365] ublk.c: 469:ublk_ctrl_cmd_submit: 
*DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:15:20.814 [2024-10-15 13:49:34.448295] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:15:20.814 [2024-10-15 13:49:34.449026] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:15:20.814 [2024-10-15 13:49:34.454275] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:15:20.814 [2024-10-15 13:49:34.454524] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:15:20.814 [2024-10-15 13:49:34.454533] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:15:20.814 13:49:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.814 13:49:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:20.814 13:49:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:15:20.814 13:49:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.814 13:49:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:20.814 [2024-10-15 13:49:34.472311] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:15:20.814 [2024-10-15 13:49:34.504289] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:15:20.814 [2024-10-15 13:49:34.504946] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:15:20.814 [2024-10-15 13:49:34.512264] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:15:20.814 [2024-10-15 13:49:34.512504] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:15:20.814 [2024-10-15 13:49:34.512512] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:15:20.814 13:49:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.814 13:49:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:20.814 13:49:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:15:20.814 13:49:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.814 13:49:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:20.814 [2024-10-15 13:49:34.528337] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:15:20.814 [2024-10-15 13:49:34.563280] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:15:20.814 [2024-10-15 13:49:34.563921] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:15:20.814 [2024-10-15 13:49:34.572286] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:15:20.814 [2024-10-15 13:49:34.572530] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:15:20.814 [2024-10-15 13:49:34.572539] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:15:21.073 13:49:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.073 13:49:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:21.073 13:49:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:15:21.073 13:49:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.073 13:49:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 
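Note: teardown mirrors setup in reverse: stop each disk, destroy the target (with the longer -t 120 RPC timeout seen below, since destroy waits for the kernel side to finish shutting down), then delete the backing bdevs. A sketch:

    for i in $(seq 0 3); do rpc.py ublk_stop_disk "$i"; done
    rpc.py -t 120 ublk_destroy_target
    for i in $(seq 0 3); do rpc.py bdev_malloc_delete "Malloc$i"; done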
00:15:21.073 [2024-10-15 13:49:34.587323] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:15:21.073 [2024-10-15 13:49:34.622816] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:15:21.073 [2024-10-15 13:49:34.623615] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:15:21.073 [2024-10-15 13:49:34.630247] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:15:21.073 [2024-10-15 13:49:34.630479] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:15:21.073 [2024-10-15 13:49:34.630488] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:15:21.073 13:49:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.073 13:49:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:15:21.073 [2024-10-15 13:49:34.822327] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:15:21.073 [2024-10-15 13:49:34.826120] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:15:21.073 [2024-10-15 13:49:34.826153] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:15:21.073 13:49:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:15:21.073 13:49:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:21.073 13:49:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:15:21.073 13:49:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.073 13:49:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:21.638 13:49:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.638 13:49:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:21.638 13:49:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:15:21.638 13:49:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.638 13:49:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:21.896 13:49:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.896 13:49:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:21.896 13:49:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:15:21.896 13:49:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.896 13:49:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:22.461 13:49:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.461 13:49:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:22.461 13:49:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:15:22.461 13:49:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.461 13:49:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:22.720 13:49:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.720 13:49:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:15:22.720 13:49:36 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # 
rpc_cmd bdev_get_bdevs 00:15:22.720 13:49:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.720 13:49:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:22.720 13:49:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.720 13:49:36 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:15:22.720 13:49:36 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:15:22.720 13:49:36 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:15:22.720 13:49:36 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:15:22.720 13:49:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.720 13:49:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:22.720 13:49:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.720 13:49:36 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:15:22.720 13:49:36 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:15:22.720 ************************************ 00:15:22.720 END TEST test_create_multi_ublk 00:15:22.720 ************************************ 00:15:22.720 13:49:36 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:15:22.720 00:15:22.720 real 0m3.763s 00:15:22.720 user 0m0.800s 00:15:22.720 sys 0m0.174s 00:15:22.720 13:49:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:22.720 13:49:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:22.720 13:49:36 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:15:22.720 13:49:36 ublk -- ublk/ublk.sh@147 -- # cleanup 00:15:22.720 13:49:36 ublk -- ublk/ublk.sh@130 -- # killprocess 70951 00:15:22.720 13:49:36 ublk -- common/autotest_common.sh@950 -- # '[' -z 70951 ']' 00:15:22.720 13:49:36 ublk -- common/autotest_common.sh@954 -- # kill -0 70951 00:15:22.720 13:49:36 ublk -- common/autotest_common.sh@955 -- # uname 00:15:22.720 13:49:36 ublk -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:22.720 13:49:36 ublk -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70951 00:15:22.978 killing process with pid 70951 00:15:22.978 13:49:36 ublk -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:22.978 13:49:36 ublk -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:22.978 13:49:36 ublk -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70951' 00:15:22.978 13:49:36 ublk -- common/autotest_common.sh@969 -- # kill 70951 00:15:22.978 13:49:36 ublk -- common/autotest_common.sh@974 -- # wait 70951 00:15:23.544 [2024-10-15 13:49:37.275840] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:15:23.544 [2024-10-15 13:49:37.276075] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:15:24.477 00:15:24.477 real 0m25.845s 00:15:24.477 user 0m36.414s 00:15:24.477 sys 0m10.438s 00:15:24.477 13:49:37 ublk -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:24.477 13:49:37 ublk -- common/autotest_common.sh@10 -- # set +x 00:15:24.477 ************************************ 00:15:24.477 END TEST ublk 00:15:24.477 ************************************ 00:15:24.477 13:49:38 -- spdk/autotest.sh@248 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:15:24.477 13:49:38 -- common/autotest_common.sh@1101 -- # '[' 2 -le 
1 ']' 00:15:24.477 13:49:38 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:24.477 13:49:38 -- common/autotest_common.sh@10 -- # set +x 00:15:24.477 ************************************ 00:15:24.477 START TEST ublk_recovery 00:15:24.477 ************************************ 00:15:24.477 13:49:38 ublk_recovery -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:15:24.477 * Looking for test storage... 00:15:24.477 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:15:24.477 13:49:38 ublk_recovery -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:15:24.477 13:49:38 ublk_recovery -- common/autotest_common.sh@1691 -- # lcov --version 00:15:24.477 13:49:38 ublk_recovery -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:15:24.477 13:49:38 ublk_recovery -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:15:24.477 13:49:38 ublk_recovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:24.477 13:49:38 ublk_recovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:24.477 13:49:38 ublk_recovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:24.477 13:49:38 ublk_recovery -- scripts/common.sh@336 -- # IFS=.-: 00:15:24.477 13:49:38 ublk_recovery -- scripts/common.sh@336 -- # read -ra ver1 00:15:24.477 13:49:38 ublk_recovery -- scripts/common.sh@337 -- # IFS=.-: 00:15:24.477 13:49:38 ublk_recovery -- scripts/common.sh@337 -- # read -ra ver2 00:15:24.477 13:49:38 ublk_recovery -- scripts/common.sh@338 -- # local 'op=<' 00:15:24.477 13:49:38 ublk_recovery -- scripts/common.sh@340 -- # ver1_l=2 00:15:24.477 13:49:38 ublk_recovery -- scripts/common.sh@341 -- # ver2_l=1 00:15:24.477 13:49:38 ublk_recovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:24.477 13:49:38 ublk_recovery -- scripts/common.sh@344 -- # case "$op" in 00:15:24.477 13:49:38 ublk_recovery -- scripts/common.sh@345 -- # : 1 00:15:24.477 13:49:38 ublk_recovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:24.477 13:49:38 ublk_recovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:24.477 13:49:38 ublk_recovery -- scripts/common.sh@365 -- # decimal 1 00:15:24.477 13:49:38 ublk_recovery -- scripts/common.sh@353 -- # local d=1 00:15:24.477 13:49:38 ublk_recovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:24.477 13:49:38 ublk_recovery -- scripts/common.sh@355 -- # echo 1 00:15:24.477 13:49:38 ublk_recovery -- scripts/common.sh@365 -- # ver1[v]=1 00:15:24.477 13:49:38 ublk_recovery -- scripts/common.sh@366 -- # decimal 2 00:15:24.477 13:49:38 ublk_recovery -- scripts/common.sh@353 -- # local d=2 00:15:24.477 13:49:38 ublk_recovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:24.477 13:49:38 ublk_recovery -- scripts/common.sh@355 -- # echo 2 00:15:24.477 13:49:38 ublk_recovery -- scripts/common.sh@366 -- # ver2[v]=2 00:15:24.477 13:49:38 ublk_recovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:24.477 13:49:38 ublk_recovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:24.477 13:49:38 ublk_recovery -- scripts/common.sh@368 -- # return 0 00:15:24.477 13:49:38 ublk_recovery -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:24.477 13:49:38 ublk_recovery -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:15:24.477 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:24.477 --rc genhtml_branch_coverage=1 00:15:24.477 --rc genhtml_function_coverage=1 00:15:24.477 --rc genhtml_legend=1 00:15:24.477 --rc geninfo_all_blocks=1 00:15:24.477 --rc geninfo_unexecuted_blocks=1 00:15:24.477 00:15:24.477 ' 00:15:24.477 13:49:38 ublk_recovery -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:15:24.477 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:24.477 --rc genhtml_branch_coverage=1 00:15:24.477 --rc genhtml_function_coverage=1 00:15:24.477 --rc genhtml_legend=1 00:15:24.477 --rc geninfo_all_blocks=1 00:15:24.477 --rc geninfo_unexecuted_blocks=1 00:15:24.477 00:15:24.477 ' 00:15:24.477 13:49:38 ublk_recovery -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:15:24.477 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:24.477 --rc genhtml_branch_coverage=1 00:15:24.477 --rc genhtml_function_coverage=1 00:15:24.477 --rc genhtml_legend=1 00:15:24.477 --rc geninfo_all_blocks=1 00:15:24.477 --rc geninfo_unexecuted_blocks=1 00:15:24.477 00:15:24.477 ' 00:15:24.477 13:49:38 ublk_recovery -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:15:24.477 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:24.477 --rc genhtml_branch_coverage=1 00:15:24.477 --rc genhtml_function_coverage=1 00:15:24.477 --rc genhtml_legend=1 00:15:24.477 --rc geninfo_all_blocks=1 00:15:24.477 --rc geninfo_unexecuted_blocks=1 00:15:24.477 00:15:24.477 ' 00:15:24.477 13:49:38 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:15:24.477 13:49:38 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:15:24.477 13:49:38 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:15:24.477 13:49:38 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:15:24.477 13:49:38 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:15:24.477 13:49:38 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:15:24.477 13:49:38 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:15:24.477 13:49:38 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:15:24.477 13:49:38 ublk_recovery -- lvol/common.sh@14 
-- # LVS_DEFAULT_CAPACITY=130023424 00:15:24.477 13:49:38 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:15:24.477 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:24.477 13:49:38 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=71352 00:15:24.477 13:49:38 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:24.477 13:49:38 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 71352 00:15:24.477 13:49:38 ublk_recovery -- common/autotest_common.sh@831 -- # '[' -z 71352 ']' 00:15:24.477 13:49:38 ublk_recovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:24.477 13:49:38 ublk_recovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:24.477 13:49:38 ublk_recovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:24.477 13:49:38 ublk_recovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:24.477 13:49:38 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:24.477 13:49:38 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:15:24.477 [2024-10-15 13:49:38.254425] Starting SPDK v25.01-pre git sha1 5a8c76d99 / DPDK 24.03.0 initialization... 00:15:24.477 [2024-10-15 13:49:38.254561] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71352 ] 00:15:24.735 [2024-10-15 13:49:38.404628] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:24.735 [2024-10-15 13:49:38.505063] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:24.735 [2024-10-15 13:49:38.505140] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:25.300 13:49:39 ublk_recovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:25.300 13:49:39 ublk_recovery -- common/autotest_common.sh@864 -- # return 0 00:15:25.300 13:49:39 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:15:25.300 13:49:39 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.300 13:49:39 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:25.558 [2024-10-15 13:49:39.088241] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:15:25.558 [2024-10-15 13:49:39.089992] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:15:25.558 13:49:39 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.558 13:49:39 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:15:25.558 13:49:39 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.558 13:49:39 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:25.558 malloc0 00:15:25.558 13:49:39 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.558 13:49:39 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:15:25.558 13:49:39 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.558 13:49:39 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:25.558 [2024-10-15 13:49:39.183370] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 
2 queue_depth 128 00:15:25.558 [2024-10-15 13:49:39.183460] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:15:25.558 [2024-10-15 13:49:39.183470] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:15:25.558 [2024-10-15 13:49:39.183476] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:15:25.558 [2024-10-15 13:49:39.191260] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:15:25.558 [2024-10-15 13:49:39.191278] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:15:25.558 [2024-10-15 13:49:39.199252] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:15:25.558 [2024-10-15 13:49:39.199380] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:15:25.558 [2024-10-15 13:49:39.221251] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:15:25.558 1 00:15:25.558 13:49:39 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.558 13:49:39 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:15:26.490 13:49:40 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=71382 00:15:26.490 13:49:40 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:15:26.490 13:49:40 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:15:26.748 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:26.748 fio-3.35 00:15:26.748 Starting 1 process 00:15:32.006 13:49:45 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 71352 00:15:32.006 13:49:45 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:15:37.263 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 71352 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:15:37.263 13:49:50 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=71497 00:15:37.263 13:49:50 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:37.263 13:49:50 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:15:37.263 13:49:50 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 71497 00:15:37.263 13:49:50 ublk_recovery -- common/autotest_common.sh@831 -- # '[' -z 71497 ']' 00:15:37.263 13:49:50 ublk_recovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:37.263 13:49:50 ublk_recovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:37.263 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:37.263 13:49:50 ublk_recovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:37.263 13:49:50 ublk_recovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:37.263 13:49:50 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:37.263 [2024-10-15 13:49:50.308478] Starting SPDK v25.01-pre git sha1 5a8c76d99 / DPDK 24.03.0 initialization... 
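This second target start is the heart of the recovery test: put fio I/O on a ublk disk, hard-kill the target, restart it, and recover the disk in place. A condensed sketch of that flow, assembled purely from the commands captured in the surrounding trace (the pids 71352 and 71497 are the ones from this run):

  # sketch of the captured flow, not additional output
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk &   # pid 71352
  rpc.py ublk_create_target
  rpc.py bdev_malloc_create -b malloc0 64 4096
  rpc.py ublk_start_disk malloc0 1 -q 2 -d 128                       # exposes /dev/ublkb1
  taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 \
      --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 \
      --time_based --runtime=60 &
  kill -9 71352                        # hard-kill the target mid-I/O
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk &   # pid 71497
  rpc.py ublk_create_target
  rpc.py bdev_malloc_create -b malloc0 64 4096
  rpc.py ublk_recover_disk malloc0 1   # START_USER_RECOVERY / END_USER_RECOVERY below

The fio job survives the restart: the 60-second run that follows finishes with err=0, which is exactly what the recovery path is meant to guarantee.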
00:15:37.263 [2024-10-15 13:49:50.308570] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71497 ] 00:15:37.263 [2024-10-15 13:49:50.453585] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:37.263 [2024-10-15 13:49:50.554474] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:37.263 [2024-10-15 13:49:50.554537] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:37.520 13:49:51 ublk_recovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:37.520 13:49:51 ublk_recovery -- common/autotest_common.sh@864 -- # return 0 00:15:37.520 13:49:51 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:15:37.520 13:49:51 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.520 13:49:51 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:37.520 [2024-10-15 13:49:51.165243] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:15:37.520 [2024-10-15 13:49:51.167105] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:15:37.520 13:49:51 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.520 13:49:51 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:15:37.520 13:49:51 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.520 13:49:51 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:37.520 malloc0 00:15:37.520 13:49:51 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.520 13:49:51 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:15:37.520 13:49:51 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.520 13:49:51 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:37.520 [2024-10-15 13:49:51.266394] ublk.c:2106:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:15:37.520 [2024-10-15 13:49:51.266432] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:15:37.520 [2024-10-15 13:49:51.266442] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:15:37.520 [2024-10-15 13:49:51.274288] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:15:37.520 [2024-10-15 13:49:51.274318] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 2 00:15:37.520 [2024-10-15 13:49:51.274326] ublk.c:2035:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:15:37.520 [2024-10-15 13:49:51.274404] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:15:37.520 1 00:15:37.520 13:49:51 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.520 13:49:51 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 71382 00:15:37.520 [2024-10-15 13:49:51.282252] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:15:37.520 [2024-10-15 13:49:51.288830] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:15:37.520 [2024-10-15 13:49:51.296464] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:15:37.520 [2024-10-15 
13:49:51.296490] ublk.c: 413:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:16:33.728 00:16:33.728 fio_test: (groupid=0, jobs=1): err= 0: pid=71385: Tue Oct 15 13:50:40 2024 00:16:33.728 read: IOPS=23.9k, BW=93.3MiB/s (97.8MB/s)(5596MiB/60001msec) 00:16:33.728 slat (nsec): min=1051, max=245005, avg=5433.68, stdev=1821.67 00:16:33.728 clat (usec): min=585, max=6069.6k, avg=2642.82, stdev=41170.81 00:16:33.728 lat (usec): min=589, max=6069.6k, avg=2648.25, stdev=41170.81 00:16:33.728 clat percentiles (usec): 00:16:33.728 | 1.00th=[ 1811], 5.00th=[ 1893], 10.00th=[ 1991], 20.00th=[ 2114], 00:16:33.728 | 30.00th=[ 2180], 40.00th=[ 2212], 50.00th=[ 2278], 60.00th=[ 2311], 00:16:33.728 | 70.00th=[ 2343], 80.00th=[ 2409], 90.00th=[ 2671], 95.00th=[ 3294], 00:16:33.728 | 99.00th=[ 5014], 99.50th=[ 5866], 99.90th=[ 7373], 99.95th=[ 8455], 00:16:33.728 | 99.99th=[13173] 00:16:33.728 bw ( KiB/s): min=12440, max=129344, per=100.00%, avg=105242.17, stdev=14617.54, samples=108 00:16:33.728 iops : min= 3110, max=32336, avg=26310.54, stdev=3654.39, samples=108 00:16:33.728 write: IOPS=23.8k, BW=93.2MiB/s (97.7MB/s)(5590MiB/60001msec); 0 zone resets 00:16:33.728 slat (nsec): min=1094, max=277587, avg=5649.16, stdev=1771.41 00:16:33.728 clat (usec): min=609, max=6069.7k, avg=2709.14, stdev=39927.72 00:16:33.728 lat (usec): min=613, max=6069.7k, avg=2714.79, stdev=39927.71 00:16:33.728 clat percentiles (usec): 00:16:33.728 | 1.00th=[ 1876], 5.00th=[ 1975], 10.00th=[ 2073], 20.00th=[ 2212], 00:16:33.728 | 30.00th=[ 2278], 40.00th=[ 2311], 50.00th=[ 2376], 60.00th=[ 2409], 00:16:33.728 | 70.00th=[ 2442], 80.00th=[ 2474], 90.00th=[ 2737], 95.00th=[ 3261], 00:16:33.728 | 99.00th=[ 4948], 99.50th=[ 5997], 99.90th=[ 7308], 99.95th=[ 8455], 00:16:33.728 | 99.99th=[13173] 00:16:33.728 bw ( KiB/s): min=13240, max=127872, per=100.00%, avg=105116.86, stdev=14469.70, samples=108 00:16:33.728 iops : min= 3310, max=31968, avg=26279.21, stdev=3617.43, samples=108 00:16:33.728 lat (usec) : 750=0.01%, 1000=0.01% 00:16:33.728 lat (msec) : 2=8.54%, 4=88.59%, 10=2.85%, 20=0.02%, >=2000=0.01% 00:16:33.728 cpu : usr=5.66%, sys=27.18%, ctx=92877, majf=0, minf=14 00:16:33.728 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:16:33.728 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:33.728 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:33.728 issued rwts: total=1432604,1430949,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:33.728 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:33.728 00:16:33.728 Run status group 0 (all jobs): 00:16:33.728 READ: bw=93.3MiB/s (97.8MB/s), 93.3MiB/s-93.3MiB/s (97.8MB/s-97.8MB/s), io=5596MiB (5868MB), run=60001-60001msec 00:16:33.728 WRITE: bw=93.2MiB/s (97.7MB/s), 93.2MiB/s-93.2MiB/s (97.7MB/s-97.7MB/s), io=5590MiB (5861MB), run=60001-60001msec 00:16:33.728 00:16:33.728 Disk stats (read/write): 00:16:33.728 ublkb1: ios=1429876/1428128, merge=0/0, ticks=3687689/3657325, in_queue=7345015, util=99.90% 00:16:33.728 13:50:40 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:16:33.728 13:50:40 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.728 13:50:40 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:33.728 [2024-10-15 13:50:40.487244] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:16:33.728 [2024-10-15 13:50:40.525386] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd 
UBLK_CMD_STOP_DEV completed 00:16:33.728 [2024-10-15 13:50:40.525547] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:16:33.728 [2024-10-15 13:50:40.529521] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:33.728 [2024-10-15 13:50:40.529626] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:16:33.728 [2024-10-15 13:50:40.529635] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:16:33.728 13:50:40 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.728 13:50:40 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:16:33.728 13:50:40 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.728 13:50:40 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:33.728 [2024-10-15 13:50:40.547334] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:16:33.728 [2024-10-15 13:50:40.556245] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:16:33.728 [2024-10-15 13:50:40.556281] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:16:33.728 13:50:40 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.728 13:50:40 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:16:33.728 13:50:40 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:16:33.728 13:50:40 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 71497 00:16:33.728 13:50:40 ublk_recovery -- common/autotest_common.sh@950 -- # '[' -z 71497 ']' 00:16:33.728 13:50:40 ublk_recovery -- common/autotest_common.sh@954 -- # kill -0 71497 00:16:33.728 13:50:40 ublk_recovery -- common/autotest_common.sh@955 -- # uname 00:16:33.728 13:50:40 ublk_recovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:33.728 13:50:40 ublk_recovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71497 00:16:33.728 13:50:40 ublk_recovery -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:33.728 13:50:40 ublk_recovery -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:33.728 killing process with pid 71497 00:16:33.728 13:50:40 ublk_recovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71497' 00:16:33.728 13:50:40 ublk_recovery -- common/autotest_common.sh@969 -- # kill 71497 00:16:33.728 13:50:40 ublk_recovery -- common/autotest_common.sh@974 -- # wait 71497 00:16:33.728 [2024-10-15 13:50:41.649159] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:16:33.728 [2024-10-15 13:50:41.649208] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:16:33.728 00:16:33.728 real 1m4.378s 00:16:33.728 user 1m40.488s 00:16:33.728 sys 0m37.181s 00:16:33.728 13:50:42 ublk_recovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:33.728 13:50:42 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:33.728 ************************************ 00:16:33.728 END TEST ublk_recovery 00:16:33.728 ************************************ 00:16:33.728 13:50:42 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:16:33.728 13:50:42 -- spdk/autotest.sh@256 -- # timing_exit lib 00:16:33.728 13:50:42 -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:33.728 13:50:42 -- common/autotest_common.sh@10 -- # set +x 00:16:33.728 13:50:42 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:16:33.728 13:50:42 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:16:33.728 13:50:42 -- spdk/autotest.sh@272 -- # '[' 0 -eq 1 ']' 00:16:33.728 13:50:42 -- 
spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:16:33.728 13:50:42 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:16:33.728 13:50:42 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:16:33.728 13:50:42 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:16:33.728 13:50:42 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:16:33.729 13:50:42 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:16:33.729 13:50:42 -- spdk/autotest.sh@338 -- # '[' 1 -eq 1 ']' 00:16:33.729 13:50:42 -- spdk/autotest.sh@339 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:16:33.729 13:50:42 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:16:33.729 13:50:42 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:33.729 13:50:42 -- common/autotest_common.sh@10 -- # set +x 00:16:33.729 ************************************ 00:16:33.729 START TEST ftl 00:16:33.729 ************************************ 00:16:33.729 13:50:42 ftl -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:16:33.729 * Looking for test storage... 00:16:33.729 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:16:33.729 13:50:42 ftl -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:16:33.729 13:50:42 ftl -- common/autotest_common.sh@1691 -- # lcov --version 00:16:33.729 13:50:42 ftl -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:16:33.729 13:50:42 ftl -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:16:33.729 13:50:42 ftl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:33.729 13:50:42 ftl -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:33.729 13:50:42 ftl -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:33.729 13:50:42 ftl -- scripts/common.sh@336 -- # IFS=.-: 00:16:33.729 13:50:42 ftl -- scripts/common.sh@336 -- # read -ra ver1 00:16:33.729 13:50:42 ftl -- scripts/common.sh@337 -- # IFS=.-: 00:16:33.729 13:50:42 ftl -- scripts/common.sh@337 -- # read -ra ver2 00:16:33.729 13:50:42 ftl -- scripts/common.sh@338 -- # local 'op=<' 00:16:33.729 13:50:42 ftl -- scripts/common.sh@340 -- # ver1_l=2 00:16:33.729 13:50:42 ftl -- scripts/common.sh@341 -- # ver2_l=1 00:16:33.729 13:50:42 ftl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:33.729 13:50:42 ftl -- scripts/common.sh@344 -- # case "$op" in 00:16:33.729 13:50:42 ftl -- scripts/common.sh@345 -- # : 1 00:16:33.729 13:50:42 ftl -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:33.729 13:50:42 ftl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:33.729 13:50:42 ftl -- scripts/common.sh@365 -- # decimal 1 00:16:33.729 13:50:42 ftl -- scripts/common.sh@353 -- # local d=1 00:16:33.729 13:50:42 ftl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:33.729 13:50:42 ftl -- scripts/common.sh@355 -- # echo 1 00:16:33.729 13:50:42 ftl -- scripts/common.sh@365 -- # ver1[v]=1 00:16:33.729 13:50:42 ftl -- scripts/common.sh@366 -- # decimal 2 00:16:33.729 13:50:42 ftl -- scripts/common.sh@353 -- # local d=2 00:16:33.729 13:50:42 ftl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:33.729 13:50:42 ftl -- scripts/common.sh@355 -- # echo 2 00:16:33.729 13:50:42 ftl -- scripts/common.sh@366 -- # ver2[v]=2 00:16:33.729 13:50:42 ftl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:33.729 13:50:42 ftl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:33.729 13:50:42 ftl -- scripts/common.sh@368 -- # return 0 00:16:33.729 13:50:42 ftl -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:33.729 13:50:42 ftl -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:16:33.729 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:33.729 --rc genhtml_branch_coverage=1 00:16:33.729 --rc genhtml_function_coverage=1 00:16:33.729 --rc genhtml_legend=1 00:16:33.729 --rc geninfo_all_blocks=1 00:16:33.729 --rc geninfo_unexecuted_blocks=1 00:16:33.729 00:16:33.729 ' 00:16:33.729 13:50:42 ftl -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:16:33.729 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:33.729 --rc genhtml_branch_coverage=1 00:16:33.729 --rc genhtml_function_coverage=1 00:16:33.729 --rc genhtml_legend=1 00:16:33.729 --rc geninfo_all_blocks=1 00:16:33.729 --rc geninfo_unexecuted_blocks=1 00:16:33.729 00:16:33.729 ' 00:16:33.729 13:50:42 ftl -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:16:33.729 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:33.729 --rc genhtml_branch_coverage=1 00:16:33.729 --rc genhtml_function_coverage=1 00:16:33.729 --rc genhtml_legend=1 00:16:33.729 --rc geninfo_all_blocks=1 00:16:33.729 --rc geninfo_unexecuted_blocks=1 00:16:33.729 00:16:33.729 ' 00:16:33.729 13:50:42 ftl -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:16:33.729 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:33.729 --rc genhtml_branch_coverage=1 00:16:33.729 --rc genhtml_function_coverage=1 00:16:33.729 --rc genhtml_legend=1 00:16:33.729 --rc geninfo_all_blocks=1 00:16:33.729 --rc geninfo_unexecuted_blocks=1 00:16:33.729 00:16:33.729 ' 00:16:33.729 13:50:42 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:16:33.729 13:50:42 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:16:33.729 13:50:42 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:16:33.729 13:50:42 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:16:33.729 13:50:42 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:16:33.729 13:50:42 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:16:33.729 13:50:42 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:33.729 13:50:42 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:16:33.729 13:50:42 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:16:33.729 13:50:42 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:33.729 13:50:42 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:33.729 13:50:42 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:16:33.729 13:50:42 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:16:33.729 13:50:42 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:33.729 13:50:42 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:33.729 13:50:42 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:16:33.729 13:50:42 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:16:33.729 13:50:42 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:33.729 13:50:42 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:33.729 13:50:42 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:16:33.729 13:50:42 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:16:33.729 13:50:42 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:33.729 13:50:42 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:33.729 13:50:42 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:33.729 13:50:42 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:33.729 13:50:42 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:16:33.729 13:50:42 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 00:16:33.729 13:50:42 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:33.729 13:50:42 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:33.729 13:50:42 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:33.729 13:50:42 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:16:33.729 13:50:42 ftl -- ftl/ftl.sh@34 -- # PCI_ALLOWED= 00:16:33.729 13:50:42 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:16:33.729 13:50:42 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:16:33.729 13:50:42 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:16:33.729 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:33.729 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:16:33.729 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:16:33.729 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:16:33.729 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:16:33.729 13:50:43 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=72302 00:16:33.729 13:50:43 ftl -- ftl/ftl.sh@38 -- # waitforlisten 72302 00:16:33.729 13:50:43 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:16:33.729 13:50:43 ftl -- common/autotest_common.sh@831 -- # '[' -z 72302 ']' 00:16:33.729 13:50:43 ftl -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:33.729 13:50:43 ftl -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:33.729 13:50:43 ftl -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:33.729 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:33.729 13:50:43 ftl -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:33.729 13:50:43 ftl -- common/autotest_common.sh@10 -- # set +x 00:16:33.729 [2024-10-15 13:50:43.147059] Starting SPDK v25.01-pre git sha1 5a8c76d99 / DPDK 24.03.0 initialization... 00:16:33.729 [2024-10-15 13:50:43.147173] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72302 ] 00:16:33.729 [2024-10-15 13:50:43.297248] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:33.729 [2024-10-15 13:50:43.394480] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:33.729 13:50:43 ftl -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:33.729 13:50:43 ftl -- common/autotest_common.sh@864 -- # return 0 00:16:33.729 13:50:43 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:16:33.729 13:50:44 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:16:33.729 13:50:44 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:16:33.729 13:50:44 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:16:33.729 13:50:45 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:16:33.729 13:50:45 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:16:33.729 13:50:45 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:16:33.729 13:50:45 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:16:33.729 13:50:45 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:16:33.729 13:50:45 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:16:33.729 13:50:45 ftl -- ftl/ftl.sh@50 -- # break 00:16:33.729 13:50:45 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:16:33.729 13:50:45 ftl -- ftl/ftl.sh@59 -- # base_size=1310720 00:16:33.729 13:50:45 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:16:33.730 13:50:45 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:16:33.730 13:50:45 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:16:33.730 13:50:45 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:16:33.730 13:50:45 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:16:33.730 13:50:45 ftl -- ftl/ftl.sh@63 -- # break 00:16:33.730 13:50:45 ftl -- ftl/ftl.sh@66 -- # killprocess 72302 00:16:33.730 13:50:45 ftl -- common/autotest_common.sh@950 -- # '[' -z 72302 ']' 00:16:33.730 13:50:45 ftl -- common/autotest_common.sh@954 -- # kill -0 72302 00:16:33.730 13:50:45 ftl -- common/autotest_common.sh@955 -- # uname 00:16:33.730 13:50:45 ftl -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:33.730 13:50:45 ftl -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72302 00:16:33.730 killing process with pid 72302 00:16:33.730 13:50:45 ftl -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:33.730 13:50:45 ftl -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:33.730 13:50:45 ftl -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72302' 00:16:33.730 13:50:45 ftl -- common/autotest_common.sh@969 -- # kill 72302 00:16:33.730 13:50:45 ftl -- common/autotest_common.sh@974 -- # wait 72302 00:16:33.730 13:50:47 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:16:33.730 13:50:47 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:16:33.730 13:50:47 ftl -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:16:33.730 13:50:47 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:33.730 13:50:47 ftl -- common/autotest_common.sh@10 -- # set +x 00:16:33.730 ************************************ 00:16:33.730 START TEST ftl_fio_basic 00:16:33.730 ************************************ 00:16:33.730 13:50:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:16:33.730 * Looking for test storage... 00:16:33.730 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:16:33.730 13:50:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:16:33.730 13:50:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:16:33.730 13:50:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1691 -- # lcov --version 00:16:33.730 13:50:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:16:33.730 13:50:47 ftl.ftl_fio_basic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:33.730 13:50:47 ftl.ftl_fio_basic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:33.730 13:50:47 ftl.ftl_fio_basic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:33.730 13:50:47 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # IFS=.-: 00:16:33.730 13:50:47 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # read -ra ver1 00:16:33.730 13:50:47 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # IFS=.-: 00:16:33.730 13:50:47 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # read -ra ver2 00:16:33.730 13:50:47 ftl.ftl_fio_basic -- scripts/common.sh@338 -- # local 'op=<' 00:16:33.730 13:50:47 ftl.ftl_fio_basic -- scripts/common.sh@340 -- # ver1_l=2 00:16:33.730 13:50:47 ftl.ftl_fio_basic -- scripts/common.sh@341 -- # ver2_l=1 00:16:33.730 13:50:47 ftl.ftl_fio_basic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:33.730 13:50:47 ftl.ftl_fio_basic -- scripts/common.sh@344 -- # case "$op" in 00:16:33.730 13:50:47 ftl.ftl_fio_basic -- scripts/common.sh@345 -- # : 1 00:16:33.730 13:50:47 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:33.730 13:50:47 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:33.730 13:50:47 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # decimal 1 00:16:33.730 13:50:47 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=1 00:16:33.730 13:50:47 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:33.730 13:50:47 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 1 00:16:33.730 13:50:47 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # ver1[v]=1 00:16:33.730 13:50:47 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # decimal 2 00:16:33.730 13:50:47 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=2 00:16:33.730 13:50:47 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:33.730 13:50:47 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 2 00:16:33.730 13:50:47 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # ver2[v]=2 00:16:33.730 13:50:47 ftl.ftl_fio_basic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:33.730 13:50:47 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:33.730 13:50:47 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # return 0 00:16:33.730 13:50:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:33.730 13:50:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:16:33.730 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:33.730 --rc genhtml_branch_coverage=1 00:16:33.730 --rc genhtml_function_coverage=1 00:16:33.730 --rc genhtml_legend=1 00:16:33.730 --rc geninfo_all_blocks=1 00:16:33.730 --rc geninfo_unexecuted_blocks=1 00:16:33.730 00:16:33.730 ' 00:16:33.730 13:50:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:16:33.730 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:33.730 --rc genhtml_branch_coverage=1 00:16:33.730 --rc genhtml_function_coverage=1 00:16:33.730 --rc genhtml_legend=1 00:16:33.730 --rc geninfo_all_blocks=1 00:16:33.730 --rc geninfo_unexecuted_blocks=1 00:16:33.730 00:16:33.730 ' 00:16:33.730 13:50:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:16:33.730 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:33.730 --rc genhtml_branch_coverage=1 00:16:33.730 --rc genhtml_function_coverage=1 00:16:33.730 --rc genhtml_legend=1 00:16:33.730 --rc geninfo_all_blocks=1 00:16:33.730 --rc geninfo_unexecuted_blocks=1 00:16:33.730 00:16:33.730 ' 00:16:33.730 13:50:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:16:33.730 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:33.730 --rc genhtml_branch_coverage=1 00:16:33.730 --rc genhtml_function_coverage=1 00:16:33.730 --rc genhtml_legend=1 00:16:33.730 --rc geninfo_all_blocks=1 00:16:33.730 --rc geninfo_unexecuted_blocks=1 00:16:33.730 00:16:33.730 ' 00:16:33.730 13:50:47 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:16:33.730 13:50:47 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:16:33.730 13:50:47 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:16:33.730 13:50:47 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:16:33.730 13:50:47 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:16:33.730 13:50:47 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:16:33.730 13:50:47 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:33.730 13:50:47 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:16:33.730 13:50:47 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:16:33.730 13:50:47 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:33.730 13:50:47 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:33.730 13:50:47 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:16:33.730 13:50:47 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:16:33.730 13:50:47 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:33.730 13:50:47 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:33.730 13:50:47 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:16:33.730 13:50:47 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:16:33.730 13:50:47 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:33.731 13:50:47 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:33.731 13:50:47 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:16:33.731 13:50:47 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:16:33.731 13:50:47 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:33.731 13:50:47 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:33.731 13:50:47 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:33.731 13:50:47 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:33.731 13:50:47 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:16:33.731 13:50:47 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:16:33.731 13:50:47 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:33.731 13:50:47 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:33.731 13:50:47 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:16:33.731 13:50:47 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:16:33.731 13:50:47 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:16:33.731 13:50:47 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:16:33.731 13:50:47 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:33.731 13:50:47 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:16:33.731 13:50:47 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:16:33.731 13:50:47 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 
randw-verify-depth128' 00:16:33.731 13:50:47 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:16:33.731 13:50:47 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:16:33.731 13:50:47 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:16:33.731 13:50:47 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:16:33.731 13:50:47 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:16:33.731 13:50:47 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:16:33.731 13:50:47 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:33.731 13:50:47 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:33.731 13:50:47 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:16:33.731 13:50:47 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=72430 00:16:33.731 13:50:47 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:16:33.731 13:50:47 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 72430 00:16:33.731 13:50:47 ftl.ftl_fio_basic -- common/autotest_common.sh@831 -- # '[' -z 72430 ']' 00:16:33.731 13:50:47 ftl.ftl_fio_basic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:33.731 13:50:47 ftl.ftl_fio_basic -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:33.731 13:50:47 ftl.ftl_fio_basic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:33.731 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:33.731 13:50:47 ftl.ftl_fio_basic -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:33.731 13:50:47 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:33.731 [2024-10-15 13:50:47.301603] Starting SPDK v25.01-pre git sha1 5a8c76d99 / DPDK 24.03.0 initialization... 
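The base-device bring-up that follows reduces to three rpc.py calls: attach the 0000:00:11.0 namespace, stand up an lvstore on it, and carve out a thin-provisioned split for FTL to sit on. A condensed sketch taken from the captured commands (the lvstore uuid is the one this run generated, not a fixed value):

  # sketch of the captured setup, not additional output
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  "$rpc" bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0   # -> nvme0n1
  "$rpc" bdev_lvol_create_lvstore nvme0n1 lvs
  # 103424 MiB, thin-provisioned (-t), so it may oversubscribe the 5120 MiB namespace
  "$rpc" bdev_lvol_create nvme0n1p0 103424 -t -u 1aac743c-243c-467d-bfbf-6f81abbcfbac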
00:16:33.731 [2024-10-15 13:50:47.301864] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72430 ] 00:16:33.731 [2024-10-15 13:50:47.458370] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:33.989 [2024-10-15 13:50:47.543930] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:33.989 [2024-10-15 13:50:47.543841] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:33.989 [2024-10-15 13:50:47.543955] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:34.561 13:50:48 ftl.ftl_fio_basic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:34.561 13:50:48 ftl.ftl_fio_basic -- common/autotest_common.sh@864 -- # return 0 00:16:34.561 13:50:48 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:16:34.561 13:50:48 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:16:34.561 13:50:48 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:16:34.561 13:50:48 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:16:34.561 13:50:48 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:16:34.561 13:50:48 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:16:34.821 13:50:48 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:16:34.821 13:50:48 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:16:34.821 13:50:48 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:16:34.821 13:50:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:16:34.821 13:50:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:34.821 13:50:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:16:34.821 13:50:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:16:34.821 13:50:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:16:35.083 13:50:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:35.083 { 00:16:35.083 "name": "nvme0n1", 00:16:35.083 "aliases": [ 00:16:35.083 "af6499d7-bdb8-4814-ab58-df4d8ef6b6be" 00:16:35.083 ], 00:16:35.083 "product_name": "NVMe disk", 00:16:35.083 "block_size": 4096, 00:16:35.083 "num_blocks": 1310720, 00:16:35.083 "uuid": "af6499d7-bdb8-4814-ab58-df4d8ef6b6be", 00:16:35.083 "numa_id": -1, 00:16:35.083 "assigned_rate_limits": { 00:16:35.083 "rw_ios_per_sec": 0, 00:16:35.083 "rw_mbytes_per_sec": 0, 00:16:35.083 "r_mbytes_per_sec": 0, 00:16:35.083 "w_mbytes_per_sec": 0 00:16:35.083 }, 00:16:35.083 "claimed": false, 00:16:35.083 "zoned": false, 00:16:35.083 "supported_io_types": { 00:16:35.083 "read": true, 00:16:35.083 "write": true, 00:16:35.083 "unmap": true, 00:16:35.083 "flush": true, 00:16:35.083 "reset": true, 00:16:35.083 "nvme_admin": true, 00:16:35.083 "nvme_io": true, 00:16:35.083 "nvme_io_md": false, 00:16:35.083 "write_zeroes": true, 00:16:35.083 "zcopy": false, 00:16:35.083 "get_zone_info": false, 00:16:35.083 "zone_management": false, 00:16:35.083 "zone_append": false, 00:16:35.083 "compare": true, 00:16:35.083 "compare_and_write": false, 00:16:35.083 "abort": true, 00:16:35.083 
"seek_hole": false, 00:16:35.083 "seek_data": false, 00:16:35.083 "copy": true, 00:16:35.083 "nvme_iov_md": false 00:16:35.083 }, 00:16:35.083 "driver_specific": { 00:16:35.083 "nvme": [ 00:16:35.083 { 00:16:35.083 "pci_address": "0000:00:11.0", 00:16:35.083 "trid": { 00:16:35.083 "trtype": "PCIe", 00:16:35.083 "traddr": "0000:00:11.0" 00:16:35.083 }, 00:16:35.083 "ctrlr_data": { 00:16:35.083 "cntlid": 0, 00:16:35.083 "vendor_id": "0x1b36", 00:16:35.083 "model_number": "QEMU NVMe Ctrl", 00:16:35.083 "serial_number": "12341", 00:16:35.083 "firmware_revision": "8.0.0", 00:16:35.083 "subnqn": "nqn.2019-08.org.qemu:12341", 00:16:35.083 "oacs": { 00:16:35.083 "security": 0, 00:16:35.083 "format": 1, 00:16:35.083 "firmware": 0, 00:16:35.083 "ns_manage": 1 00:16:35.083 }, 00:16:35.083 "multi_ctrlr": false, 00:16:35.083 "ana_reporting": false 00:16:35.083 }, 00:16:35.083 "vs": { 00:16:35.083 "nvme_version": "1.4" 00:16:35.083 }, 00:16:35.083 "ns_data": { 00:16:35.083 "id": 1, 00:16:35.083 "can_share": false 00:16:35.083 } 00:16:35.083 } 00:16:35.083 ], 00:16:35.083 "mp_policy": "active_passive" 00:16:35.083 } 00:16:35.083 } 00:16:35.083 ]' 00:16:35.083 13:50:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:35.083 13:50:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:16:35.083 13:50:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:35.083 13:50:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=1310720 00:16:35.083 13:50:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:16:35.083 13:50:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 5120 00:16:35.083 13:50:48 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:16:35.083 13:50:48 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:16:35.083 13:50:48 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:16:35.083 13:50:48 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:16:35.083 13:50:48 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:16:35.343 13:50:48 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:16:35.343 13:50:48 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:16:35.343 13:50:49 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=1aac743c-243c-467d-bfbf-6f81abbcfbac 00:16:35.343 13:50:49 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 1aac743c-243c-467d-bfbf-6f81abbcfbac 00:16:35.603 13:50:49 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=6169ff4e-ae07-4d03-bd51-dad6e745386d 00:16:35.603 13:50:49 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 6169ff4e-ae07-4d03-bd51-dad6e745386d 00:16:35.603 13:50:49 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:16:35.603 13:50:49 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:16:35.603 13:50:49 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=6169ff4e-ae07-4d03-bd51-dad6e745386d 00:16:35.603 13:50:49 ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:16:35.603 13:50:49 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size 6169ff4e-ae07-4d03-bd51-dad6e745386d 00:16:35.603 13:50:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=6169ff4e-ae07-4d03-bd51-dad6e745386d 
00:16:35.603 13:50:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:35.603 13:50:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:16:35.603 13:50:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:16:35.603 13:50:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 6169ff4e-ae07-4d03-bd51-dad6e745386d 00:16:35.864 13:50:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:35.864 { 00:16:35.864 "name": "6169ff4e-ae07-4d03-bd51-dad6e745386d", 00:16:35.864 "aliases": [ 00:16:35.864 "lvs/nvme0n1p0" 00:16:35.864 ], 00:16:35.864 "product_name": "Logical Volume", 00:16:35.864 "block_size": 4096, 00:16:35.864 "num_blocks": 26476544, 00:16:35.864 "uuid": "6169ff4e-ae07-4d03-bd51-dad6e745386d", 00:16:35.864 "assigned_rate_limits": { 00:16:35.864 "rw_ios_per_sec": 0, 00:16:35.864 "rw_mbytes_per_sec": 0, 00:16:35.864 "r_mbytes_per_sec": 0, 00:16:35.864 "w_mbytes_per_sec": 0 00:16:35.864 }, 00:16:35.864 "claimed": false, 00:16:35.864 "zoned": false, 00:16:35.864 "supported_io_types": { 00:16:35.864 "read": true, 00:16:35.864 "write": true, 00:16:35.864 "unmap": true, 00:16:35.864 "flush": false, 00:16:35.864 "reset": true, 00:16:35.864 "nvme_admin": false, 00:16:35.864 "nvme_io": false, 00:16:35.864 "nvme_io_md": false, 00:16:35.864 "write_zeroes": true, 00:16:35.864 "zcopy": false, 00:16:35.864 "get_zone_info": false, 00:16:35.864 "zone_management": false, 00:16:35.864 "zone_append": false, 00:16:35.864 "compare": false, 00:16:35.864 "compare_and_write": false, 00:16:35.864 "abort": false, 00:16:35.864 "seek_hole": true, 00:16:35.864 "seek_data": true, 00:16:35.864 "copy": false, 00:16:35.864 "nvme_iov_md": false 00:16:35.864 }, 00:16:35.864 "driver_specific": { 00:16:35.864 "lvol": { 00:16:35.864 "lvol_store_uuid": "1aac743c-243c-467d-bfbf-6f81abbcfbac", 00:16:35.864 "base_bdev": "nvme0n1", 00:16:35.864 "thin_provision": true, 00:16:35.864 "num_allocated_clusters": 0, 00:16:35.864 "snapshot": false, 00:16:35.864 "clone": false, 00:16:35.864 "esnap_clone": false 00:16:35.864 } 00:16:35.864 } 00:16:35.864 } 00:16:35.864 ]' 00:16:35.865 13:50:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:35.865 13:50:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:16:35.865 13:50:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:35.865 13:50:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=26476544 00:16:35.865 13:50:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:16:35.865 13:50:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 103424 00:16:35.865 13:50:49 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:16:35.865 13:50:49 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:16:35.865 13:50:49 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:16:36.125 13:50:49 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:16:36.125 13:50:49 ftl.ftl_fio_basic -- ftl/common.sh@47 -- # [[ -z '' ]] 00:16:36.125 13:50:49 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size 6169ff4e-ae07-4d03-bd51-dad6e745386d 00:16:36.125 13:50:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=6169ff4e-ae07-4d03-bd51-dad6e745386d 00:16:36.125 13:50:49 
ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:36.125 13:50:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:16:36.125 13:50:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:16:36.125 13:50:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 6169ff4e-ae07-4d03-bd51-dad6e745386d 00:16:36.411 13:50:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:36.411 { 00:16:36.411 "name": "6169ff4e-ae07-4d03-bd51-dad6e745386d", 00:16:36.411 "aliases": [ 00:16:36.411 "lvs/nvme0n1p0" 00:16:36.411 ], 00:16:36.411 "product_name": "Logical Volume", 00:16:36.411 "block_size": 4096, 00:16:36.411 "num_blocks": 26476544, 00:16:36.411 "uuid": "6169ff4e-ae07-4d03-bd51-dad6e745386d", 00:16:36.411 "assigned_rate_limits": { 00:16:36.411 "rw_ios_per_sec": 0, 00:16:36.412 "rw_mbytes_per_sec": 0, 00:16:36.412 "r_mbytes_per_sec": 0, 00:16:36.412 "w_mbytes_per_sec": 0 00:16:36.412 }, 00:16:36.412 "claimed": false, 00:16:36.412 "zoned": false, 00:16:36.412 "supported_io_types": { 00:16:36.412 "read": true, 00:16:36.412 "write": true, 00:16:36.412 "unmap": true, 00:16:36.412 "flush": false, 00:16:36.412 "reset": true, 00:16:36.412 "nvme_admin": false, 00:16:36.412 "nvme_io": false, 00:16:36.412 "nvme_io_md": false, 00:16:36.412 "write_zeroes": true, 00:16:36.412 "zcopy": false, 00:16:36.412 "get_zone_info": false, 00:16:36.412 "zone_management": false, 00:16:36.412 "zone_append": false, 00:16:36.412 "compare": false, 00:16:36.412 "compare_and_write": false, 00:16:36.412 "abort": false, 00:16:36.412 "seek_hole": true, 00:16:36.412 "seek_data": true, 00:16:36.412 "copy": false, 00:16:36.412 "nvme_iov_md": false 00:16:36.412 }, 00:16:36.412 "driver_specific": { 00:16:36.412 "lvol": { 00:16:36.412 "lvol_store_uuid": "1aac743c-243c-467d-bfbf-6f81abbcfbac", 00:16:36.412 "base_bdev": "nvme0n1", 00:16:36.412 "thin_provision": true, 00:16:36.412 "num_allocated_clusters": 0, 00:16:36.412 "snapshot": false, 00:16:36.412 "clone": false, 00:16:36.412 "esnap_clone": false 00:16:36.412 } 00:16:36.412 } 00:16:36.412 } 00:16:36.412 ]' 00:16:36.412 13:50:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:36.412 13:50:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:16:36.412 13:50:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:36.412 13:50:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=26476544 00:16:36.412 13:50:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:16:36.412 13:50:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 103424 00:16:36.412 13:50:50 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:16:36.412 13:50:50 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:16:36.682 13:50:50 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:16:36.682 13:50:50 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60 00:16:36.682 13:50:50 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:16:36.682 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:16:36.682 13:50:50 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size 6169ff4e-ae07-4d03-bd51-dad6e745386d 00:16:36.682 13:50:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local 
bdev_name=6169ff4e-ae07-4d03-bd51-dad6e745386d 00:16:36.682 13:50:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:36.682 13:50:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:16:36.682 13:50:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:16:36.682 13:50:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 6169ff4e-ae07-4d03-bd51-dad6e745386d 00:16:36.944 13:50:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:36.944 { 00:16:36.944 "name": "6169ff4e-ae07-4d03-bd51-dad6e745386d", 00:16:36.944 "aliases": [ 00:16:36.944 "lvs/nvme0n1p0" 00:16:36.944 ], 00:16:36.944 "product_name": "Logical Volume", 00:16:36.944 "block_size": 4096, 00:16:36.944 "num_blocks": 26476544, 00:16:36.944 "uuid": "6169ff4e-ae07-4d03-bd51-dad6e745386d", 00:16:36.944 "assigned_rate_limits": { 00:16:36.944 "rw_ios_per_sec": 0, 00:16:36.944 "rw_mbytes_per_sec": 0, 00:16:36.944 "r_mbytes_per_sec": 0, 00:16:36.944 "w_mbytes_per_sec": 0 00:16:36.944 }, 00:16:36.944 "claimed": false, 00:16:36.944 "zoned": false, 00:16:36.944 "supported_io_types": { 00:16:36.944 "read": true, 00:16:36.944 "write": true, 00:16:36.944 "unmap": true, 00:16:36.944 "flush": false, 00:16:36.944 "reset": true, 00:16:36.944 "nvme_admin": false, 00:16:36.944 "nvme_io": false, 00:16:36.944 "nvme_io_md": false, 00:16:36.944 "write_zeroes": true, 00:16:36.944 "zcopy": false, 00:16:36.944 "get_zone_info": false, 00:16:36.944 "zone_management": false, 00:16:36.944 "zone_append": false, 00:16:36.944 "compare": false, 00:16:36.944 "compare_and_write": false, 00:16:36.944 "abort": false, 00:16:36.944 "seek_hole": true, 00:16:36.944 "seek_data": true, 00:16:36.944 "copy": false, 00:16:36.944 "nvme_iov_md": false 00:16:36.944 }, 00:16:36.944 "driver_specific": { 00:16:36.944 "lvol": { 00:16:36.944 "lvol_store_uuid": "1aac743c-243c-467d-bfbf-6f81abbcfbac", 00:16:36.944 "base_bdev": "nvme0n1", 00:16:36.944 "thin_provision": true, 00:16:36.944 "num_allocated_clusters": 0, 00:16:36.944 "snapshot": false, 00:16:36.944 "clone": false, 00:16:36.944 "esnap_clone": false 00:16:36.944 } 00:16:36.944 } 00:16:36.944 } 00:16:36.944 ]' 00:16:36.944 13:50:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:36.944 13:50:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:16:36.944 13:50:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:36.944 13:50:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=26476544 00:16:36.944 13:50:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:16:36.944 13:50:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 103424 00:16:36.944 13:50:50 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:16:36.944 13:50:50 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:16:36.944 13:50:50 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 6169ff4e-ae07-4d03-bd51-dad6e745386d -c nvc0n1p0 --l2p_dram_limit 60 00:16:37.206 [2024-10-15 13:50:50.847397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.206 [2024-10-15 13:50:50.847620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:37.207 [2024-10-15 13:50:50.847645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:37.207 
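The "line 52: [: -eq: unary operator expected" message above is a classic single-bracket pitfall rather than an FTL failure: the variable under test expanded to the empty string, so '[' saw '-eq 1' with no left operand and the script simply fell through to the else branch. Reduced to its essentials (the variable name here is hypothetical, not the one fio.sh uses):

    flag=                      # empty, as in the trace
    [ $flag -eq 1 ]            # expands to '[ -eq 1 ]' -> "unary operator expected"
    [ "${flag:-0}" -eq 1 ]     # one fix: default the empty value before comparing
    [[ $flag -eq 1 ]]          # another: [[ ]] evaluates an empty operand as 0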
[2024-10-15 13:50:50.847655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.207 [2024-10-15 13:50:50.847720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.207 [2024-10-15 13:50:50.847730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:37.207 [2024-10-15 13:50:50.847740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:16:37.207 [2024-10-15 13:50:50.847749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.207 [2024-10-15 13:50:50.847784] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:37.207 [2024-10-15 13:50:50.848520] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:37.207 [2024-10-15 13:50:50.848545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.207 [2024-10-15 13:50:50.848555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:37.207 [2024-10-15 13:50:50.848565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.773 ms 00:16:37.207 [2024-10-15 13:50:50.848572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.207 [2024-10-15 13:50:50.848705] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 179da044-db08-4010-acda-2ac6dc566573 00:16:37.207 [2024-10-15 13:50:50.849779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.207 [2024-10-15 13:50:50.849811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:16:37.207 [2024-10-15 13:50:50.849821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:16:37.207 [2024-10-15 13:50:50.849833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.207 [2024-10-15 13:50:50.855085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.207 [2024-10-15 13:50:50.855237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:37.207 [2024-10-15 13:50:50.855254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.202 ms 00:16:37.207 [2024-10-15 13:50:50.855263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.207 [2024-10-15 13:50:50.855364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.207 [2024-10-15 13:50:50.855379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:37.207 [2024-10-15 13:50:50.855389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:16:37.207 [2024-10-15 13:50:50.855401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.207 [2024-10-15 13:50:50.855463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.207 [2024-10-15 13:50:50.855479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:37.207 [2024-10-15 13:50:50.855487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:16:37.207 [2024-10-15 13:50:50.855496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.207 [2024-10-15 13:50:50.855524] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:37.207 [2024-10-15 13:50:50.859134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.207 [2024-10-15 
13:50:50.859161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:37.207 [2024-10-15 13:50:50.859173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.613 ms 00:16:37.207 [2024-10-15 13:50:50.859181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.207 [2024-10-15 13:50:50.859226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.207 [2024-10-15 13:50:50.859238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:37.207 [2024-10-15 13:50:50.859248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:16:37.207 [2024-10-15 13:50:50.859255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.207 [2024-10-15 13:50:50.859277] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:16:37.207 [2024-10-15 13:50:50.859419] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:16:37.207 [2024-10-15 13:50:50.859443] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:37.207 [2024-10-15 13:50:50.859455] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:16:37.207 [2024-10-15 13:50:50.859466] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:37.207 [2024-10-15 13:50:50.859476] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:37.207 [2024-10-15 13:50:50.859485] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:16:37.207 [2024-10-15 13:50:50.859493] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:37.207 [2024-10-15 13:50:50.859502] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:16:37.207 [2024-10-15 13:50:50.859509] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:16:37.207 [2024-10-15 13:50:50.859518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.207 [2024-10-15 13:50:50.859525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:37.207 [2024-10-15 13:50:50.859534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.241 ms 00:16:37.207 [2024-10-15 13:50:50.859544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.207 [2024-10-15 13:50:50.859633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.207 [2024-10-15 13:50:50.859640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:37.207 [2024-10-15 13:50:50.859650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:16:37.207 [2024-10-15 13:50:50.859657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.207 [2024-10-15 13:50:50.859784] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:37.207 [2024-10-15 13:50:50.859800] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:37.207 [2024-10-15 13:50:50.859810] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:37.207 [2024-10-15 13:50:50.859818] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:37.207 [2024-10-15 13:50:50.859830] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region l2p 00:16:37.207 [2024-10-15 13:50:50.859837] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:37.207 [2024-10-15 13:50:50.859845] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:16:37.207 [2024-10-15 13:50:50.859852] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:37.207 [2024-10-15 13:50:50.859860] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:16:37.207 [2024-10-15 13:50:50.859867] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:37.207 [2024-10-15 13:50:50.859875] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:37.207 [2024-10-15 13:50:50.859881] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:16:37.207 [2024-10-15 13:50:50.859890] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:37.207 [2024-10-15 13:50:50.859896] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:37.207 [2024-10-15 13:50:50.859915] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:16:37.207 [2024-10-15 13:50:50.859921] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:37.207 [2024-10-15 13:50:50.859932] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:37.207 [2024-10-15 13:50:50.859939] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:16:37.207 [2024-10-15 13:50:50.859948] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:37.207 [2024-10-15 13:50:50.859956] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:37.207 [2024-10-15 13:50:50.859964] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:16:37.207 [2024-10-15 13:50:50.859971] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:37.207 [2024-10-15 13:50:50.859979] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:37.207 [2024-10-15 13:50:50.859985] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:16:37.207 [2024-10-15 13:50:50.859993] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:37.207 [2024-10-15 13:50:50.859999] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:37.207 [2024-10-15 13:50:50.860007] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:16:37.207 [2024-10-15 13:50:50.860014] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:37.207 [2024-10-15 13:50:50.860022] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:37.207 [2024-10-15 13:50:50.860029] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:16:37.207 [2024-10-15 13:50:50.860036] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:37.207 [2024-10-15 13:50:50.860043] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:37.207 [2024-10-15 13:50:50.860053] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:16:37.207 [2024-10-15 13:50:50.860059] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:37.208 [2024-10-15 13:50:50.860067] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:37.208 [2024-10-15 13:50:50.860083] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:16:37.208 [2024-10-15 13:50:50.860091] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:37.208 [2024-10-15 13:50:50.860098] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:16:37.208 [2024-10-15 13:50:50.860105] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:16:37.208 [2024-10-15 13:50:50.860112] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:37.208 [2024-10-15 13:50:50.860120] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:16:37.208 [2024-10-15 13:50:50.860127] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:16:37.208 [2024-10-15 13:50:50.860136] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:37.208 [2024-10-15 13:50:50.860143] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:37.208 [2024-10-15 13:50:50.860152] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:37.208 [2024-10-15 13:50:50.860159] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:37.208 [2024-10-15 13:50:50.860167] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:37.208 [2024-10-15 13:50:50.860174] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:37.208 [2024-10-15 13:50:50.860184] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:37.208 [2024-10-15 13:50:50.860191] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:37.208 [2024-10-15 13:50:50.860200] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:37.208 [2024-10-15 13:50:50.860212] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:37.208 [2024-10-15 13:50:50.860233] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:37.208 [2024-10-15 13:50:50.860244] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:37.208 [2024-10-15 13:50:50.860255] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:37.208 [2024-10-15 13:50:50.860264] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:16:37.208 [2024-10-15 13:50:50.860273] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:16:37.208 [2024-10-15 13:50:50.860280] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:16:37.208 [2024-10-15 13:50:50.860289] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:16:37.208 [2024-10-15 13:50:50.860297] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:16:37.208 [2024-10-15 13:50:50.860305] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:16:37.208 [2024-10-15 13:50:50.860312] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:16:37.208 [2024-10-15 13:50:50.860321] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 
blk_offs:0x7120 blk_sz:0x40 00:16:37.208 [2024-10-15 13:50:50.860328] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:16:37.208 [2024-10-15 13:50:50.860338] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:16:37.208 [2024-10-15 13:50:50.860345] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:16:37.208 [2024-10-15 13:50:50.860355] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:16:37.208 [2024-10-15 13:50:50.860363] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:16:37.208 [2024-10-15 13:50:50.860372] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:16:37.208 [2024-10-15 13:50:50.860386] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:37.208 [2024-10-15 13:50:50.860396] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:37.208 [2024-10-15 13:50:50.860404] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:37.208 [2024-10-15 13:50:50.860412] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:37.208 [2024-10-15 13:50:50.860419] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:37.208 [2024-10-15 13:50:50.860428] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:37.208 [2024-10-15 13:50:50.860435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.208 [2024-10-15 13:50:50.860444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:37.208 [2024-10-15 13:50:50.860453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.721 ms 00:16:37.208 [2024-10-15 13:50:50.860462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.208 [2024-10-15 13:50:50.860512] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
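The layout dump above is internally consistent with the parameters passed to bdev_ftl_create: the volume exposes 20971520 logical 4 KiB blocks, and with an "L2P address size" of 4 the map needs one 4-byte entry per block, which is exactly the 80.00 MiB l2p region; --l2p_dram_limit 60 then caps how much of that map stays resident (hence the "l2p maximum resident size is: 59 (of 60) MiB" notice later in the trace). The arithmetic, as a sketch:

    # one 4-byte L2P entry per 4 KiB logical block
    echo $(( 20971520 * 4 / 1024 / 1024 ))      # 80    -> the 80.00 MiB l2p region
    echo $(( 20971520 * 4096 / 1024 / 1024 ))   # 81920 -> MiB of addressable user data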
00:16:37.208 [2024-10-15 13:50:50.860524] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:16:39.751 [2024-10-15 13:50:53.279401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.751 [2024-10-15 13:50:53.279456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:16:39.751 [2024-10-15 13:50:53.279469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2418.882 ms 00:16:39.751 [2024-10-15 13:50:53.279481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.751 [2024-10-15 13:50:53.301198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.751 [2024-10-15 13:50:53.301263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:39.751 [2024-10-15 13:50:53.301294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.538 ms 00:16:39.751 [2024-10-15 13:50:53.301302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.751 [2024-10-15 13:50:53.301424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.751 [2024-10-15 13:50:53.301438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:39.751 [2024-10-15 13:50:53.301446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:16:39.751 [2024-10-15 13:50:53.301456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.751 [2024-10-15 13:50:53.336229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.751 [2024-10-15 13:50:53.336285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:39.751 [2024-10-15 13:50:53.336301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.709 ms 00:16:39.751 [2024-10-15 13:50:53.336314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.751 [2024-10-15 13:50:53.336371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.751 [2024-10-15 13:50:53.336385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:39.751 [2024-10-15 13:50:53.336395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:39.751 [2024-10-15 13:50:53.336406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.751 [2024-10-15 13:50:53.336804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.751 [2024-10-15 13:50:53.336832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:39.751 [2024-10-15 13:50:53.336843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.322 ms 00:16:39.751 [2024-10-15 13:50:53.336856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.751 [2024-10-15 13:50:53.337006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.751 [2024-10-15 13:50:53.337025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:39.751 [2024-10-15 13:50:53.337035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.123 ms 00:16:39.751 [2024-10-15 13:50:53.337048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.751 [2024-10-15 13:50:53.352598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.751 [2024-10-15 13:50:53.352774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:39.751 [2024-10-15 
13:50:53.352837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.523 ms 00:16:39.751 [2024-10-15 13:50:53.352864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.751 [2024-10-15 13:50:53.362595] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:16:39.751 [2024-10-15 13:50:53.375648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.751 [2024-10-15 13:50:53.375836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:39.751 [2024-10-15 13:50:53.375885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.658 ms 00:16:39.751 [2024-10-15 13:50:53.375911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.751 [2024-10-15 13:50:53.422375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.751 [2024-10-15 13:50:53.422610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:16:39.751 [2024-10-15 13:50:53.422666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.411 ms 00:16:39.751 [2024-10-15 13:50:53.422686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.751 [2024-10-15 13:50:53.422887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.751 [2024-10-15 13:50:53.422945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:39.751 [2024-10-15 13:50:53.422999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.116 ms 00:16:39.751 [2024-10-15 13:50:53.423009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.751 [2024-10-15 13:50:53.442326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.751 [2024-10-15 13:50:53.442369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:16:39.751 [2024-10-15 13:50:53.442381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.260 ms 00:16:39.751 [2024-10-15 13:50:53.442391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.751 [2024-10-15 13:50:53.460846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.751 [2024-10-15 13:50:53.460887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:16:39.751 [2024-10-15 13:50:53.460899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.408 ms 00:16:39.751 [2024-10-15 13:50:53.460907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.751 [2024-10-15 13:50:53.461406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.751 [2024-10-15 13:50:53.461421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:39.751 [2024-10-15 13:50:53.461433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.460 ms 00:16:39.752 [2024-10-15 13:50:53.461440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.752 [2024-10-15 13:50:53.516459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.752 [2024-10-15 13:50:53.516618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:16:39.752 [2024-10-15 13:50:53.516640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 54.982 ms 00:16:39.752 [2024-10-15 13:50:53.516647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.752 [2024-10-15 
13:50:53.536617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.752 [2024-10-15 13:50:53.536659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:16:39.752 [2024-10-15 13:50:53.536672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.891 ms 00:16:39.752 [2024-10-15 13:50:53.536680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.011 [2024-10-15 13:50:53.556004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.011 [2024-10-15 13:50:53.556149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:16:40.011 [2024-10-15 13:50:53.556166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.283 ms 00:16:40.011 [2024-10-15 13:50:53.556172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.011 [2024-10-15 13:50:53.576307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.011 [2024-10-15 13:50:53.576348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:40.011 [2024-10-15 13:50:53.576360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.069 ms 00:16:40.011 [2024-10-15 13:50:53.576368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.011 [2024-10-15 13:50:53.576411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.011 [2024-10-15 13:50:53.576418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:40.011 [2024-10-15 13:50:53.576430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:40.011 [2024-10-15 13:50:53.576436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.011 [2024-10-15 13:50:53.576510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.011 [2024-10-15 13:50:53.576519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:40.011 [2024-10-15 13:50:53.576527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:16:40.011 [2024-10-15 13:50:53.576533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.011 [2024-10-15 13:50:53.577330] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2729.552 ms, result 0 00:16:40.011 { 00:16:40.011 "name": "ftl0", 00:16:40.011 "uuid": "179da044-db08-4010-acda-2ac6dc566573" 00:16:40.011 } 00:16:40.011 13:50:53 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:16:40.011 13:50:53 ftl.ftl_fio_basic -- common/autotest_common.sh@899 -- # local bdev_name=ftl0 00:16:40.011 13:50:53 ftl.ftl_fio_basic -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:40.011 13:50:53 ftl.ftl_fio_basic -- common/autotest_common.sh@901 -- # local i 00:16:40.011 13:50:53 ftl.ftl_fio_basic -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:40.011 13:50:53 ftl.ftl_fio_basic -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:40.011 13:50:53 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:16:40.011 13:50:53 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:16:40.269 [ 00:16:40.269 { 00:16:40.269 "name": "ftl0", 00:16:40.269 "aliases": [ 00:16:40.269 "179da044-db08-4010-acda-2ac6dc566573" 00:16:40.269 ], 00:16:40.269 "product_name": "FTL 
disk", 00:16:40.269 "block_size": 4096, 00:16:40.269 "num_blocks": 20971520, 00:16:40.269 "uuid": "179da044-db08-4010-acda-2ac6dc566573", 00:16:40.269 "assigned_rate_limits": { 00:16:40.269 "rw_ios_per_sec": 0, 00:16:40.269 "rw_mbytes_per_sec": 0, 00:16:40.269 "r_mbytes_per_sec": 0, 00:16:40.269 "w_mbytes_per_sec": 0 00:16:40.269 }, 00:16:40.269 "claimed": false, 00:16:40.269 "zoned": false, 00:16:40.269 "supported_io_types": { 00:16:40.269 "read": true, 00:16:40.269 "write": true, 00:16:40.269 "unmap": true, 00:16:40.269 "flush": true, 00:16:40.269 "reset": false, 00:16:40.269 "nvme_admin": false, 00:16:40.269 "nvme_io": false, 00:16:40.269 "nvme_io_md": false, 00:16:40.269 "write_zeroes": true, 00:16:40.269 "zcopy": false, 00:16:40.269 "get_zone_info": false, 00:16:40.269 "zone_management": false, 00:16:40.269 "zone_append": false, 00:16:40.269 "compare": false, 00:16:40.269 "compare_and_write": false, 00:16:40.269 "abort": false, 00:16:40.269 "seek_hole": false, 00:16:40.269 "seek_data": false, 00:16:40.269 "copy": false, 00:16:40.269 "nvme_iov_md": false 00:16:40.269 }, 00:16:40.269 "driver_specific": { 00:16:40.269 "ftl": { 00:16:40.269 "base_bdev": "6169ff4e-ae07-4d03-bd51-dad6e745386d", 00:16:40.269 "cache": "nvc0n1p0" 00:16:40.269 } 00:16:40.269 } 00:16:40.269 } 00:16:40.269 ] 00:16:40.269 13:50:53 ftl.ftl_fio_basic -- common/autotest_common.sh@907 -- # return 0 00:16:40.269 13:50:53 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:16:40.269 13:50:53 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:16:40.527 13:50:54 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:16:40.527 13:50:54 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:16:40.527 [2024-10-15 13:50:54.290182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.527 [2024-10-15 13:50:54.290242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:16:40.527 [2024-10-15 13:50:54.290255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:40.527 [2024-10-15 13:50:54.290264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.527 [2024-10-15 13:50:54.290289] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:40.527 [2024-10-15 13:50:54.292442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.527 [2024-10-15 13:50:54.292476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:40.527 [2024-10-15 13:50:54.292488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.134 ms 00:16:40.527 [2024-10-15 13:50:54.292495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.527 [2024-10-15 13:50:54.292897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.527 [2024-10-15 13:50:54.292909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:40.527 [2024-10-15 13:50:54.292918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.374 ms 00:16:40.527 [2024-10-15 13:50:54.292924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.527 [2024-10-15 13:50:54.295430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.527 [2024-10-15 13:50:54.295449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:16:40.527 
[2024-10-15 13:50:54.295458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.488 ms 00:16:40.527 [2024-10-15 13:50:54.295468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.527 [2024-10-15 13:50:54.300316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.527 [2024-10-15 13:50:54.300347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:16:40.527 [2024-10-15 13:50:54.300360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.823 ms 00:16:40.527 [2024-10-15 13:50:54.300368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.786 [2024-10-15 13:50:54.320071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.786 [2024-10-15 13:50:54.320115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:40.786 [2024-10-15 13:50:54.320128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.619 ms 00:16:40.786 [2024-10-15 13:50:54.320135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.786 [2024-10-15 13:50:54.333170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.786 [2024-10-15 13:50:54.333218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:40.786 [2024-10-15 13:50:54.333237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.970 ms 00:16:40.786 [2024-10-15 13:50:54.333246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.786 [2024-10-15 13:50:54.333424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.786 [2024-10-15 13:50:54.333441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:40.786 [2024-10-15 13:50:54.333450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.128 ms 00:16:40.786 [2024-10-15 13:50:54.333456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.786 [2024-10-15 13:50:54.352300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.786 [2024-10-15 13:50:54.352344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:16:40.786 [2024-10-15 13:50:54.352357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.820 ms 00:16:40.786 [2024-10-15 13:50:54.352364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.786 [2024-10-15 13:50:54.370045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.786 [2024-10-15 13:50:54.370088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:16:40.786 [2024-10-15 13:50:54.370101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.632 ms 00:16:40.786 [2024-10-15 13:50:54.370108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.786 [2024-10-15 13:50:54.388032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.786 [2024-10-15 13:50:54.388073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:40.786 [2024-10-15 13:50:54.388086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.872 ms 00:16:40.786 [2024-10-15 13:50:54.388094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.786 [2024-10-15 13:50:54.405818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.786 [2024-10-15 13:50:54.405860] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:40.786 [2024-10-15 13:50:54.405873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.625 ms 00:16:40.786 [2024-10-15 13:50:54.405880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.786 [2024-10-15 13:50:54.405922] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:40.786 [2024-10-15 13:50:54.405935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:40.786 [2024-10-15 13:50:54.405945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:40.786 [2024-10-15 13:50:54.405951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:40.786 [2024-10-15 13:50:54.405959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:40.786 [2024-10-15 13:50:54.405966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:40.786 [2024-10-15 13:50:54.405974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:40.786 [2024-10-15 13:50:54.405980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:16:40.786 [2024-10-15 13:50:54.405990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:16:40.786 [2024-10-15 13:50:54.405997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:16:40.786 [2024-10-15 13:50:54.406005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:16:40.786 [2024-10-15 13:50:54.406011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:16:40.786 [2024-10-15 13:50:54.406019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:16:40.786 [2024-10-15 13:50:54.406025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:16:40.786 [2024-10-15 13:50:54.406032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:16:40.786 [2024-10-15 13:50:54.406038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:16:40.786 [2024-10-15 13:50:54.406046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:16:40.787 [2024-10-15 13:50:54.406052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:16:40.787 [2024-10-15 13:50:54.406060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:16:40.787 [2024-10-15 13:50:54.406065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:16:40.787 [2024-10-15 13:50:54.406073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:16:40.787 [2024-10-15 13:50:54.406080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:16:40.787 [2024-10-15 13:50:54.406089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:16:40.787 
[2024-10-15 13:50:54.406095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:16:40.787 [2024-10-15 13:50:54.406105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:16:40.787 [2024-10-15 13:50:54.406111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:16:40.787 [2024-10-15 13:50:54.406119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:16:40.787 [2024-10-15 13:50:54.406125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:16:40.787 [2024-10-15 13:50:54.406132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:16:40.787 [2024-10-15 13:50:54.406138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:16:40.787 [2024-10-15 13:50:54.406146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:16:40.787 [2024-10-15 13:50:54.406152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:16:40.787 [2024-10-15 13:50:54.406160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:16:40.787 [2024-10-15 13:50:54.406166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:16:40.787 [2024-10-15 13:50:54.406173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:16:40.787 [2024-10-15 13:50:54.406179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:16:40.787 [2024-10-15 13:50:54.406186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:16:40.787 [2024-10-15 13:50:54.406192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:16:40.787 [2024-10-15 13:50:54.406199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:16:40.787 [2024-10-15 13:50:54.406206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:16:40.787 [2024-10-15 13:50:54.406215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:16:40.787 [2024-10-15 13:50:54.406234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:16:40.787 [2024-10-15 13:50:54.406243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:16:40.787 [2024-10-15 13:50:54.406249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:16:40.787 [2024-10-15 13:50:54.406257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:16:40.787 [2024-10-15 13:50:54.406263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:16:40.787 [2024-10-15 13:50:54.406271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:16:40.787 [2024-10-15 13:50:54.406281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 
state: free 00:16:40.787 [2024-10-15 13:50:54.406290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:16:40.787 [2024-10-15 13:50:54.406297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:16:40.787 [2024-10-15 13:50:54.406304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:16:40.787 [2024-10-15 13:50:54.406311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:16:40.787 [2024-10-15 13:50:54.406319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:16:40.787 [2024-10-15 13:50:54.406326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:16:40.787 [2024-10-15 13:50:54.406334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:16:40.787 [2024-10-15 13:50:54.406340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:16:40.787 [2024-10-15 13:50:54.406349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:16:40.787 [2024-10-15 13:50:54.406356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:16:40.787 [2024-10-15 13:50:54.406363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:16:40.787 [2024-10-15 13:50:54.406369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:16:40.787 [2024-10-15 13:50:54.406377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:16:40.787 [2024-10-15 13:50:54.406383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:16:40.787 [2024-10-15 13:50:54.406390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:16:40.787 [2024-10-15 13:50:54.406396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:16:40.787 [2024-10-15 13:50:54.406403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:16:40.787 [2024-10-15 13:50:54.406408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:16:40.787 [2024-10-15 13:50:54.406416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:16:40.787 [2024-10-15 13:50:54.406428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:16:40.787 [2024-10-15 13:50:54.406435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:16:40.787 [2024-10-15 13:50:54.406442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:16:40.787 [2024-10-15 13:50:54.406449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:16:40.787 [2024-10-15 13:50:54.406455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:16:40.787 [2024-10-15 13:50:54.406464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 
0 / 261120 wr_cnt: 0 state: free 00:16:40.787 [2024-10-15 13:50:54.406470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:16:40.787 [2024-10-15 13:50:54.406479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:16:40.787 [2024-10-15 13:50:54.406485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:16:40.787 [2024-10-15 13:50:54.406493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:16:40.787 [2024-10-15 13:50:54.406499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:16:40.787 [2024-10-15 13:50:54.406506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:16:40.787 [2024-10-15 13:50:54.406513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:16:40.787 [2024-10-15 13:50:54.406521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:16:40.787 [2024-10-15 13:50:54.406528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:16:40.787 [2024-10-15 13:50:54.406536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:16:40.787 [2024-10-15 13:50:54.406542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:16:40.787 [2024-10-15 13:50:54.406559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:16:40.787 [2024-10-15 13:50:54.406564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:16:40.787 [2024-10-15 13:50:54.406572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:16:40.787 [2024-10-15 13:50:54.406578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:16:40.787 [2024-10-15 13:50:54.406587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:16:40.787 [2024-10-15 13:50:54.406593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:16:40.787 [2024-10-15 13:50:54.406601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:16:40.787 [2024-10-15 13:50:54.406607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:16:40.787 [2024-10-15 13:50:54.406615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:16:40.787 [2024-10-15 13:50:54.406621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:16:40.787 [2024-10-15 13:50:54.406629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:16:40.787 [2024-10-15 13:50:54.406636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:16:40.787 [2024-10-15 13:50:54.406644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:16:40.787 [2024-10-15 13:50:54.406650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:16:40.787 [2024-10-15 13:50:54.406658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:16:40.787 [2024-10-15 13:50:54.406664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:16:40.787 [2024-10-15 13:50:54.406673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:16:40.787 [2024-10-15 13:50:54.406686] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:40.787 [2024-10-15 13:50:54.406694] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 179da044-db08-4010-acda-2ac6dc566573 00:16:40.787 [2024-10-15 13:50:54.406700] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:40.787 [2024-10-15 13:50:54.406710] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:40.787 [2024-10-15 13:50:54.406715] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:40.787 [2024-10-15 13:50:54.406723] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:40.787 [2024-10-15 13:50:54.406729] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:40.787 [2024-10-15 13:50:54.406736] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:16:40.787 [2024-10-15 13:50:54.406745] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:40.787 [2024-10-15 13:50:54.406751] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:40.787 [2024-10-15 13:50:54.406758] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:40.787 [2024-10-15 13:50:54.406766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.787 [2024-10-15 13:50:54.406773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:40.788 [2024-10-15 13:50:54.406781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.845 ms 00:16:40.788 [2024-10-15 13:50:54.406788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.788 [2024-10-15 13:50:54.416629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.788 [2024-10-15 13:50:54.416667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:40.788 [2024-10-15 13:50:54.416678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.800 ms 00:16:40.788 [2024-10-15 13:50:54.416687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.788 [2024-10-15 13:50:54.416971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.788 [2024-10-15 13:50:54.416983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:40.788 [2024-10-15 13:50:54.416991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.257 ms 00:16:40.788 [2024-10-15 13:50:54.416997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.788 [2024-10-15 13:50:54.452549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:40.788 [2024-10-15 13:50:54.452595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:40.788 [2024-10-15 13:50:54.452607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:40.788 [2024-10-15 13:50:54.452617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
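An aside on the statistics dumped above: the WAF line is total writes divided by user writes, so 960 media writes against 0 user writes is reported as "inf". A comparable dump can be requested from a live target at any point; a minimal sketch, assuming the bdev_ftl_get_stats RPC is present in this SPDK tree and the FTL bdev is named ftl0:

    # ask the running target for per-device FTL counters (JSON on stdout)
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0

The trace_step records that follow unwind the initialization steps in reverse order ("Rollback" entries) as part of the FTL shutdown path.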
00:16:40.788 [2024-10-15 13:50:54.452681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:40.788 [2024-10-15 13:50:54.452689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:40.788 [2024-10-15 13:50:54.452697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:40.788 [2024-10-15 13:50:54.452704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.788 [2024-10-15 13:50:54.452786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:40.788 [2024-10-15 13:50:54.452798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:40.788 [2024-10-15 13:50:54.452807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:40.788 [2024-10-15 13:50:54.452813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.788 [2024-10-15 13:50:54.452835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:40.788 [2024-10-15 13:50:54.452842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:40.788 [2024-10-15 13:50:54.452850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:40.788 [2024-10-15 13:50:54.452856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.788 [2024-10-15 13:50:54.519288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:40.788 [2024-10-15 13:50:54.519335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:40.788 [2024-10-15 13:50:54.519346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:40.788 [2024-10-15 13:50:54.519356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.788 [2024-10-15 13:50:54.570094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:40.788 [2024-10-15 13:50:54.570138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:40.788 [2024-10-15 13:50:54.570150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:40.788 [2024-10-15 13:50:54.570156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.788 [2024-10-15 13:50:54.570245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:40.788 [2024-10-15 13:50:54.570254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:40.788 [2024-10-15 13:50:54.570263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:40.788 [2024-10-15 13:50:54.570269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.788 [2024-10-15 13:50:54.570315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:40.788 [2024-10-15 13:50:54.570326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:40.788 [2024-10-15 13:50:54.570334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:40.788 [2024-10-15 13:50:54.570342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.788 [2024-10-15 13:50:54.570423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:40.788 [2024-10-15 13:50:54.570432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:40.788 [2024-10-15 13:50:54.570441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:40.788 [2024-10-15 
13:50:54.570446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.788 [2024-10-15 13:50:54.570477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:40.788 [2024-10-15 13:50:54.570487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:16:40.788 [2024-10-15 13:50:54.570497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:40.788 [2024-10-15 13:50:54.570504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.788 [2024-10-15 13:50:54.570539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:40.788 [2024-10-15 13:50:54.570547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:40.788 [2024-10-15 13:50:54.570555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:40.788 [2024-10-15 13:50:54.570561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.788 [2024-10-15 13:50:54.570604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:40.788 [2024-10-15 13:50:54.570614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:40.788 [2024-10-15 13:50:54.570622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:40.788 [2024-10-15 13:50:54.570628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.788 [2024-10-15 13:50:54.570738] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 280.550 ms, result 0 00:16:41.051 true 00:16:41.051 13:50:54 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 72430 00:16:41.051 13:50:54 ftl.ftl_fio_basic -- common/autotest_common.sh@950 -- # '[' -z 72430 ']' 00:16:41.051 13:50:54 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # kill -0 72430 00:16:41.051 13:50:54 ftl.ftl_fio_basic -- common/autotest_common.sh@955 -- # uname 00:16:41.051 13:50:54 ftl.ftl_fio_basic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:41.052 13:50:54 ftl.ftl_fio_basic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72430 00:16:41.052 killing process with pid 72430 00:16:41.052 13:50:54 ftl.ftl_fio_basic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:41.052 13:50:54 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:41.052 13:50:54 ftl.ftl_fio_basic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72430' 00:16:41.052 13:50:54 ftl.ftl_fio_basic -- common/autotest_common.sh@969 -- # kill 72430 00:16:41.052 13:50:54 ftl.ftl_fio_basic -- common/autotest_common.sh@974 -- # wait 72430 00:16:51.040 13:51:03 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:16:51.040 13:51:03 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:16:51.040 13:51:03 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:16:51.040 13:51:03 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:51.040 13:51:03 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:51.040 13:51:03 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:16:51.040 13:51:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:16:51.040 13:51:03 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:16:51.040 13:51:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:51.040 13:51:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local sanitizers 00:16:51.040 13:51:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:51.040 13:51:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # shift 00:16:51.040 13:51:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local asan_lib= 00:16:51.040 13:51:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:16:51.040 13:51:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # grep libasan 00:16:51.040 13:51:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:51.040 13:51:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:16:51.040 13:51:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:51.040 13:51:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:51.040 13:51:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # break 00:16:51.040 13:51:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:16:51.040 13:51:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:16:51.040 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:16:51.040 fio-3.35 00:16:51.040 Starting 1 thread 00:16:54.353 00:16:54.353 test: (groupid=0, jobs=1): err= 0: pid=72614: Tue Oct 15 13:51:07 2024 00:16:54.353 read: IOPS=1249, BW=83.0MiB/s (87.0MB/s)(255MiB/3068msec) 00:16:54.353 slat (nsec): min=3049, max=33833, avg=4090.34, stdev=2047.59 00:16:54.353 clat (usec): min=234, max=1012, avg=362.06, stdev=84.94 00:16:54.353 lat (usec): min=238, max=1016, avg=366.15, stdev=85.34 00:16:54.353 clat percentiles (usec): 00:16:54.354 | 1.00th=[ 289], 5.00th=[ 297], 10.00th=[ 302], 20.00th=[ 302], 00:16:54.354 | 30.00th=[ 306], 40.00th=[ 314], 50.00th=[ 326], 60.00th=[ 343], 00:16:54.354 | 70.00th=[ 371], 80.00th=[ 424], 90.00th=[ 494], 95.00th=[ 529], 00:16:54.354 | 99.00th=[ 676], 99.50th=[ 742], 99.90th=[ 930], 99.95th=[ 979], 00:16:54.354 | 99.99th=[ 1012] 00:16:54.354 write: IOPS=1258, BW=83.5MiB/s (87.6MB/s)(256MiB/3065msec); 0 zone resets 00:16:54.354 slat (nsec): min=13566, max=57833, avg=17357.61, stdev=3480.41 00:16:54.354 clat (usec): min=279, max=1240, avg=402.28, stdev=103.17 00:16:54.354 lat (usec): min=296, max=1256, avg=419.64, stdev=103.23 00:16:54.354 clat percentiles (usec): 00:16:54.354 | 1.00th=[ 310], 5.00th=[ 318], 10.00th=[ 318], 20.00th=[ 322], 00:16:54.354 | 30.00th=[ 326], 40.00th=[ 343], 50.00th=[ 355], 60.00th=[ 388], 00:16:54.354 | 70.00th=[ 449], 80.00th=[ 478], 90.00th=[ 545], 95.00th=[ 586], 00:16:54.354 | 99.00th=[ 766], 99.50th=[ 832], 99.90th=[ 971], 99.95th=[ 1020], 00:16:54.354 | 99.99th=[ 1237] 00:16:54.354 bw ( KiB/s): min=83096, max=89624, per=99.78%, avg=85362.67, stdev=2799.97, samples=6 00:16:54.354 iops : min= 1222, max= 1318, avg=1255.33, stdev=41.18, samples=6 00:16:54.354 lat (usec) : 250=0.05%, 500=88.83%, 750=10.38%, 
1000=0.70% 00:16:54.354 lat (msec) : 2=0.04% 00:16:54.354 cpu : usr=99.22%, sys=0.16%, ctx=12, majf=0, minf=1169 00:16:54.354 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:54.354 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:54.354 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:54.354 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:54.354 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:54.354 00:16:54.354 Run status group 0 (all jobs): 00:16:54.354 READ: bw=83.0MiB/s (87.0MB/s), 83.0MiB/s-83.0MiB/s (87.0MB/s-87.0MB/s), io=255MiB (267MB), run=3068-3068msec 00:16:54.355 WRITE: bw=83.5MiB/s (87.6MB/s), 83.5MiB/s-83.5MiB/s (87.6MB/s-87.6MB/s), io=256MiB (269MB), run=3065-3065msec 00:16:55.305 ----------------------------------------------------- 00:16:55.305 Suppressions used: 00:16:55.305 count bytes template 00:16:55.305 1 5 /usr/src/fio/parse.c 00:16:55.305 1 8 libtcmalloc_minimal.so 00:16:55.305 1 904 libcrypto.so 00:16:55.305 ----------------------------------------------------- 00:16:55.305 00:16:55.563 13:51:09 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:16:55.563 13:51:09 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:55.563 13:51:09 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:55.563 13:51:09 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:16:55.563 13:51:09 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:16:55.563 13:51:09 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:55.563 13:51:09 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:55.563 13:51:09 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:16:55.563 13:51:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:16:55.563 13:51:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:16:55.563 13:51:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:55.563 13:51:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local sanitizers 00:16:55.563 13:51:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:55.563 13:51:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # shift 00:16:55.563 13:51:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local asan_lib= 00:16:55.563 13:51:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:16:55.563 13:51:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # grep libasan 00:16:55.563 13:51:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:16:55.563 13:51:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:55.563 13:51:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:55.563 13:51:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:55.563 13:51:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # break 00:16:55.563 13:51:09 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:16:55.563 13:51:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:16:55.563 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:16:55.563 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:16:55.563 fio-3.35 00:16:55.563 Starting 2 threads 00:17:22.126 00:17:22.126 first_half: (groupid=0, jobs=1): err= 0: pid=72707: Tue Oct 15 13:51:33 2024 00:17:22.126 read: IOPS=2828, BW=11.0MiB/s (11.6MB/s)(255MiB/23069msec) 00:17:22.126 slat (nsec): min=3014, max=65841, avg=4037.84, stdev=999.13 00:17:22.126 clat (usec): min=571, max=278582, avg=34172.19, stdev=19884.06 00:17:22.126 lat (usec): min=575, max=278586, avg=34176.23, stdev=19884.20 00:17:22.126 clat percentiles (msec): 00:17:22.126 | 1.00th=[ 7], 5.00th=[ 26], 10.00th=[ 30], 20.00th=[ 30], 00:17:22.126 | 30.00th=[ 31], 40.00th=[ 31], 50.00th=[ 31], 60.00th=[ 32], 00:17:22.126 | 70.00th=[ 32], 80.00th=[ 35], 90.00th=[ 39], 95.00th=[ 45], 00:17:22.126 | 99.00th=[ 142], 99.50th=[ 169], 99.90th=[ 236], 99.95th=[ 239], 00:17:22.126 | 99.99th=[ 271] 00:17:22.126 write: IOPS=3340, BW=13.0MiB/s (13.7MB/s)(256MiB/19616msec); 0 zone resets 00:17:22.126 slat (usec): min=3, max=419, avg= 6.06, stdev= 3.54 00:17:22.126 clat (usec): min=367, max=77070, avg=11001.71, stdev=17190.45 00:17:22.126 lat (usec): min=382, max=77075, avg=11007.78, stdev=17190.41 00:17:22.126 clat percentiles (usec): 00:17:22.126 | 1.00th=[ 652], 5.00th=[ 791], 10.00th=[ 938], 20.00th=[ 1254], 00:17:22.126 | 30.00th=[ 2868], 40.00th=[ 4228], 50.00th=[ 5276], 60.00th=[ 5997], 00:17:22.126 | 70.00th=[ 7701], 80.00th=[11207], 90.00th=[32375], 95.00th=[60556], 00:17:22.126 | 99.00th=[68682], 99.50th=[71828], 99.90th=[74974], 99.95th=[76022], 00:17:22.126 | 99.99th=[76022] 00:17:22.126 bw ( KiB/s): min= 920, max=42048, per=89.16%, avg=23831.27, stdev=12420.35, samples=22 00:17:22.126 iops : min= 230, max=10512, avg=5957.82, stdev=3105.09, samples=22 00:17:22.126 lat (usec) : 500=0.03%, 750=1.77%, 1000=4.40% 00:17:22.126 lat (msec) : 2=6.24%, 4=7.31%, 10=20.21%, 20=6.45%, 50=46.90% 00:17:22.126 lat (msec) : 100=5.57%, 250=1.10%, 500=0.02% 00:17:22.126 cpu : usr=99.26%, sys=0.13%, ctx=60, majf=0, minf=5567 00:17:22.126 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:17:22.126 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:22.126 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:22.126 issued rwts: total=65241,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:22.126 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:22.126 second_half: (groupid=0, jobs=1): err= 0: pid=72708: Tue Oct 15 13:51:33 2024 00:17:22.126 read: IOPS=2843, BW=11.1MiB/s (11.6MB/s)(254MiB/22907msec) 00:17:22.126 slat (nsec): min=3021, max=39939, avg=4697.05, stdev=1282.68 00:17:22.126 clat (usec): min=498, max=283245, avg=34762.02, stdev=18678.76 00:17:22.126 lat (usec): min=503, max=283249, avg=34766.72, stdev=18678.89 00:17:22.126 clat percentiles (msec): 00:17:22.126 | 1.00th=[ 4], 5.00th=[ 29], 10.00th=[ 30], 20.00th=[ 30], 00:17:22.126 | 30.00th=[ 31], 40.00th=[ 31], 50.00th=[ 31], 60.00th=[ 32], 00:17:22.126 | 70.00th=[ 32], 80.00th=[ 35], 90.00th=[ 40], 
95.00th=[ 51], 00:17:22.126 | 99.00th=[ 138], 99.50th=[ 161], 99.90th=[ 218], 99.95th=[ 222], 00:17:22.126 | 99.99th=[ 279] 00:17:22.126 write: IOPS=4018, BW=15.7MiB/s (16.5MB/s)(256MiB/16308msec); 0 zone resets 00:17:22.126 slat (usec): min=3, max=861, avg= 6.03, stdev= 4.23 00:17:22.126 clat (usec): min=354, max=77259, avg=10142.48, stdev=16841.18 00:17:22.126 lat (usec): min=365, max=77264, avg=10148.51, stdev=16841.14 00:17:22.126 clat percentiles (usec): 00:17:22.126 | 1.00th=[ 685], 5.00th=[ 824], 10.00th=[ 938], 20.00th=[ 1106], 00:17:22.126 | 30.00th=[ 1401], 40.00th=[ 3261], 50.00th=[ 4817], 60.00th=[ 5997], 00:17:22.126 | 70.00th=[ 7963], 80.00th=[10683], 90.00th=[15795], 95.00th=[60031], 00:17:22.126 | 99.00th=[68682], 99.50th=[71828], 99.90th=[74974], 99.95th=[76022], 00:17:22.126 | 99.99th=[77071] 00:17:22.126 bw ( KiB/s): min= 1016, max=47616, per=98.08%, avg=26214.40, stdev=14978.04, samples=20 00:17:22.126 iops : min= 254, max=11904, avg=6553.60, stdev=3744.51, samples=20 00:17:22.126 lat (usec) : 500=0.01%, 750=1.27%, 1000=5.77% 00:17:22.126 lat (msec) : 2=10.26%, 4=5.28%, 10=17.04%, 20=6.89%, 50=46.51% 00:17:22.126 lat (msec) : 100=5.95%, 250=1.02%, 500=0.01% 00:17:22.126 cpu : usr=99.34%, sys=0.12%, ctx=55, majf=0, minf=5560 00:17:22.126 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:17:22.126 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:22.127 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:22.127 issued rwts: total=65143,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:22.127 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:22.127 00:17:22.127 Run status group 0 (all jobs): 00:17:22.127 READ: bw=22.1MiB/s (23.1MB/s), 11.0MiB/s-11.1MiB/s (11.6MB/s-11.6MB/s), io=509MiB (534MB), run=22907-23069msec 00:17:22.127 WRITE: bw=26.1MiB/s (27.4MB/s), 13.0MiB/s-15.7MiB/s (13.7MB/s-16.5MB/s), io=512MiB (537MB), run=16308-19616msec 00:17:22.127 ----------------------------------------------------- 00:17:22.127 Suppressions used: 00:17:22.127 count bytes template 00:17:22.127 2 10 /usr/src/fio/parse.c 00:17:22.127 3 288 /usr/src/fio/iolog.c 00:17:22.127 1 8 libtcmalloc_minimal.so 00:17:22.127 1 904 libcrypto.so 00:17:22.127 ----------------------------------------------------- 00:17:22.127 00:17:22.127 13:51:35 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:17:22.127 13:51:35 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:22.127 13:51:35 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:17:22.127 13:51:35 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:17:22.127 13:51:35 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:17:22.127 13:51:35 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:22.127 13:51:35 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:17:22.127 13:51:35 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:17:22.127 13:51:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:17:22.127 13:51:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:17:22.127 13:51:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 
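The helper being traced here (fio_bdev -> fio_plugin) preloads the sanitizer runtime the SPDK fio plugin was built against before handing the job file to the external fio binary, so the plugin's ASAN hooks resolve correctly. A minimal sketch of that pattern, with paths taken from this run (the real implementation in autotest_common.sh also checks libclang_rt.asan):

    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
    # find the ASAN runtime the plugin is linked against (empty for non-ASAN builds)
    asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
    # preload the sanitizer first, then the plugin, and run the job under fio
    LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
        /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio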
00:17:22.127 13:51:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local sanitizers 00:17:22.127 13:51:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:22.127 13:51:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # shift 00:17:22.127 13:51:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local asan_lib= 00:17:22.127 13:51:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:17:22.127 13:51:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # grep libasan 00:17:22.127 13:51:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:22.127 13:51:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:17:22.127 13:51:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:17:22.127 13:51:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:17:22.127 13:51:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # break 00:17:22.127 13:51:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:17:22.127 13:51:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:17:22.387 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:17:22.387 fio-3.35 00:17:22.387 Starting 1 thread 00:17:37.268 00:17:37.268 test: (groupid=0, jobs=1): err= 0: pid=73023: Tue Oct 15 13:51:49 2024 00:17:37.268 read: IOPS=8030, BW=31.4MiB/s (32.9MB/s)(255MiB/8119msec) 00:17:37.268 slat (nsec): min=3032, max=30605, avg=4623.56, stdev=1092.93 00:17:37.268 clat (usec): min=665, max=32422, avg=15929.02, stdev=2065.33 00:17:37.268 lat (usec): min=670, max=32426, avg=15933.64, stdev=2065.39 00:17:37.268 clat percentiles (usec): 00:17:37.268 | 1.00th=[13173], 5.00th=[13566], 10.00th=[13960], 20.00th=[15008], 00:17:37.268 | 30.00th=[15270], 40.00th=[15401], 50.00th=[15533], 60.00th=[15795], 00:17:37.268 | 70.00th=[16057], 80.00th=[16319], 90.00th=[17957], 95.00th=[20055], 00:17:37.268 | 99.00th=[24773], 99.50th=[26346], 99.90th=[29754], 99.95th=[30278], 00:17:37.268 | 99.99th=[31589] 00:17:37.268 write: IOPS=15.4k, BW=60.3MiB/s (63.2MB/s)(256MiB/4246msec); 0 zone resets 00:17:37.268 slat (usec): min=4, max=191, avg= 7.76, stdev= 2.92 00:17:37.268 clat (usec): min=476, max=49614, avg=8248.94, stdev=9949.11 00:17:37.268 lat (usec): min=483, max=49622, avg=8256.70, stdev=9949.10 00:17:37.268 clat percentiles (usec): 00:17:37.268 | 1.00th=[ 635], 5.00th=[ 742], 10.00th=[ 832], 20.00th=[ 971], 00:17:37.268 | 30.00th=[ 1123], 40.00th=[ 1614], 50.00th=[ 5735], 60.00th=[ 6587], 00:17:37.268 | 70.00th=[ 7832], 80.00th=[10028], 90.00th=[27395], 95.00th=[31589], 00:17:37.268 | 99.00th=[36963], 99.50th=[38536], 99.90th=[42206], 99.95th=[42730], 00:17:37.268 | 99.99th=[48497] 00:17:37.268 bw ( KiB/s): min=25712, max=82352, per=94.36%, avg=58254.22, stdev=15359.79, samples=9 00:17:37.268 iops : min= 6428, max=20588, avg=14563.56, stdev=3839.95, samples=9 00:17:37.268 lat (usec) : 500=0.01%, 750=2.75%, 1000=8.40% 00:17:37.268 lat (msec) : 2=9.40%, 4=0.61%, 10=18.96%, 20=49.51%, 50=10.38% 00:17:37.268 cpu : usr=99.04%, sys=0.21%, ctx=20, majf=0, minf=5565 00:17:37.268 
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:17:37.268 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:37.268 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:37.268 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:37.268 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:37.268 00:17:37.268 Run status group 0 (all jobs): 00:17:37.268 READ: bw=31.4MiB/s (32.9MB/s), 31.4MiB/s-31.4MiB/s (32.9MB/s-32.9MB/s), io=255MiB (267MB), run=8119-8119msec 00:17:37.268 WRITE: bw=60.3MiB/s (63.2MB/s), 60.3MiB/s-60.3MiB/s (63.2MB/s-63.2MB/s), io=256MiB (268MB), run=4246-4246msec 00:17:37.268 ----------------------------------------------------- 00:17:37.268 Suppressions used: 00:17:37.268 count bytes template 00:17:37.268 1 5 /usr/src/fio/parse.c 00:17:37.268 2 192 /usr/src/fio/iolog.c 00:17:37.268 1 8 libtcmalloc_minimal.so 00:17:37.268 1 904 libcrypto.so 00:17:37.268 ----------------------------------------------------- 00:17:37.268 00:17:37.268 13:51:50 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:17:37.268 13:51:50 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:37.268 13:51:50 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:17:37.268 13:51:51 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:37.268 Remove shared memory files 00:17:37.268 13:51:51 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:17:37.268 13:51:51 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:17:37.268 13:51:51 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:17:37.268 13:51:51 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:17:37.268 13:51:51 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid57128 /dev/shm/spdk_tgt_trace.pid71352 00:17:37.268 13:51:51 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:17:37.268 13:51:51 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:17:37.268 ************************************ 00:17:37.268 END TEST ftl_fio_basic 00:17:37.268 ************************************ 00:17:37.268 00:17:37.268 real 1m3.969s 00:17:37.268 user 2m22.654s 00:17:37.268 sys 0m2.687s 00:17:37.268 13:51:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:37.268 13:51:51 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:17:37.268 13:51:51 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:17:37.268 13:51:51 ftl -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:17:37.268 13:51:51 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:37.268 13:51:51 ftl -- common/autotest_common.sh@10 -- # set +x 00:17:37.526 ************************************ 00:17:37.526 START TEST ftl_bdevperf 00:17:37.526 ************************************ 00:17:37.526 13:51:51 ftl.ftl_bdevperf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:17:37.526 * Looking for test storage... 
00:17:37.526 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:17:37.526 13:51:51 ftl.ftl_bdevperf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:17:37.526 13:51:51 ftl.ftl_bdevperf -- common/autotest_common.sh@1691 -- # lcov --version 00:17:37.526 13:51:51 ftl.ftl_bdevperf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:17:37.526 13:51:51 ftl.ftl_bdevperf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:17:37.526 13:51:51 ftl.ftl_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:37.526 13:51:51 ftl.ftl_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:37.526 13:51:51 ftl.ftl_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:37.526 13:51:51 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:17:37.526 13:51:51 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:17:37.526 13:51:51 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:17:37.526 13:51:51 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:17:37.526 13:51:51 ftl.ftl_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:17:37.526 13:51:51 ftl.ftl_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:17:37.526 13:51:51 ftl.ftl_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:17:37.526 13:51:51 ftl.ftl_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:37.526 13:51:51 ftl.ftl_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:17:37.526 13:51:51 ftl.ftl_bdevperf -- scripts/common.sh@345 -- # : 1 00:17:37.526 13:51:51 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:37.526 13:51:51 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:37.526 13:51:51 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:17:37.526 13:51:51 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=1 00:17:37.526 13:51:51 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:37.526 13:51:51 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 1 00:17:37.526 13:51:51 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:17:37.526 13:51:51 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:17:37.526 13:51:51 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=2 00:17:37.526 13:51:51 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:37.526 13:51:51 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 2 00:17:37.526 13:51:51 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:17:37.526 13:51:51 ftl.ftl_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:37.526 13:51:51 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:37.526 13:51:51 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # return 0 00:17:37.526 13:51:51 ftl.ftl_bdevperf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:37.526 13:51:51 ftl.ftl_bdevperf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:17:37.526 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:37.526 --rc genhtml_branch_coverage=1 00:17:37.526 --rc genhtml_function_coverage=1 00:17:37.526 --rc genhtml_legend=1 00:17:37.526 --rc geninfo_all_blocks=1 00:17:37.526 --rc geninfo_unexecuted_blocks=1 00:17:37.526 00:17:37.526 ' 00:17:37.526 13:51:51 ftl.ftl_bdevperf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:17:37.526 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:37.526 --rc genhtml_branch_coverage=1 00:17:37.526 
--rc genhtml_function_coverage=1 00:17:37.526 --rc genhtml_legend=1 00:17:37.526 --rc geninfo_all_blocks=1 00:17:37.526 --rc geninfo_unexecuted_blocks=1 00:17:37.526 00:17:37.526 ' 00:17:37.526 13:51:51 ftl.ftl_bdevperf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:17:37.526 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:37.526 --rc genhtml_branch_coverage=1 00:17:37.526 --rc genhtml_function_coverage=1 00:17:37.526 --rc genhtml_legend=1 00:17:37.526 --rc geninfo_all_blocks=1 00:17:37.526 --rc geninfo_unexecuted_blocks=1 00:17:37.526 00:17:37.526 ' 00:17:37.526 13:51:51 ftl.ftl_bdevperf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:17:37.526 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:37.526 --rc genhtml_branch_coverage=1 00:17:37.526 --rc genhtml_function_coverage=1 00:17:37.526 --rc genhtml_legend=1 00:17:37.526 --rc geninfo_all_blocks=1 00:17:37.526 --rc geninfo_unexecuted_blocks=1 00:17:37.526 00:17:37.526 ' 00:17:37.526 13:51:51 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:17:37.526 13:51:51 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:17:37.526 13:51:51 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:17:37.526 13:51:51 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:17:37.526 13:51:51 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:17:37.526 13:51:51 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:17:37.526 13:51:51 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:37.526 13:51:51 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:17:37.526 13:51:51 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:17:37.526 13:51:51 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:37.526 13:51:51 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:37.526 13:51:51 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:17:37.526 13:51:51 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:17:37.526 13:51:51 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:37.526 13:51:51 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:37.526 13:51:51 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:17:37.526 13:51:51 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:17:37.526 13:51:51 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:37.526 13:51:51 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:37.526 13:51:51 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:17:37.526 13:51:51 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:17:37.526 13:51:51 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:37.526 13:51:51 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:37.526 13:51:51 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:37.526 13:51:51 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:37.526 13:51:51 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:17:37.526 13:51:51 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:17:37.526 13:51:51 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:37.526 13:51:51 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:37.527 13:51:51 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:17:37.527 13:51:51 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:17:37.527 13:51:51 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:17:37.527 13:51:51 ftl.ftl_bdevperf -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:37.527 13:51:51 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:17:37.527 13:51:51 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # bdevperf_pid=73251 00:17:37.527 13:51:51 ftl.ftl_bdevperf -- ftl/bdevperf.sh@20 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:17:37.527 13:51:51 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # waitforlisten 73251 00:17:37.527 13:51:51 ftl.ftl_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 73251 ']' 00:17:37.527 13:51:51 ftl.ftl_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:37.527 13:51:51 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:17:37.527 13:51:51 ftl.ftl_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:37.527 13:51:51 ftl.ftl_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:37.527 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:37.527 13:51:51 ftl.ftl_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:37.527 13:51:51 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:17:37.527 [2024-10-15 13:51:51.280484] Starting SPDK v25.01-pre git sha1 5a8c76d99 / DPDK 24.03.0 initialization... 
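bdevperf is started with -z, which brings the application up idle until it is configured over RPC; the harness traps signals so a failed run still kills the process, then blocks until the RPC socket answers before issuing the bdev-creation RPCs that follow. A minimal sketch of that launch pattern (the poll loop is illustrative, not the waitforlisten implementation from autotest_common.sh):

    # start bdevperf suspended (-z); -T names the bdev the perf job will target
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 &
    bdevperf_pid=$!
    trap 'kill "$bdevperf_pid"; exit 1' SIGINT SIGTERM EXIT
    # wait until the target's RPC server is reachable before configuring bdevs
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done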
00:17:37.527 [2024-10-15 13:51:51.280759] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73251 ] 00:17:37.824 [2024-10-15 13:51:51.430502] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:37.824 [2024-10-15 13:51:51.529159] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:38.413 13:51:52 ftl.ftl_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:38.413 13:51:52 ftl.ftl_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:17:38.413 13:51:52 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:17:38.413 13:51:52 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:17:38.413 13:51:52 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:17:38.413 13:51:52 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:17:38.413 13:51:52 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:17:38.413 13:51:52 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:17:38.980 13:51:52 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:17:38.980 13:51:52 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:17:38.980 13:51:52 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:17:38.980 13:51:52 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:17:38.980 13:51:52 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:17:38.980 13:51:52 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:17:38.980 13:51:52 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:17:38.980 13:51:52 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:17:38.980 13:51:52 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:17:38.980 { 00:17:38.980 "name": "nvme0n1", 00:17:38.980 "aliases": [ 00:17:38.980 "c161ef18-7bb8-4022-b7f3-cc2bcb6a83b6" 00:17:38.980 ], 00:17:38.980 "product_name": "NVMe disk", 00:17:38.980 "block_size": 4096, 00:17:38.980 "num_blocks": 1310720, 00:17:38.980 "uuid": "c161ef18-7bb8-4022-b7f3-cc2bcb6a83b6", 00:17:38.980 "numa_id": -1, 00:17:38.980 "assigned_rate_limits": { 00:17:38.980 "rw_ios_per_sec": 0, 00:17:38.980 "rw_mbytes_per_sec": 0, 00:17:38.980 "r_mbytes_per_sec": 0, 00:17:38.980 "w_mbytes_per_sec": 0 00:17:38.980 }, 00:17:38.980 "claimed": true, 00:17:38.980 "claim_type": "read_many_write_one", 00:17:38.980 "zoned": false, 00:17:38.980 "supported_io_types": { 00:17:38.980 "read": true, 00:17:38.980 "write": true, 00:17:38.980 "unmap": true, 00:17:38.980 "flush": true, 00:17:38.980 "reset": true, 00:17:38.980 "nvme_admin": true, 00:17:38.980 "nvme_io": true, 00:17:38.980 "nvme_io_md": false, 00:17:38.980 "write_zeroes": true, 00:17:38.980 "zcopy": false, 00:17:38.980 "get_zone_info": false, 00:17:38.980 "zone_management": false, 00:17:38.980 "zone_append": false, 00:17:38.980 "compare": true, 00:17:38.980 "compare_and_write": false, 00:17:38.980 "abort": true, 00:17:38.980 "seek_hole": false, 00:17:38.980 "seek_data": false, 00:17:38.980 "copy": true, 00:17:38.980 "nvme_iov_md": false 00:17:38.980 }, 00:17:38.980 "driver_specific": { 00:17:38.980 
"nvme": [ 00:17:38.980 { 00:17:38.980 "pci_address": "0000:00:11.0", 00:17:38.980 "trid": { 00:17:38.981 "trtype": "PCIe", 00:17:38.981 "traddr": "0000:00:11.0" 00:17:38.981 }, 00:17:38.981 "ctrlr_data": { 00:17:38.981 "cntlid": 0, 00:17:38.981 "vendor_id": "0x1b36", 00:17:38.981 "model_number": "QEMU NVMe Ctrl", 00:17:38.981 "serial_number": "12341", 00:17:38.981 "firmware_revision": "8.0.0", 00:17:38.981 "subnqn": "nqn.2019-08.org.qemu:12341", 00:17:38.981 "oacs": { 00:17:38.981 "security": 0, 00:17:38.981 "format": 1, 00:17:38.981 "firmware": 0, 00:17:38.981 "ns_manage": 1 00:17:38.981 }, 00:17:38.981 "multi_ctrlr": false, 00:17:38.981 "ana_reporting": false 00:17:38.981 }, 00:17:38.981 "vs": { 00:17:38.981 "nvme_version": "1.4" 00:17:38.981 }, 00:17:38.981 "ns_data": { 00:17:38.981 "id": 1, 00:17:38.981 "can_share": false 00:17:38.981 } 00:17:38.981 } 00:17:38.981 ], 00:17:38.981 "mp_policy": "active_passive" 00:17:38.981 } 00:17:38.981 } 00:17:38.981 ]' 00:17:38.981 13:51:52 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:17:38.981 13:51:52 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:17:38.981 13:51:52 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:17:38.981 13:51:52 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=1310720 00:17:38.981 13:51:52 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:17:38.981 13:51:52 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 5120 00:17:38.981 13:51:52 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:17:38.981 13:51:52 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:17:38.981 13:51:52 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:17:38.981 13:51:52 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:17:38.981 13:51:52 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:17:39.240 13:51:52 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=1aac743c-243c-467d-bfbf-6f81abbcfbac 00:17:39.240 13:51:52 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:17:39.240 13:51:52 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 1aac743c-243c-467d-bfbf-6f81abbcfbac 00:17:39.498 13:51:53 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:17:39.756 13:51:53 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=6bc799bd-f27f-442e-b51b-e3cea4af014d 00:17:39.756 13:51:53 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 6bc799bd-f27f-442e-b51b-e3cea4af014d 00:17:40.015 13:51:53 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # split_bdev=2290a4e6-1e49-4e86-b42a-494d229b015d 00:17:40.015 13:51:53 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_nv_cache_bdev nvc0 0000:00:10.0 2290a4e6-1e49-4e86-b42a-494d229b015d 00:17:40.015 13:51:53 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:17:40.015 13:51:53 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:17:40.015 13:51:53 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=2290a4e6-1e49-4e86-b42a-494d229b015d 00:17:40.015 13:51:53 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:17:40.015 13:51:53 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size 2290a4e6-1e49-4e86-b42a-494d229b015d 00:17:40.015 13:51:53 
ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=2290a4e6-1e49-4e86-b42a-494d229b015d 00:17:40.015 13:51:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:17:40.015 13:51:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:17:40.015 13:51:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:17:40.015 13:51:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 2290a4e6-1e49-4e86-b42a-494d229b015d 00:17:40.273 13:51:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:17:40.273 { 00:17:40.273 "name": "2290a4e6-1e49-4e86-b42a-494d229b015d", 00:17:40.273 "aliases": [ 00:17:40.273 "lvs/nvme0n1p0" 00:17:40.273 ], 00:17:40.273 "product_name": "Logical Volume", 00:17:40.273 "block_size": 4096, 00:17:40.273 "num_blocks": 26476544, 00:17:40.273 "uuid": "2290a4e6-1e49-4e86-b42a-494d229b015d", 00:17:40.273 "assigned_rate_limits": { 00:17:40.273 "rw_ios_per_sec": 0, 00:17:40.273 "rw_mbytes_per_sec": 0, 00:17:40.273 "r_mbytes_per_sec": 0, 00:17:40.273 "w_mbytes_per_sec": 0 00:17:40.273 }, 00:17:40.273 "claimed": false, 00:17:40.273 "zoned": false, 00:17:40.273 "supported_io_types": { 00:17:40.273 "read": true, 00:17:40.273 "write": true, 00:17:40.273 "unmap": true, 00:17:40.273 "flush": false, 00:17:40.273 "reset": true, 00:17:40.273 "nvme_admin": false, 00:17:40.273 "nvme_io": false, 00:17:40.273 "nvme_io_md": false, 00:17:40.273 "write_zeroes": true, 00:17:40.273 "zcopy": false, 00:17:40.273 "get_zone_info": false, 00:17:40.273 "zone_management": false, 00:17:40.273 "zone_append": false, 00:17:40.273 "compare": false, 00:17:40.273 "compare_and_write": false, 00:17:40.273 "abort": false, 00:17:40.273 "seek_hole": true, 00:17:40.273 "seek_data": true, 00:17:40.273 "copy": false, 00:17:40.273 "nvme_iov_md": false 00:17:40.273 }, 00:17:40.273 "driver_specific": { 00:17:40.273 "lvol": { 00:17:40.273 "lvol_store_uuid": "6bc799bd-f27f-442e-b51b-e3cea4af014d", 00:17:40.273 "base_bdev": "nvme0n1", 00:17:40.273 "thin_provision": true, 00:17:40.273 "num_allocated_clusters": 0, 00:17:40.273 "snapshot": false, 00:17:40.273 "clone": false, 00:17:40.273 "esnap_clone": false 00:17:40.273 } 00:17:40.273 } 00:17:40.273 } 00:17:40.273 ]' 00:17:40.273 13:51:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:17:40.273 13:51:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:17:40.273 13:51:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:17:40.273 13:51:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=26476544 00:17:40.273 13:51:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:17:40.273 13:51:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 103424 00:17:40.273 13:51:53 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:17:40.273 13:51:53 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:17:40.273 13:51:53 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:17:40.531 13:51:54 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:17:40.531 13:51:54 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:17:40.531 13:51:54 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size 2290a4e6-1e49-4e86-b42a-494d229b015d 00:17:40.531 13:51:54 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1378 -- # local bdev_name=2290a4e6-1e49-4e86-b42a-494d229b015d 00:17:40.531 13:51:54 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:17:40.531 13:51:54 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:17:40.531 13:51:54 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:17:40.531 13:51:54 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 2290a4e6-1e49-4e86-b42a-494d229b015d 00:17:40.789 13:51:54 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:17:40.789 { 00:17:40.789 "name": "2290a4e6-1e49-4e86-b42a-494d229b015d", 00:17:40.789 "aliases": [ 00:17:40.789 "lvs/nvme0n1p0" 00:17:40.789 ], 00:17:40.789 "product_name": "Logical Volume", 00:17:40.789 "block_size": 4096, 00:17:40.789 "num_blocks": 26476544, 00:17:40.789 "uuid": "2290a4e6-1e49-4e86-b42a-494d229b015d", 00:17:40.789 "assigned_rate_limits": { 00:17:40.789 "rw_ios_per_sec": 0, 00:17:40.789 "rw_mbytes_per_sec": 0, 00:17:40.789 "r_mbytes_per_sec": 0, 00:17:40.789 "w_mbytes_per_sec": 0 00:17:40.789 }, 00:17:40.789 "claimed": false, 00:17:40.789 "zoned": false, 00:17:40.789 "supported_io_types": { 00:17:40.789 "read": true, 00:17:40.789 "write": true, 00:17:40.789 "unmap": true, 00:17:40.789 "flush": false, 00:17:40.789 "reset": true, 00:17:40.789 "nvme_admin": false, 00:17:40.789 "nvme_io": false, 00:17:40.789 "nvme_io_md": false, 00:17:40.789 "write_zeroes": true, 00:17:40.789 "zcopy": false, 00:17:40.789 "get_zone_info": false, 00:17:40.789 "zone_management": false, 00:17:40.789 "zone_append": false, 00:17:40.789 "compare": false, 00:17:40.789 "compare_and_write": false, 00:17:40.789 "abort": false, 00:17:40.789 "seek_hole": true, 00:17:40.789 "seek_data": true, 00:17:40.789 "copy": false, 00:17:40.789 "nvme_iov_md": false 00:17:40.789 }, 00:17:40.789 "driver_specific": { 00:17:40.789 "lvol": { 00:17:40.789 "lvol_store_uuid": "6bc799bd-f27f-442e-b51b-e3cea4af014d", 00:17:40.789 "base_bdev": "nvme0n1", 00:17:40.789 "thin_provision": true, 00:17:40.789 "num_allocated_clusters": 0, 00:17:40.789 "snapshot": false, 00:17:40.789 "clone": false, 00:17:40.789 "esnap_clone": false 00:17:40.789 } 00:17:40.789 } 00:17:40.789 } 00:17:40.789 ]' 00:17:40.789 13:51:54 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:17:40.789 13:51:54 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:17:40.789 13:51:54 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:17:40.789 13:51:54 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=26476544 00:17:40.789 13:51:54 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:17:40.789 13:51:54 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 103424 00:17:40.789 13:51:54 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:17:40.789 13:51:54 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:17:41.047 13:51:54 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # nv_cache=nvc0n1p0 00:17:41.047 13:51:54 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # get_bdev_size 2290a4e6-1e49-4e86-b42a-494d229b015d 00:17:41.047 13:51:54 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=2290a4e6-1e49-4e86-b42a-494d229b015d 00:17:41.047 13:51:54 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:17:41.047 13:51:54 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1380 -- # local bs 00:17:41.047 13:51:54 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:17:41.047 13:51:54 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 2290a4e6-1e49-4e86-b42a-494d229b015d 00:17:41.305 13:51:54 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:17:41.305 { 00:17:41.305 "name": "2290a4e6-1e49-4e86-b42a-494d229b015d", 00:17:41.305 "aliases": [ 00:17:41.305 "lvs/nvme0n1p0" 00:17:41.305 ], 00:17:41.305 "product_name": "Logical Volume", 00:17:41.305 "block_size": 4096, 00:17:41.305 "num_blocks": 26476544, 00:17:41.305 "uuid": "2290a4e6-1e49-4e86-b42a-494d229b015d", 00:17:41.305 "assigned_rate_limits": { 00:17:41.305 "rw_ios_per_sec": 0, 00:17:41.305 "rw_mbytes_per_sec": 0, 00:17:41.305 "r_mbytes_per_sec": 0, 00:17:41.305 "w_mbytes_per_sec": 0 00:17:41.305 }, 00:17:41.305 "claimed": false, 00:17:41.305 "zoned": false, 00:17:41.305 "supported_io_types": { 00:17:41.305 "read": true, 00:17:41.305 "write": true, 00:17:41.305 "unmap": true, 00:17:41.305 "flush": false, 00:17:41.305 "reset": true, 00:17:41.305 "nvme_admin": false, 00:17:41.305 "nvme_io": false, 00:17:41.305 "nvme_io_md": false, 00:17:41.305 "write_zeroes": true, 00:17:41.305 "zcopy": false, 00:17:41.305 "get_zone_info": false, 00:17:41.305 "zone_management": false, 00:17:41.305 "zone_append": false, 00:17:41.305 "compare": false, 00:17:41.305 "compare_and_write": false, 00:17:41.305 "abort": false, 00:17:41.305 "seek_hole": true, 00:17:41.305 "seek_data": true, 00:17:41.305 "copy": false, 00:17:41.305 "nvme_iov_md": false 00:17:41.305 }, 00:17:41.305 "driver_specific": { 00:17:41.305 "lvol": { 00:17:41.305 "lvol_store_uuid": "6bc799bd-f27f-442e-b51b-e3cea4af014d", 00:17:41.305 "base_bdev": "nvme0n1", 00:17:41.305 "thin_provision": true, 00:17:41.305 "num_allocated_clusters": 0, 00:17:41.305 "snapshot": false, 00:17:41.305 "clone": false, 00:17:41.305 "esnap_clone": false 00:17:41.305 } 00:17:41.305 } 00:17:41.305 } 00:17:41.305 ]' 00:17:41.305 13:51:54 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:17:41.305 13:51:54 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:17:41.305 13:51:54 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:17:41.305 13:51:54 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=26476544 00:17:41.305 13:51:54 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:17:41.305 13:51:54 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 103424 00:17:41.305 13:51:54 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # l2p_dram_size_mb=20 00:17:41.305 13:51:54 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 2290a4e6-1e49-4e86-b42a-494d229b015d -c nvc0n1p0 --l2p_dram_limit 20 00:17:41.563 [2024-10-15 13:51:55.093989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.563 [2024-10-15 13:51:55.094161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:41.563 [2024-10-15 13:51:55.094179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:41.563 [2024-10-15 13:51:55.094190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.563 [2024-10-15 13:51:55.094258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.563 [2024-10-15 13:51:55.094269] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:41.563 [2024-10-15 13:51:55.094276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:17:41.563 [2024-10-15 13:51:55.094286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.563 [2024-10-15 13:51:55.094300] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:41.563 [2024-10-15 13:51:55.094876] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:41.563 [2024-10-15 13:51:55.094888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.563 [2024-10-15 13:51:55.094897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:41.563 [2024-10-15 13:51:55.094904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.592 ms 00:17:41.563 [2024-10-15 13:51:55.094911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.563 [2024-10-15 13:51:55.094961] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 774d9371-fadf-4590-ab71-e29a26c3b56a 00:17:41.563 [2024-10-15 13:51:55.095959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.563 [2024-10-15 13:51:55.095978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:17:41.563 [2024-10-15 13:51:55.095987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:17:41.563 [2024-10-15 13:51:55.095997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.563 [2024-10-15 13:51:55.100998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.563 [2024-10-15 13:51:55.101094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:41.563 [2024-10-15 13:51:55.101162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.968 ms 00:17:41.563 [2024-10-15 13:51:55.101181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.563 [2024-10-15 13:51:55.101269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.563 [2024-10-15 13:51:55.101418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:41.563 [2024-10-15 13:51:55.101447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:17:41.563 [2024-10-15 13:51:55.101463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.563 [2024-10-15 13:51:55.101516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.563 [2024-10-15 13:51:55.101768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:41.563 [2024-10-15 13:51:55.101920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:17:41.563 [2024-10-15 13:51:55.101941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.563 [2024-10-15 13:51:55.101969] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:41.563 [2024-10-15 13:51:55.104943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.563 [2024-10-15 13:51:55.105040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:41.563 [2024-10-15 13:51:55.105121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.981 ms 00:17:41.563 [2024-10-15 13:51:55.105142] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.563 [2024-10-15 13:51:55.105176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.563 [2024-10-15 13:51:55.105196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:41.563 [2024-10-15 13:51:55.105260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:17:41.563 [2024-10-15 13:51:55.105283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.563 [2024-10-15 13:51:55.105312] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:17:41.563 [2024-10-15 13:51:55.105434] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:41.563 [2024-10-15 13:51:55.105463] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:41.563 [2024-10-15 13:51:55.105525] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:41.563 [2024-10-15 13:51:55.105553] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:41.563 [2024-10-15 13:51:55.105578] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:41.563 [2024-10-15 13:51:55.105629] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:17:41.563 [2024-10-15 13:51:55.105648] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:41.563 [2024-10-15 13:51:55.105664] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:41.563 [2024-10-15 13:51:55.105680] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:41.563 [2024-10-15 13:51:55.105716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.563 [2024-10-15 13:51:55.105735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:41.563 [2024-10-15 13:51:55.105752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.404 ms 00:17:41.564 [2024-10-15 13:51:55.105833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.564 [2024-10-15 13:51:55.105922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.564 [2024-10-15 13:51:55.105943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:41.564 [2024-10-15 13:51:55.105990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:17:41.564 [2024-10-15 13:51:55.106012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.564 [2024-10-15 13:51:55.106094] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:41.564 [2024-10-15 13:51:55.106146] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:41.564 [2024-10-15 13:51:55.106165] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:41.564 [2024-10-15 13:51:55.106182] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:41.564 [2024-10-15 13:51:55.106228] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:41.564 [2024-10-15 13:51:55.106248] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:41.564 [2024-10-15 13:51:55.106264] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:17:41.564 
[2024-10-15 13:51:55.106305] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:41.564 [2024-10-15 13:51:55.106323] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:17:41.564 [2024-10-15 13:51:55.106339] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:41.564 [2024-10-15 13:51:55.106373] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:41.564 [2024-10-15 13:51:55.106392] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:17:41.564 [2024-10-15 13:51:55.106407] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:41.564 [2024-10-15 13:51:55.106428] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:41.564 [2024-10-15 13:51:55.106445] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:17:41.564 [2024-10-15 13:51:55.106463] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:41.564 [2024-10-15 13:51:55.106577] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:41.564 [2024-10-15 13:51:55.106598] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:17:41.564 [2024-10-15 13:51:55.106613] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:41.564 [2024-10-15 13:51:55.106630] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:41.564 [2024-10-15 13:51:55.106645] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:17:41.564 [2024-10-15 13:51:55.106661] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:41.564 [2024-10-15 13:51:55.106715] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:41.564 [2024-10-15 13:51:55.106736] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:17:41.564 [2024-10-15 13:51:55.106751] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:41.564 [2024-10-15 13:51:55.106767] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:41.564 [2024-10-15 13:51:55.106781] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:17:41.564 [2024-10-15 13:51:55.106797] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:41.564 [2024-10-15 13:51:55.106812] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:41.564 [2024-10-15 13:51:55.106875] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:17:41.564 [2024-10-15 13:51:55.106890] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:41.564 [2024-10-15 13:51:55.106907] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:41.564 [2024-10-15 13:51:55.106922] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:17:41.564 [2024-10-15 13:51:55.106937] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:41.564 [2024-10-15 13:51:55.106986] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:41.564 [2024-10-15 13:51:55.107007] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:17:41.564 [2024-10-15 13:51:55.107021] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:41.564 [2024-10-15 13:51:55.107038] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:41.564 [2024-10-15 13:51:55.107054] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.62 MiB 00:17:41.564 [2024-10-15 13:51:55.107100] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:41.564 [2024-10-15 13:51:55.107118] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:41.564 [2024-10-15 13:51:55.107134] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:17:41.564 [2024-10-15 13:51:55.107148] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:41.564 [2024-10-15 13:51:55.107164] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:41.564 [2024-10-15 13:51:55.107207] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:41.564 [2024-10-15 13:51:55.107262] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:41.564 [2024-10-15 13:51:55.107272] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:41.564 [2024-10-15 13:51:55.107281] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:41.564 [2024-10-15 13:51:55.107287] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:41.564 [2024-10-15 13:51:55.107294] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:41.564 [2024-10-15 13:51:55.107300] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:41.564 [2024-10-15 13:51:55.107307] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:41.564 [2024-10-15 13:51:55.107313] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:41.564 [2024-10-15 13:51:55.107323] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:41.564 [2024-10-15 13:51:55.107331] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:41.564 [2024-10-15 13:51:55.107339] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:17:41.564 [2024-10-15 13:51:55.107345] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:17:41.564 [2024-10-15 13:51:55.107354] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:17:41.564 [2024-10-15 13:51:55.107360] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:17:41.564 [2024-10-15 13:51:55.107367] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:17:41.564 [2024-10-15 13:51:55.107373] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:17:41.564 [2024-10-15 13:51:55.107380] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:17:41.564 [2024-10-15 13:51:55.107386] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:17:41.564 [2024-10-15 13:51:55.107399] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:17:41.564 [2024-10-15 13:51:55.107405] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:17:41.564 [2024-10-15 13:51:55.107412] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:17:41.564 [2024-10-15 13:51:55.107417] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:17:41.564 [2024-10-15 13:51:55.107426] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:17:41.564 [2024-10-15 13:51:55.107432] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:17:41.564 [2024-10-15 13:51:55.107440] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:41.564 [2024-10-15 13:51:55.107446] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:41.564 [2024-10-15 13:51:55.107454] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:41.564 [2024-10-15 13:51:55.107460] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:41.564 [2024-10-15 13:51:55.107467] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:41.564 [2024-10-15 13:51:55.107473] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:41.564 [2024-10-15 13:51:55.107481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.564 [2024-10-15 13:51:55.107487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:41.564 [2024-10-15 13:51:55.107494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.438 ms 00:17:41.564 [2024-10-15 13:51:55.107501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.564 [2024-10-15 13:51:55.107533] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
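The layout dump above can be cross-checked by hand: the L2P region size follows directly from the entry count and entry width the log prints. A quick sanity check in shell arithmetic (illustrative only; every figure is taken from the dump above):

    # 20971520 L2P entries x 4 B per entry = 80 MiB, matching "Region l2p ... blocks: 80.00 MiB"
    echo $(( 20971520 * 4 / 1024 / 1024 ))      # -> 80
    # 20971520 entries x 4096 B per block = 80 GiB of addressable space exposed by ftl0;
    # the remainder of the 103424.00 MiB base device goes to bands, metadata and over-provisioning
    echo $(( 20971520 * 4096 / 1024 / 1024 ))   # -> 81920 (MiB)

Because ftl0 was created with --l2p_dram_limit 20, only about 20 MiB of that 80 MiB table can stay resident in DRAM; the "l2p maximum resident size is: 19 (of 20) MiB" notice further down reflects exactly this cap.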
00:17:41.564 [2024-10-15 13:51:55.107541] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:17:44.122 [2024-10-15 13:51:57.791087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.122 [2024-10-15 13:51:57.791326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:17:44.122 [2024-10-15 13:51:57.791497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2683.536 ms 00:17:44.122 [2024-10-15 13:51:57.791526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.122 [2024-10-15 13:51:57.817488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.122 [2024-10-15 13:51:57.817670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:44.122 [2024-10-15 13:51:57.817741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.660 ms 00:17:44.122 [2024-10-15 13:51:57.817765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.122 [2024-10-15 13:51:57.817918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.122 [2024-10-15 13:51:57.817999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:44.122 [2024-10-15 13:51:57.818029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:17:44.122 [2024-10-15 13:51:57.818050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.122 [2024-10-15 13:51:57.859090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.122 [2024-10-15 13:51:57.859269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:44.122 [2024-10-15 13:51:57.859437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.951 ms 00:17:44.122 [2024-10-15 13:51:57.859471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.122 [2024-10-15 13:51:57.859521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.122 [2024-10-15 13:51:57.859547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:44.122 [2024-10-15 13:51:57.859581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:44.122 [2024-10-15 13:51:57.859663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.122 [2024-10-15 13:51:57.860068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.122 [2024-10-15 13:51:57.860161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:44.122 [2024-10-15 13:51:57.860264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.327 ms 00:17:44.122 [2024-10-15 13:51:57.860291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.122 [2024-10-15 13:51:57.860423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.122 [2024-10-15 13:51:57.860535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:44.122 [2024-10-15 13:51:57.860573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms 00:17:44.122 [2024-10-15 13:51:57.860601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.122 [2024-10-15 13:51:57.873611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.122 [2024-10-15 13:51:57.873715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:44.122 [2024-10-15 
13:51:57.873765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.979 ms 00:17:44.122 [2024-10-15 13:51:57.873787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.122 [2024-10-15 13:51:57.885173] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:17:44.122 [2024-10-15 13:51:57.890555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.122 [2024-10-15 13:51:57.890660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:44.122 [2024-10-15 13:51:57.890706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.673 ms 00:17:44.122 [2024-10-15 13:51:57.890731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.385 [2024-10-15 13:51:57.957998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.385 [2024-10-15 13:51:57.958157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:17:44.385 [2024-10-15 13:51:57.958227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 67.232 ms 00:17:44.385 [2024-10-15 13:51:57.958257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.385 [2024-10-15 13:51:57.958446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.385 [2024-10-15 13:51:57.958485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:44.385 [2024-10-15 13:51:57.958533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.141 ms 00:17:44.385 [2024-10-15 13:51:57.958547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.385 [2024-10-15 13:51:57.981878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.385 [2024-10-15 13:51:57.981915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:17:44.385 [2024-10-15 13:51:57.981927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.285 ms 00:17:44.385 [2024-10-15 13:51:57.981937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.385 [2024-10-15 13:51:58.004461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.385 [2024-10-15 13:51:58.004580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:17:44.385 [2024-10-15 13:51:58.004596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.490 ms 00:17:44.385 [2024-10-15 13:51:58.004606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.385 [2024-10-15 13:51:58.005159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.385 [2024-10-15 13:51:58.005177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:44.385 [2024-10-15 13:51:58.005186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.525 ms 00:17:44.385 [2024-10-15 13:51:58.005196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.385 [2024-10-15 13:51:58.076089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.386 [2024-10-15 13:51:58.076267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:17:44.386 [2024-10-15 13:51:58.076291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 70.845 ms 00:17:44.386 [2024-10-15 13:51:58.076302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.386 [2024-10-15 
13:51:58.100971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.386 [2024-10-15 13:51:58.101008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:17:44.386 [2024-10-15 13:51:58.101020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.604 ms 00:17:44.386 [2024-10-15 13:51:58.101030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.386 [2024-10-15 13:51:58.124439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.386 [2024-10-15 13:51:58.124473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:17:44.386 [2024-10-15 13:51:58.124484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.374 ms 00:17:44.386 [2024-10-15 13:51:58.124493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.386 [2024-10-15 13:51:58.148400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.386 [2024-10-15 13:51:58.148437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:44.386 [2024-10-15 13:51:58.148448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.876 ms 00:17:44.386 [2024-10-15 13:51:58.148457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.386 [2024-10-15 13:51:58.148493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.386 [2024-10-15 13:51:58.148511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:44.386 [2024-10-15 13:51:58.148520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:44.386 [2024-10-15 13:51:58.148529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.386 [2024-10-15 13:51:58.148602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.386 [2024-10-15 13:51:58.148615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:44.386 [2024-10-15 13:51:58.148623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:17:44.386 [2024-10-15 13:51:58.148632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.386 [2024-10-15 13:51:58.149500] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3055.085 ms, result 0 00:17:44.386 { 00:17:44.386 "name": "ftl0", 00:17:44.386 "uuid": "774d9371-fadf-4590-ab71-e29a26c3b56a" 00:17:44.386 } 00:17:44.386 13:51:58 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # grep -qw ftl0 00:17:44.387 13:51:58 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:17:44.387 13:51:58 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # jq -r .name 00:17:44.646 13:51:58 ftl.ftl_bdevperf -- ftl/bdevperf.sh@30 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:17:44.905 [2024-10-15 13:51:58.469815] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:17:44.905 I/O size of 69632 is greater than zero copy threshold (65536). 00:17:44.905 Zero copy mechanism will not be used. 00:17:44.905 Running I/O for 4 seconds... 
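The zero-copy notice above is a direct consequence of the chosen request size: 69632 B is 17 of the device's 4096-byte blocks (68 KiB), which exceeds bdevperf's 65536 B (64 KiB) zero-copy threshold, so the zero-copy path is skipped for this run, exactly as the notice says. Checking the arithmetic (illustrative shell only):

    echo $(( 69632 / 4096 ))    # -> 17 blocks of 4096 B per request (68 KiB)
    echo $(( 69632 > 65536 ))   # -> 1, i.e. above the 65536 B zero-copy threshold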
00:17:46.776 1885.00 IOPS, 125.18 MiB/s [2024-10-15T13:52:01.499Z] 1660.00 IOPS, 110.23 MiB/s [2024-10-15T13:52:02.875Z] 1545.00 IOPS, 102.60 MiB/s [2024-10-15T13:52:02.875Z] 1488.00 IOPS, 98.81 MiB/s 00:17:49.087 Latency(us) 00:17:49.087 [2024-10-15T13:52:02.875Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:49.087 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632) 00:17:49.087 ftl0 : 4.00 1487.59 98.79 0.00 0.00 704.31 178.02 3528.86 00:17:49.087 [2024-10-15T13:52:02.875Z] =================================================================================================================== 00:17:49.087 [2024-10-15T13:52:02.875Z] Total : 1487.59 98.79 0.00 0.00 704.31 178.02 3528.86 00:17:49.087 { 00:17:49.087 "results": [ 00:17:49.087 { 00:17:49.087 "job": "ftl0", 00:17:49.087 "core_mask": "0x1", 00:17:49.087 "workload": "randwrite", 00:17:49.087 "status": "finished", 00:17:49.087 "queue_depth": 1, 00:17:49.087 "io_size": 69632, 00:17:49.087 "runtime": 4.00177, 00:17:49.087 "iops": 1487.5917406547603, 00:17:49.087 "mibps": 98.78538902785517, 00:17:49.087 "io_failed": 0, 00:17:49.087 "io_timeout": 0, 00:17:49.087 "avg_latency_us": 704.3114447789737, 00:17:49.087 "min_latency_us": 178.01846153846154, 00:17:49.087 "max_latency_us": 3528.8615384615387 00:17:49.087 } 00:17:49.087 ], 00:17:49.087 "core_count": 1 00:17:49.087 } 00:17:49.087 [2024-10-15 13:52:02.479811] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:17:49.087 13:52:02 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096 00:17:49.087 [2024-10-15 13:52:02.582999] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:17:49.087 Running I/O for 4 seconds... 
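The per-run JSON that bdevperf emits is self-consistent: "mibps" is simply "iops" times the request size. Re-deriving the first run's 98.78 MiB/s from its 1487.59 IOPS at 69632 B per I/O (integer shell approximation, values copied from the JSON above with IOPS scaled by 100):

    # (1487.59 x 100) x 69632 B / 1 MiB / 100 ~= 98 MiB/s
    echo $(( 148759 * 69632 / 1048576 / 100 ))   # -> 98, matching "mibps": 98.785...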
00:17:50.966 10346.00 IOPS, 40.41 MiB/s [2024-10-15T13:52:05.686Z] 9684.50 IOPS, 37.83 MiB/s [2024-10-15T13:52:06.626Z] 9510.00 IOPS, 37.15 MiB/s [2024-10-15T13:52:06.626Z] 9417.50 IOPS, 36.79 MiB/s 00:17:52.838 Latency(us) 00:17:52.838 [2024-10-15T13:52:06.626Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:52.838 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096) 00:17:52.838 ftl0 : 4.02 9410.07 36.76 0.00 0.00 13573.28 234.73 35893.56 00:17:52.838 [2024-10-15T13:52:06.626Z] =================================================================================================================== 00:17:52.838 [2024-10-15T13:52:06.626Z] Total : 9410.07 36.76 0.00 0.00 13573.28 0.00 35893.56 00:17:52.838 [2024-10-15 13:52:06.608411] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:17:52.838 { 00:17:52.838 "results": [ 00:17:52.838 { 00:17:52.838 "job": "ftl0", 00:17:52.838 "core_mask": "0x1", 00:17:52.838 "workload": "randwrite", 00:17:52.838 "status": "finished", 00:17:52.838 "queue_depth": 128, 00:17:52.838 "io_size": 4096, 00:17:52.838 "runtime": 4.016231, 00:17:52.838 "iops": 9410.066303457146, 00:17:52.838 "mibps": 36.75807149787948, 00:17:52.838 "io_failed": 0, 00:17:52.838 "io_timeout": 0, 00:17:52.838 "avg_latency_us": 13573.278724387299, 00:17:52.838 "min_latency_us": 234.7323076923077, 00:17:52.838 "max_latency_us": 35893.56307692308 00:17:52.838 } 00:17:52.838 ], 00:17:52.838 "core_count": 1 00:17:52.838 } 00:17:53.100 13:52:06 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096 00:17:53.100 [2024-10-15 13:52:06.718441] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:17:53.100 Running I/O for 4 seconds... 
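At queue depth 128 the averages should also obey Little's law: sustained IOPS times average latency approximates the number of outstanding I/Os. Plugging in the q=128 randwrite figures from the JSON above (illustrative integer arithmetic; both values scaled by 100, latency in microseconds):

    # 9410.07 IOPS x 13573.28 us ~= 127.7 in-flight I/Os, close to the configured -q 128
    echo $(( 941007 * 1357328 / 10000000000 ))   # -> 127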
00:17:55.016 7642.00 IOPS, 29.85 MiB/s [2024-10-15T13:52:09.740Z] 8111.50 IOPS, 31.69 MiB/s [2024-10-15T13:52:11.125Z] 8149.00 IOPS, 31.83 MiB/s [2024-10-15T13:52:11.125Z] 7935.00 IOPS, 31.00 MiB/s 00:17:57.337 Latency(us) 00:17:57.337 [2024-10-15T13:52:11.125Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:57.337 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:17:57.337 Verification LBA range: start 0x0 length 0x1400000 00:17:57.337 ftl0 : 4.02 7924.77 30.96 0.00 0.00 16093.39 239.46 33272.12 00:17:57.337 [2024-10-15T13:52:11.125Z] =================================================================================================================== 00:17:57.337 [2024-10-15T13:52:11.125Z] Total : 7924.77 30.96 0.00 0.00 16093.39 0.00 33272.12 00:17:57.337 [2024-10-15 13:52:10.754674] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:17:57.337 { 00:17:57.337 "results": [ 00:17:57.337 { 00:17:57.337 "job": "ftl0", 00:17:57.337 "core_mask": "0x1", 00:17:57.337 "workload": "verify", 00:17:57.337 "status": "finished", 00:17:57.337 "verify_range": { 00:17:57.337 "start": 0, 00:17:57.337 "length": 20971520 00:17:57.337 }, 00:17:57.337 "queue_depth": 128, 00:17:57.337 "io_size": 4096, 00:17:57.337 "runtime": 4.021314, 00:17:57.337 "iops": 7924.772847880071, 00:17:57.337 "mibps": 30.956143937031527, 00:17:57.337 "io_failed": 0, 00:17:57.337 "io_timeout": 0, 00:17:57.337 "avg_latency_us": 16093.390552954012, 00:17:57.337 "min_latency_us": 239.45846153846153, 00:17:57.337 "max_latency_us": 33272.123076923075 00:17:57.337 } 00:17:57.337 ], 00:17:57.337 "core_count": 1 00:17:57.337 } 00:17:57.337 13:52:10 ftl.ftl_bdevperf -- ftl/bdevperf.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0 00:17:57.337 [2024-10-15 13:52:10.957300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.337 [2024-10-15 13:52:10.957349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:57.338 [2024-10-15 13:52:10.957363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:57.338 [2024-10-15 13:52:10.957373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.338 [2024-10-15 13:52:10.957398] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:57.338 [2024-10-15 13:52:10.960270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.338 [2024-10-15 13:52:10.960301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:57.338 [2024-10-15 13:52:10.960315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.854 ms 00:17:57.338 [2024-10-15 13:52:10.960324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.338 [2024-10-15 13:52:10.961896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.338 [2024-10-15 13:52:10.961931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:57.338 [2024-10-15 13:52:10.961947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.539 ms 00:17:57.338 [2024-10-15 13:52:10.961955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.600 [2024-10-15 13:52:11.143853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.600 [2024-10-15 13:52:11.144016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Persist L2P 00:17:57.600 [2024-10-15 13:52:11.144047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 181.876 ms 00:17:57.600 [2024-10-15 13:52:11.144056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.600 [2024-10-15 13:52:11.150279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.600 [2024-10-15 13:52:11.150386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:57.600 [2024-10-15 13:52:11.150406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.188 ms 00:17:57.600 [2024-10-15 13:52:11.150415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.600 [2024-10-15 13:52:11.174460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.600 [2024-10-15 13:52:11.174493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:57.600 [2024-10-15 13:52:11.174507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.991 ms 00:17:57.600 [2024-10-15 13:52:11.174515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.600 [2024-10-15 13:52:11.190009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.600 [2024-10-15 13:52:11.190132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:57.600 [2024-10-15 13:52:11.190157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.450 ms 00:17:57.600 [2024-10-15 13:52:11.190165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.600 [2024-10-15 13:52:11.190569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.600 [2024-10-15 13:52:11.190595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:57.600 [2024-10-15 13:52:11.190611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.120 ms 00:17:57.601 [2024-10-15 13:52:11.190620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.601 [2024-10-15 13:52:11.214672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.601 [2024-10-15 13:52:11.214706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:57.601 [2024-10-15 13:52:11.214720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.035 ms 00:17:57.601 [2024-10-15 13:52:11.214728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.601 [2024-10-15 13:52:11.238675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.601 [2024-10-15 13:52:11.238710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:57.601 [2024-10-15 13:52:11.238723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.909 ms 00:17:57.601 [2024-10-15 13:52:11.238731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.601 [2024-10-15 13:52:11.261938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.601 [2024-10-15 13:52:11.261973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:57.601 [2024-10-15 13:52:11.261987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.170 ms 00:17:57.601 [2024-10-15 13:52:11.261995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.601 [2024-10-15 13:52:11.285816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.601 [2024-10-15 
13:52:11.285856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:57.601 [2024-10-15 13:52:11.285873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.745 ms 00:17:57.601 [2024-10-15 13:52:11.285880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.601 [2024-10-15 13:52:11.285921] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:57.601 [2024-10-15 13:52:11.285938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:57.601 [2024-10-15 13:52:11.285950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:57.601 [2024-10-15 13:52:11.285959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:57.601 [2024-10-15 13:52:11.285969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:57.601 [2024-10-15 13:52:11.285977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:57.601 [2024-10-15 13:52:11.285987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:57.601 [2024-10-15 13:52:11.285995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:57.601 [2024-10-15 13:52:11.286005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:57.601 [2024-10-15 13:52:11.286013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:57.601 [2024-10-15 13:52:11.286023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:57.601 [2024-10-15 13:52:11.286031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:57.601 [2024-10-15 13:52:11.286041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:57.601 [2024-10-15 13:52:11.286049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:57.601 [2024-10-15 13:52:11.286061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:57.601 [2024-10-15 13:52:11.286069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:57.601 [2024-10-15 13:52:11.286079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:57.601 [2024-10-15 13:52:11.286087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:57.601 [2024-10-15 13:52:11.286098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:57.601 [2024-10-15 13:52:11.286106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:57.601 [2024-10-15 13:52:11.286116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:57.601 [2024-10-15 13:52:11.286124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:57.601 [2024-10-15 13:52:11.286134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 
wr_cnt: 0 state: free 00:17:57.601 [2024-10-15 13:52:11.286141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:57.601 [2024-10-15 13:52:11.286150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:57.601 [2024-10-15 13:52:11.286158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:57.601 [2024-10-15 13:52:11.286167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:57.601 [2024-10-15 13:52:11.286176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:57.601 [2024-10-15 13:52:11.286186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:57.601 [2024-10-15 13:52:11.286194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:57.601 [2024-10-15 13:52:11.286206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:57.601 [2024-10-15 13:52:11.286218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:57.601 [2024-10-15 13:52:11.286244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:57.601 [2024-10-15 13:52:11.286252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:57.601 [2024-10-15 13:52:11.286263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:57.601 [2024-10-15 13:52:11.286271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:57.601 [2024-10-15 13:52:11.286281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:57.601 [2024-10-15 13:52:11.286289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:57.601 [2024-10-15 13:52:11.286299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:57.601 [2024-10-15 13:52:11.286307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:57.601 [2024-10-15 13:52:11.286317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:57.601 [2024-10-15 13:52:11.286334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:57.601 [2024-10-15 13:52:11.286343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:57.601 [2024-10-15 13:52:11.286351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:57.601 [2024-10-15 13:52:11.286364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:57.601 [2024-10-15 13:52:11.286371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:57.601 [2024-10-15 13:52:11.286385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:57.601 [2024-10-15 13:52:11.286394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:57.601 [2024-10-15 13:52:11.286404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:57.601 [2024-10-15 13:52:11.286412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:57.601 [2024-10-15 13:52:11.286421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:57.601 [2024-10-15 13:52:11.286429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:57.601 [2024-10-15 13:52:11.286440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:57.601 [2024-10-15 13:52:11.286448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:57.601 [2024-10-15 13:52:11.286457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:57.601 [2024-10-15 13:52:11.286465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:57.601 [2024-10-15 13:52:11.286476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:57.601 [2024-10-15 13:52:11.286483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:57.601 [2024-10-15 13:52:11.286493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:57.601 [2024-10-15 13:52:11.286500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:57.601 [2024-10-15 13:52:11.286511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:57.601 [2024-10-15 13:52:11.286518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:57.601 [2024-10-15 13:52:11.286529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:57.601 [2024-10-15 13:52:11.286538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:57.601 [2024-10-15 13:52:11.286574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:57.601 [2024-10-15 13:52:11.286597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:57.601 [2024-10-15 13:52:11.286607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:57.601 [2024-10-15 13:52:11.286615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:57.601 [2024-10-15 13:52:11.286625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:57.601 [2024-10-15 13:52:11.286633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:57.601 [2024-10-15 13:52:11.286644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:57.601 [2024-10-15 13:52:11.286652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:57.601 [2024-10-15 13:52:11.286662] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:57.601 [2024-10-15 13:52:11.286669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:57.601 [2024-10-15 13:52:11.286679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:57.601 [2024-10-15 13:52:11.286687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:57.601 [2024-10-15 13:52:11.286696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:57.601 [2024-10-15 13:52:11.286705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:57.601 [2024-10-15 13:52:11.286717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:57.602 [2024-10-15 13:52:11.286725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:57.602 [2024-10-15 13:52:11.286734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:57.602 [2024-10-15 13:52:11.286742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:57.602 [2024-10-15 13:52:11.286751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:57.602 [2024-10-15 13:52:11.286759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:57.602 [2024-10-15 13:52:11.286769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:57.602 [2024-10-15 13:52:11.286777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:57.602 [2024-10-15 13:52:11.286786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:57.602 [2024-10-15 13:52:11.286794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:57.602 [2024-10-15 13:52:11.286803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:57.602 [2024-10-15 13:52:11.286811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:57.602 [2024-10-15 13:52:11.286820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:57.602 [2024-10-15 13:52:11.286829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:57.602 [2024-10-15 13:52:11.286839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:57.602 [2024-10-15 13:52:11.286846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:57.602 [2024-10-15 13:52:11.286858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:57.602 [2024-10-15 13:52:11.286866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:57.602 [2024-10-15 13:52:11.286878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:57.602 [2024-10-15 13:52:11.286886] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:57.602 [2024-10-15 13:52:11.286896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:57.602 [2024-10-15 13:52:11.286904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:57.602 [2024-10-15 13:52:11.286913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:57.602 [2024-10-15 13:52:11.286930] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:57.602 [2024-10-15 13:52:11.286940] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 774d9371-fadf-4590-ab71-e29a26c3b56a 00:17:57.602 [2024-10-15 13:52:11.286949] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:57.602 [2024-10-15 13:52:11.286959] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:57.602 [2024-10-15 13:52:11.286966] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:57.602 [2024-10-15 13:52:11.286976] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:57.602 [2024-10-15 13:52:11.286987] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:57.602 [2024-10-15 13:52:11.286997] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:57.602 [2024-10-15 13:52:11.287006] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:57.602 [2024-10-15 13:52:11.287017] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:57.602 [2024-10-15 13:52:11.287024] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:57.602 [2024-10-15 13:52:11.287033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.602 [2024-10-15 13:52:11.287042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:57.602 [2024-10-15 13:52:11.287053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.115 ms 00:17:57.602 [2024-10-15 13:52:11.287060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.602 [2024-10-15 13:52:11.300884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.602 [2024-10-15 13:52:11.301050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:57.602 [2024-10-15 13:52:11.301077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.785 ms 00:17:57.602 [2024-10-15 13:52:11.301087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.602 [2024-10-15 13:52:11.301526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.602 [2024-10-15 13:52:11.301550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:57.602 [2024-10-15 13:52:11.301563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.400 ms 00:17:57.602 [2024-10-15 13:52:11.301571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.602 [2024-10-15 13:52:11.342397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:57.602 [2024-10-15 13:52:11.342447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:57.602 [2024-10-15 13:52:11.342469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:57.602 [2024-10-15 13:52:11.342477] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:17:57.602 [2024-10-15 13:52:11.342557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:57.602 [2024-10-15 13:52:11.342566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:57.602 [2024-10-15 13:52:11.342577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:57.602 [2024-10-15 13:52:11.342586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.602 [2024-10-15 13:52:11.342699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:57.602 [2024-10-15 13:52:11.342711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:57.602 [2024-10-15 13:52:11.342723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:57.602 [2024-10-15 13:52:11.342734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.602 [2024-10-15 13:52:11.342754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:57.602 [2024-10-15 13:52:11.342763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:57.602 [2024-10-15 13:52:11.342774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:57.602 [2024-10-15 13:52:11.342783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.863 [2024-10-15 13:52:11.434032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:57.863 [2024-10-15 13:52:11.434103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:57.863 [2024-10-15 13:52:11.434129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:57.863 [2024-10-15 13:52:11.434139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.863 [2024-10-15 13:52:11.508699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:57.863 [2024-10-15 13:52:11.508783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:57.863 [2024-10-15 13:52:11.508802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:57.863 [2024-10-15 13:52:11.508812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.863 [2024-10-15 13:52:11.508946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:57.863 [2024-10-15 13:52:11.508957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:57.863 [2024-10-15 13:52:11.508971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:57.863 [2024-10-15 13:52:11.508979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.863 [2024-10-15 13:52:11.509073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:57.863 [2024-10-15 13:52:11.509084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:57.863 [2024-10-15 13:52:11.509096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:57.863 [2024-10-15 13:52:11.509105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.863 [2024-10-15 13:52:11.509254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:57.863 [2024-10-15 13:52:11.509267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:57.863 [2024-10-15 13:52:11.509283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 
ms 00:17:57.863 [2024-10-15 13:52:11.509293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.863 [2024-10-15 13:52:11.509337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:57.863 [2024-10-15 13:52:11.509347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:57.863 [2024-10-15 13:52:11.509358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:57.863 [2024-10-15 13:52:11.509366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.863 [2024-10-15 13:52:11.509424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:57.863 [2024-10-15 13:52:11.509434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:57.863 [2024-10-15 13:52:11.509446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:57.863 [2024-10-15 13:52:11.509455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.863 [2024-10-15 13:52:11.509527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:57.863 [2024-10-15 13:52:11.509549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:57.863 [2024-10-15 13:52:11.509560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:57.863 [2024-10-15 13:52:11.509568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.863 [2024-10-15 13:52:11.509750] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 552.378 ms, result 0 00:17:57.863 true 00:17:57.863 13:52:11 ftl.ftl_bdevperf -- ftl/bdevperf.sh@36 -- # killprocess 73251 00:17:57.863 13:52:11 ftl.ftl_bdevperf -- common/autotest_common.sh@950 -- # '[' -z 73251 ']' 00:17:57.863 13:52:11 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # kill -0 73251 00:17:57.863 13:52:11 ftl.ftl_bdevperf -- common/autotest_common.sh@955 -- # uname 00:17:57.863 13:52:11 ftl.ftl_bdevperf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:57.863 13:52:11 ftl.ftl_bdevperf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73251 00:17:57.863 13:52:11 ftl.ftl_bdevperf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:57.863 13:52:11 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:57.863 13:52:11 ftl.ftl_bdevperf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73251' 00:17:57.863 killing process with pid 73251 00:17:57.863 Received shutdown signal, test time was about 4.000000 seconds 00:17:57.863 00:17:57.863 Latency(us) 00:17:57.863 [2024-10-15T13:52:11.651Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:57.863 [2024-10-15T13:52:11.651Z] =================================================================================================================== 00:17:57.863 [2024-10-15T13:52:11.651Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:57.863 13:52:11 ftl.ftl_bdevperf -- common/autotest_common.sh@969 -- # kill 73251 00:17:57.863 13:52:11 ftl.ftl_bdevperf -- common/autotest_common.sh@974 -- # wait 73251 00:18:01.183 13:52:14 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:18:01.183 Remove shared memory files 00:18:01.183 13:52:14 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # remove_shm 00:18:01.183 13:52:14 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files 00:18:01.183 13:52:14 
ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f 00:18:01.183 13:52:14 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f 00:18:01.183 13:52:14 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f 00:18:01.183 13:52:14 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:18:01.183 13:52:14 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f 00:18:01.183 ************************************ 00:18:01.183 END TEST ftl_bdevperf 00:18:01.183 ************************************ 00:18:01.183 00:18:01.183 real 0m23.671s 00:18:01.183 user 0m26.501s 00:18:01.183 sys 0m0.844s 00:18:01.183 13:52:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:01.183 13:52:14 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:18:01.183 13:52:14 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:18:01.183 13:52:14 ftl -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:18:01.183 13:52:14 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:01.183 13:52:14 ftl -- common/autotest_common.sh@10 -- # set +x 00:18:01.183 ************************************ 00:18:01.183 START TEST ftl_trim 00:18:01.183 ************************************ 00:18:01.183 13:52:14 ftl.ftl_trim -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:18:01.183 * Looking for test storage... 00:18:01.183 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:18:01.183 13:52:14 ftl.ftl_trim -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:18:01.183 13:52:14 ftl.ftl_trim -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:18:01.183 13:52:14 ftl.ftl_trim -- common/autotest_common.sh@1691 -- # lcov --version 00:18:01.183 13:52:14 ftl.ftl_trim -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:18:01.183 13:52:14 ftl.ftl_trim -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:01.183 13:52:14 ftl.ftl_trim -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:01.183 13:52:14 ftl.ftl_trim -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:01.183 13:52:14 ftl.ftl_trim -- scripts/common.sh@336 -- # IFS=.-: 00:18:01.183 13:52:14 ftl.ftl_trim -- scripts/common.sh@336 -- # read -ra ver1 00:18:01.183 13:52:14 ftl.ftl_trim -- scripts/common.sh@337 -- # IFS=.-: 00:18:01.183 13:52:14 ftl.ftl_trim -- scripts/common.sh@337 -- # read -ra ver2 00:18:01.183 13:52:14 ftl.ftl_trim -- scripts/common.sh@338 -- # local 'op=<' 00:18:01.183 13:52:14 ftl.ftl_trim -- scripts/common.sh@340 -- # ver1_l=2 00:18:01.183 13:52:14 ftl.ftl_trim -- scripts/common.sh@341 -- # ver2_l=1 00:18:01.183 13:52:14 ftl.ftl_trim -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:01.183 13:52:14 ftl.ftl_trim -- scripts/common.sh@344 -- # case "$op" in 00:18:01.183 13:52:14 ftl.ftl_trim -- scripts/common.sh@345 -- # : 1 00:18:01.183 13:52:14 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:01.183 13:52:14 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:01.183 13:52:14 ftl.ftl_trim -- scripts/common.sh@365 -- # decimal 1 00:18:01.183 13:52:14 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=1 00:18:01.183 13:52:14 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:01.183 13:52:14 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 1 00:18:01.183 13:52:14 ftl.ftl_trim -- scripts/common.sh@365 -- # ver1[v]=1 00:18:01.183 13:52:14 ftl.ftl_trim -- scripts/common.sh@366 -- # decimal 2 00:18:01.183 13:52:14 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=2 00:18:01.183 13:52:14 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:01.183 13:52:14 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 2 00:18:01.183 13:52:14 ftl.ftl_trim -- scripts/common.sh@366 -- # ver2[v]=2 00:18:01.183 13:52:14 ftl.ftl_trim -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:01.183 13:52:14 ftl.ftl_trim -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:01.183 13:52:14 ftl.ftl_trim -- scripts/common.sh@368 -- # return 0 00:18:01.183 13:52:14 ftl.ftl_trim -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:01.183 13:52:14 ftl.ftl_trim -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:18:01.183 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:01.183 --rc genhtml_branch_coverage=1 00:18:01.183 --rc genhtml_function_coverage=1 00:18:01.183 --rc genhtml_legend=1 00:18:01.183 --rc geninfo_all_blocks=1 00:18:01.183 --rc geninfo_unexecuted_blocks=1 00:18:01.183 00:18:01.183 ' 00:18:01.183 13:52:14 ftl.ftl_trim -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:18:01.183 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:01.183 --rc genhtml_branch_coverage=1 00:18:01.183 --rc genhtml_function_coverage=1 00:18:01.183 --rc genhtml_legend=1 00:18:01.183 --rc geninfo_all_blocks=1 00:18:01.183 --rc geninfo_unexecuted_blocks=1 00:18:01.183 00:18:01.183 ' 00:18:01.183 13:52:14 ftl.ftl_trim -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:18:01.183 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:01.183 --rc genhtml_branch_coverage=1 00:18:01.183 --rc genhtml_function_coverage=1 00:18:01.183 --rc genhtml_legend=1 00:18:01.183 --rc geninfo_all_blocks=1 00:18:01.183 --rc geninfo_unexecuted_blocks=1 00:18:01.183 00:18:01.183 ' 00:18:01.183 13:52:14 ftl.ftl_trim -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:18:01.183 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:01.183 --rc genhtml_branch_coverage=1 00:18:01.183 --rc genhtml_function_coverage=1 00:18:01.183 --rc genhtml_legend=1 00:18:01.183 --rc geninfo_all_blocks=1 00:18:01.183 --rc geninfo_unexecuted_blocks=1 00:18:01.183 00:18:01.183 ' 00:18:01.183 13:52:14 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:18:01.184 13:52:14 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:18:01.184 13:52:14 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:18:01.184 13:52:14 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:18:01.184 13:52:14 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
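[editor's note] The xtrace above steps through scripts/common.sh deciding whether the installed lcov predates 2.x before enabling the branch/function coverage flags: `lt 1.15 2` delegates to `cmp_versions`, which splits both versions on `.-:` and compares the fields numerically. A minimal standalone sketch of that comparison, assuming purely numeric fields — the function name `ver_lt` is illustrative, not the repo's code:

    #!/usr/bin/env bash
    # Illustrative sketch (not the SPDK source): compare two dotted versions
    # field by field, mirroring the cmp_versions trace above, where
    # `lt 1.15 2` splits on ".-:" and compares each field numerically.
    ver_lt() {
        local IFS=.-:
        local -a v1=($1) v2=($2)
        local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for (( i = 0; i < n; i++ )); do
            local a=${v1[i]:-0} b=${v2[i]:-0}   # missing fields default to 0
            (( a < b )) && return 0             # strictly lower -> true
            (( a > b )) && return 1             # strictly higher -> false
        done
        return 1                                # equal -> not lower
    }

    ver_lt 1.15 2 && echo "1.15 < 2"            # matches the trace: lt returns 0

As in the trace, equal prefixes fall through to the next field, and the shorter version is padded with zeros, so `1.15 < 2` holds and the lcov coverage options get enabled.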
00:18:01.184 13:52:14 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:18:01.184 13:52:14 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:01.184 13:52:14 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:18:01.184 13:52:14 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:18:01.184 13:52:14 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:01.184 13:52:14 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:01.184 13:52:14 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:18:01.184 13:52:14 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:18:01.184 13:52:14 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:01.184 13:52:14 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:01.184 13:52:14 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:18:01.184 13:52:14 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:18:01.184 13:52:14 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:01.184 13:52:14 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:01.184 13:52:14 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:18:01.184 13:52:14 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:18:01.184 13:52:14 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:01.184 13:52:14 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:01.184 13:52:14 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:01.184 13:52:14 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:01.184 13:52:14 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:18:01.184 13:52:14 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:18:01.184 13:52:14 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:01.184 13:52:14 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:01.184 13:52:14 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:01.184 13:52:14 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:18:01.184 13:52:14 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:18:01.184 13:52:14 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:18:01.184 13:52:14 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:18:01.184 13:52:14 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:18:01.184 13:52:14 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:18:01.184 13:52:14 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:18:01.184 13:52:14 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:18:01.184 13:52:14 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:01.184 13:52:14 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:01.184 13:52:14 ftl.ftl_trim -- 
ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:18:01.184 13:52:14 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=73597 00:18:01.184 13:52:14 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 73597 00:18:01.184 13:52:14 ftl.ftl_trim -- common/autotest_common.sh@831 -- # '[' -z 73597 ']' 00:18:01.184 13:52:14 ftl.ftl_trim -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:01.184 13:52:14 ftl.ftl_trim -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:01.184 13:52:14 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:18:01.184 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:01.184 13:52:14 ftl.ftl_trim -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:01.184 13:52:14 ftl.ftl_trim -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:01.184 13:52:14 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:18:01.445 [2024-10-15 13:52:15.007784] Starting SPDK v25.01-pre git sha1 5a8c76d99 / DPDK 24.03.0 initialization... 00:18:01.445 [2024-10-15 13:52:15.008027] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73597 ] 00:18:01.445 [2024-10-15 13:52:15.150906] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:01.706 [2024-10-15 13:52:15.270772] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:01.706 [2024-10-15 13:52:15.270996] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:01.706 [2024-10-15 13:52:15.271053] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:02.278 13:52:15 ftl.ftl_trim -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:02.278 13:52:15 ftl.ftl_trim -- common/autotest_common.sh@864 -- # return 0 00:18:02.278 13:52:15 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:18:02.278 13:52:15 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:18:02.278 13:52:15 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:18:02.278 13:52:15 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:18:02.278 13:52:15 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:18:02.278 13:52:15 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:18:02.539 13:52:16 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:18:02.539 13:52:16 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:18:02.539 13:52:16 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:18:02.539 13:52:16 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:18:02.539 13:52:16 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:18:02.539 13:52:16 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:18:02.539 13:52:16 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:18:02.539 13:52:16 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:18:02.799 13:52:16 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:18:02.799 { 00:18:02.799 "name": "nvme0n1", 00:18:02.799 "aliases": [ 
00:18:02.799 "ad0ee8ae-6b17-4aee-994c-79b978418044" 00:18:02.799 ], 00:18:02.799 "product_name": "NVMe disk", 00:18:02.799 "block_size": 4096, 00:18:02.799 "num_blocks": 1310720, 00:18:02.799 "uuid": "ad0ee8ae-6b17-4aee-994c-79b978418044", 00:18:02.799 "numa_id": -1, 00:18:02.799 "assigned_rate_limits": { 00:18:02.799 "rw_ios_per_sec": 0, 00:18:02.799 "rw_mbytes_per_sec": 0, 00:18:02.799 "r_mbytes_per_sec": 0, 00:18:02.799 "w_mbytes_per_sec": 0 00:18:02.799 }, 00:18:02.799 "claimed": true, 00:18:02.799 "claim_type": "read_many_write_one", 00:18:02.799 "zoned": false, 00:18:02.799 "supported_io_types": { 00:18:02.799 "read": true, 00:18:02.799 "write": true, 00:18:02.799 "unmap": true, 00:18:02.799 "flush": true, 00:18:02.799 "reset": true, 00:18:02.799 "nvme_admin": true, 00:18:02.799 "nvme_io": true, 00:18:02.799 "nvme_io_md": false, 00:18:02.799 "write_zeroes": true, 00:18:02.799 "zcopy": false, 00:18:02.799 "get_zone_info": false, 00:18:02.799 "zone_management": false, 00:18:02.799 "zone_append": false, 00:18:02.799 "compare": true, 00:18:02.799 "compare_and_write": false, 00:18:02.799 "abort": true, 00:18:02.799 "seek_hole": false, 00:18:02.799 "seek_data": false, 00:18:02.799 "copy": true, 00:18:02.799 "nvme_iov_md": false 00:18:02.799 }, 00:18:02.799 "driver_specific": { 00:18:02.799 "nvme": [ 00:18:02.799 { 00:18:02.799 "pci_address": "0000:00:11.0", 00:18:02.799 "trid": { 00:18:02.799 "trtype": "PCIe", 00:18:02.799 "traddr": "0000:00:11.0" 00:18:02.799 }, 00:18:02.799 "ctrlr_data": { 00:18:02.799 "cntlid": 0, 00:18:02.799 "vendor_id": "0x1b36", 00:18:02.799 "model_number": "QEMU NVMe Ctrl", 00:18:02.799 "serial_number": "12341", 00:18:02.799 "firmware_revision": "8.0.0", 00:18:02.799 "subnqn": "nqn.2019-08.org.qemu:12341", 00:18:02.799 "oacs": { 00:18:02.799 "security": 0, 00:18:02.799 "format": 1, 00:18:02.799 "firmware": 0, 00:18:02.799 "ns_manage": 1 00:18:02.799 }, 00:18:02.799 "multi_ctrlr": false, 00:18:02.799 "ana_reporting": false 00:18:02.799 }, 00:18:02.799 "vs": { 00:18:02.799 "nvme_version": "1.4" 00:18:02.799 }, 00:18:02.799 "ns_data": { 00:18:02.799 "id": 1, 00:18:02.799 "can_share": false 00:18:02.799 } 00:18:02.799 } 00:18:02.799 ], 00:18:02.799 "mp_policy": "active_passive" 00:18:02.799 } 00:18:02.799 } 00:18:02.799 ]' 00:18:02.799 13:52:16 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:18:02.799 13:52:16 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:18:02.799 13:52:16 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:18:02.799 13:52:16 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=1310720 00:18:02.799 13:52:16 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:18:02.799 13:52:16 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 5120 00:18:02.799 13:52:16 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:18:02.799 13:52:16 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:18:02.799 13:52:16 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:18:02.799 13:52:16 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:18:02.799 13:52:16 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:18:03.059 13:52:16 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=6bc799bd-f27f-442e-b51b-e3cea4af014d 00:18:03.059 13:52:16 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:18:03.060 13:52:16 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete_lvstore -u 6bc799bd-f27f-442e-b51b-e3cea4af014d 00:18:03.319 13:52:16 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:18:03.579 13:52:17 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=7713ee78-7a7e-4418-b50d-a86f814204dd 00:18:03.579 13:52:17 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 7713ee78-7a7e-4418-b50d-a86f814204dd 00:18:03.840 13:52:17 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=7540de88-a974-407e-987f-03c9940fc0d8 00:18:03.840 13:52:17 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 7540de88-a974-407e-987f-03c9940fc0d8 00:18:03.840 13:52:17 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:18:03.840 13:52:17 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:18:03.840 13:52:17 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=7540de88-a974-407e-987f-03c9940fc0d8 00:18:03.840 13:52:17 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:18:03.840 13:52:17 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size 7540de88-a974-407e-987f-03c9940fc0d8 00:18:03.840 13:52:17 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=7540de88-a974-407e-987f-03c9940fc0d8 00:18:03.840 13:52:17 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:18:03.840 13:52:17 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:18:03.840 13:52:17 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:18:03.840 13:52:17 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 7540de88-a974-407e-987f-03c9940fc0d8 00:18:04.100 13:52:17 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:18:04.100 { 00:18:04.100 "name": "7540de88-a974-407e-987f-03c9940fc0d8", 00:18:04.100 "aliases": [ 00:18:04.100 "lvs/nvme0n1p0" 00:18:04.100 ], 00:18:04.100 "product_name": "Logical Volume", 00:18:04.100 "block_size": 4096, 00:18:04.100 "num_blocks": 26476544, 00:18:04.100 "uuid": "7540de88-a974-407e-987f-03c9940fc0d8", 00:18:04.100 "assigned_rate_limits": { 00:18:04.100 "rw_ios_per_sec": 0, 00:18:04.100 "rw_mbytes_per_sec": 0, 00:18:04.100 "r_mbytes_per_sec": 0, 00:18:04.100 "w_mbytes_per_sec": 0 00:18:04.100 }, 00:18:04.100 "claimed": false, 00:18:04.100 "zoned": false, 00:18:04.100 "supported_io_types": { 00:18:04.100 "read": true, 00:18:04.100 "write": true, 00:18:04.100 "unmap": true, 00:18:04.100 "flush": false, 00:18:04.100 "reset": true, 00:18:04.100 "nvme_admin": false, 00:18:04.100 "nvme_io": false, 00:18:04.100 "nvme_io_md": false, 00:18:04.100 "write_zeroes": true, 00:18:04.100 "zcopy": false, 00:18:04.100 "get_zone_info": false, 00:18:04.100 "zone_management": false, 00:18:04.100 "zone_append": false, 00:18:04.100 "compare": false, 00:18:04.100 "compare_and_write": false, 00:18:04.100 "abort": false, 00:18:04.100 "seek_hole": true, 00:18:04.100 "seek_data": true, 00:18:04.100 "copy": false, 00:18:04.100 "nvme_iov_md": false 00:18:04.100 }, 00:18:04.100 "driver_specific": { 00:18:04.100 "lvol": { 00:18:04.100 "lvol_store_uuid": "7713ee78-7a7e-4418-b50d-a86f814204dd", 00:18:04.100 "base_bdev": "nvme0n1", 00:18:04.100 "thin_provision": true, 00:18:04.100 "num_allocated_clusters": 0, 00:18:04.100 "snapshot": false, 00:18:04.100 "clone": false, 00:18:04.100 "esnap_clone": false 00:18:04.100 } 00:18:04.100 } 00:18:04.100 } 00:18:04.100 ]' 00:18:04.100 13:52:17 ftl.ftl_trim -- 
common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:18:04.100 13:52:17 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:18:04.100 13:52:17 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:18:04.100 13:52:17 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=26476544 00:18:04.100 13:52:17 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:18:04.100 13:52:17 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 103424 00:18:04.100 13:52:17 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:18:04.100 13:52:17 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:18:04.100 13:52:17 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:18:04.358 13:52:17 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:18:04.359 13:52:17 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:18:04.359 13:52:17 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size 7540de88-a974-407e-987f-03c9940fc0d8 00:18:04.359 13:52:17 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=7540de88-a974-407e-987f-03c9940fc0d8 00:18:04.359 13:52:17 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:18:04.359 13:52:17 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:18:04.359 13:52:17 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:18:04.359 13:52:17 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 7540de88-a974-407e-987f-03c9940fc0d8 00:18:04.617 13:52:18 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:18:04.617 { 00:18:04.617 "name": "7540de88-a974-407e-987f-03c9940fc0d8", 00:18:04.617 "aliases": [ 00:18:04.617 "lvs/nvme0n1p0" 00:18:04.617 ], 00:18:04.617 "product_name": "Logical Volume", 00:18:04.617 "block_size": 4096, 00:18:04.617 "num_blocks": 26476544, 00:18:04.617 "uuid": "7540de88-a974-407e-987f-03c9940fc0d8", 00:18:04.617 "assigned_rate_limits": { 00:18:04.617 "rw_ios_per_sec": 0, 00:18:04.617 "rw_mbytes_per_sec": 0, 00:18:04.617 "r_mbytes_per_sec": 0, 00:18:04.617 "w_mbytes_per_sec": 0 00:18:04.617 }, 00:18:04.617 "claimed": false, 00:18:04.617 "zoned": false, 00:18:04.617 "supported_io_types": { 00:18:04.617 "read": true, 00:18:04.617 "write": true, 00:18:04.617 "unmap": true, 00:18:04.617 "flush": false, 00:18:04.617 "reset": true, 00:18:04.617 "nvme_admin": false, 00:18:04.617 "nvme_io": false, 00:18:04.617 "nvme_io_md": false, 00:18:04.617 "write_zeroes": true, 00:18:04.617 "zcopy": false, 00:18:04.617 "get_zone_info": false, 00:18:04.617 "zone_management": false, 00:18:04.617 "zone_append": false, 00:18:04.617 "compare": false, 00:18:04.617 "compare_and_write": false, 00:18:04.617 "abort": false, 00:18:04.617 "seek_hole": true, 00:18:04.617 "seek_data": true, 00:18:04.617 "copy": false, 00:18:04.617 "nvme_iov_md": false 00:18:04.617 }, 00:18:04.617 "driver_specific": { 00:18:04.617 "lvol": { 00:18:04.617 "lvol_store_uuid": "7713ee78-7a7e-4418-b50d-a86f814204dd", 00:18:04.617 "base_bdev": "nvme0n1", 00:18:04.617 "thin_provision": true, 00:18:04.617 "num_allocated_clusters": 0, 00:18:04.617 "snapshot": false, 00:18:04.617 "clone": false, 00:18:04.617 "esnap_clone": false 00:18:04.617 } 00:18:04.617 } 00:18:04.617 } 00:18:04.617 ]' 00:18:04.617 13:52:18 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:18:04.617 13:52:18 ftl.ftl_trim -- 
common/autotest_common.sh@1383 -- # bs=4096 00:18:04.617 13:52:18 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:18:04.617 13:52:18 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=26476544 00:18:04.617 13:52:18 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:18:04.617 13:52:18 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 103424 00:18:04.617 13:52:18 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:18:04.617 13:52:18 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:18:04.875 13:52:18 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:18:04.875 13:52:18 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:18:04.875 13:52:18 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size 7540de88-a974-407e-987f-03c9940fc0d8 00:18:04.875 13:52:18 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=7540de88-a974-407e-987f-03c9940fc0d8 00:18:04.875 13:52:18 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:18:04.875 13:52:18 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:18:04.875 13:52:18 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:18:04.875 13:52:18 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 7540de88-a974-407e-987f-03c9940fc0d8 00:18:05.133 13:52:18 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:18:05.133 { 00:18:05.133 "name": "7540de88-a974-407e-987f-03c9940fc0d8", 00:18:05.133 "aliases": [ 00:18:05.133 "lvs/nvme0n1p0" 00:18:05.133 ], 00:18:05.133 "product_name": "Logical Volume", 00:18:05.133 "block_size": 4096, 00:18:05.133 "num_blocks": 26476544, 00:18:05.133 "uuid": "7540de88-a974-407e-987f-03c9940fc0d8", 00:18:05.133 "assigned_rate_limits": { 00:18:05.133 "rw_ios_per_sec": 0, 00:18:05.133 "rw_mbytes_per_sec": 0, 00:18:05.133 "r_mbytes_per_sec": 0, 00:18:05.133 "w_mbytes_per_sec": 0 00:18:05.133 }, 00:18:05.133 "claimed": false, 00:18:05.133 "zoned": false, 00:18:05.133 "supported_io_types": { 00:18:05.133 "read": true, 00:18:05.133 "write": true, 00:18:05.133 "unmap": true, 00:18:05.133 "flush": false, 00:18:05.133 "reset": true, 00:18:05.133 "nvme_admin": false, 00:18:05.133 "nvme_io": false, 00:18:05.133 "nvme_io_md": false, 00:18:05.133 "write_zeroes": true, 00:18:05.133 "zcopy": false, 00:18:05.133 "get_zone_info": false, 00:18:05.133 "zone_management": false, 00:18:05.133 "zone_append": false, 00:18:05.133 "compare": false, 00:18:05.133 "compare_and_write": false, 00:18:05.133 "abort": false, 00:18:05.133 "seek_hole": true, 00:18:05.133 "seek_data": true, 00:18:05.133 "copy": false, 00:18:05.133 "nvme_iov_md": false 00:18:05.133 }, 00:18:05.133 "driver_specific": { 00:18:05.133 "lvol": { 00:18:05.133 "lvol_store_uuid": "7713ee78-7a7e-4418-b50d-a86f814204dd", 00:18:05.133 "base_bdev": "nvme0n1", 00:18:05.133 "thin_provision": true, 00:18:05.133 "num_allocated_clusters": 0, 00:18:05.133 "snapshot": false, 00:18:05.133 "clone": false, 00:18:05.133 "esnap_clone": false 00:18:05.133 } 00:18:05.133 } 00:18:05.133 } 00:18:05.133 ]' 00:18:05.133 13:52:18 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:18:05.133 13:52:18 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:18:05.133 13:52:18 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:18:05.133 13:52:18 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # 
nb=26476544 00:18:05.133 13:52:18 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:18:05.133 13:52:18 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 103424 00:18:05.133 13:52:18 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:18:05.133 13:52:18 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 7540de88-a974-407e-987f-03c9940fc0d8 -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:18:05.391 [2024-10-15 13:52:18.935284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.391 [2024-10-15 13:52:18.935337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:05.391 [2024-10-15 13:52:18.935353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:05.391 [2024-10-15 13:52:18.935361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.391 [2024-10-15 13:52:18.937838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.391 [2024-10-15 13:52:18.937868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:05.391 [2024-10-15 13:52:18.937880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.453 ms 00:18:05.391 [2024-10-15 13:52:18.937887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.391 [2024-10-15 13:52:18.937998] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:05.391 [2024-10-15 13:52:18.938621] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:05.391 [2024-10-15 13:52:18.938647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.391 [2024-10-15 13:52:18.938654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:05.391 [2024-10-15 13:52:18.938663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.656 ms 00:18:05.391 [2024-10-15 13:52:18.938670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.391 [2024-10-15 13:52:18.938802] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 6f658c17-becd-47e5-8bb1-b864f80d9f09 00:18:05.391 [2024-10-15 13:52:18.940111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.391 [2024-10-15 13:52:18.940141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:18:05.391 [2024-10-15 13:52:18.940152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:18:05.391 [2024-10-15 13:52:18.940162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.391 [2024-10-15 13:52:18.947105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.391 [2024-10-15 13:52:18.947129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:05.391 [2024-10-15 13:52:18.947137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.864 ms 00:18:05.391 [2024-10-15 13:52:18.947145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.391 [2024-10-15 13:52:18.947261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.391 [2024-10-15 13:52:18.947277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:05.391 [2024-10-15 13:52:18.947284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.072 ms 00:18:05.391 [2024-10-15 13:52:18.947294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.391 [2024-10-15 13:52:18.947332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.391 [2024-10-15 13:52:18.947344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:05.391 [2024-10-15 13:52:18.947351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:18:05.391 [2024-10-15 13:52:18.947359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.391 [2024-10-15 13:52:18.947388] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:18:05.391 [2024-10-15 13:52:18.950643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.391 [2024-10-15 13:52:18.950670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:05.391 [2024-10-15 13:52:18.950679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.258 ms 00:18:05.391 [2024-10-15 13:52:18.950685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.391 [2024-10-15 13:52:18.950739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.391 [2024-10-15 13:52:18.950747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:05.391 [2024-10-15 13:52:18.950755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:18:05.391 [2024-10-15 13:52:18.950774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.391 [2024-10-15 13:52:18.950798] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:18:05.391 [2024-10-15 13:52:18.950912] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:05.392 [2024-10-15 13:52:18.950930] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:05.392 [2024-10-15 13:52:18.950938] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:18:05.392 [2024-10-15 13:52:18.950948] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:05.392 [2024-10-15 13:52:18.950956] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:05.392 [2024-10-15 13:52:18.950964] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:18:05.392 [2024-10-15 13:52:18.950970] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:05.392 [2024-10-15 13:52:18.950978] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:05.392 [2024-10-15 13:52:18.950984] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:05.392 [2024-10-15 13:52:18.950991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.392 [2024-10-15 13:52:18.950997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:05.392 [2024-10-15 13:52:18.951006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.195 ms 00:18:05.392 [2024-10-15 13:52:18.951012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.392 [2024-10-15 13:52:18.951096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.392 
[2024-10-15 13:52:18.951107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:05.392 [2024-10-15 13:52:18.951115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:18:05.392 [2024-10-15 13:52:18.951121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.392 [2024-10-15 13:52:18.951229] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:05.392 [2024-10-15 13:52:18.951237] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:05.392 [2024-10-15 13:52:18.951245] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:05.392 [2024-10-15 13:52:18.951253] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:05.392 [2024-10-15 13:52:18.951261] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:05.392 [2024-10-15 13:52:18.951266] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:05.392 [2024-10-15 13:52:18.951273] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:18:05.392 [2024-10-15 13:52:18.951278] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:05.392 [2024-10-15 13:52:18.951286] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:18:05.392 [2024-10-15 13:52:18.951291] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:05.392 [2024-10-15 13:52:18.951299] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:05.392 [2024-10-15 13:52:18.951304] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:18:05.392 [2024-10-15 13:52:18.951311] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:05.392 [2024-10-15 13:52:18.951316] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:05.392 [2024-10-15 13:52:18.951323] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:18:05.392 [2024-10-15 13:52:18.951328] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:05.392 [2024-10-15 13:52:18.951337] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:05.392 [2024-10-15 13:52:18.951342] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:18:05.392 [2024-10-15 13:52:18.951349] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:05.392 [2024-10-15 13:52:18.951354] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:05.392 [2024-10-15 13:52:18.951362] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:18:05.392 [2024-10-15 13:52:18.951367] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:05.392 [2024-10-15 13:52:18.951373] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:05.392 [2024-10-15 13:52:18.951378] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:18:05.392 [2024-10-15 13:52:18.951385] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:05.392 [2024-10-15 13:52:18.951390] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:05.392 [2024-10-15 13:52:18.951396] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:18:05.392 [2024-10-15 13:52:18.951401] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:05.392 [2024-10-15 13:52:18.951408] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] 
Region p2l3 00:18:05.392 [2024-10-15 13:52:18.951413] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:18:05.392 [2024-10-15 13:52:18.951420] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:05.392 [2024-10-15 13:52:18.951425] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:05.392 [2024-10-15 13:52:18.951433] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:18:05.392 [2024-10-15 13:52:18.951438] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:05.392 [2024-10-15 13:52:18.951445] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:05.392 [2024-10-15 13:52:18.951450] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:18:05.392 [2024-10-15 13:52:18.951456] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:05.392 [2024-10-15 13:52:18.951461] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:05.392 [2024-10-15 13:52:18.951468] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:18:05.392 [2024-10-15 13:52:18.951473] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:05.392 [2024-10-15 13:52:18.951486] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:05.392 [2024-10-15 13:52:18.951491] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:18:05.392 [2024-10-15 13:52:18.951497] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:05.392 [2024-10-15 13:52:18.951502] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:05.392 [2024-10-15 13:52:18.951509] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:05.392 [2024-10-15 13:52:18.951515] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:05.392 [2024-10-15 13:52:18.951522] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:05.392 [2024-10-15 13:52:18.951528] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:05.392 [2024-10-15 13:52:18.951537] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:05.392 [2024-10-15 13:52:18.951542] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:05.392 [2024-10-15 13:52:18.951549] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:05.392 [2024-10-15 13:52:18.951554] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:05.392 [2024-10-15 13:52:18.951561] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:05.392 [2024-10-15 13:52:18.951569] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:05.392 [2024-10-15 13:52:18.951578] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:05.392 [2024-10-15 13:52:18.951585] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:18:05.392 [2024-10-15 13:52:18.951592] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:18:05.392 [2024-10-15 13:52:18.951598] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 
blk_sz:0x80 00:18:05.392 [2024-10-15 13:52:18.951605] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:18:05.392 [2024-10-15 13:52:18.951610] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:18:05.392 [2024-10-15 13:52:18.951617] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:18:05.392 [2024-10-15 13:52:18.951622] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:18:05.392 [2024-10-15 13:52:18.951629] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:18:05.392 [2024-10-15 13:52:18.951634] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:18:05.392 [2024-10-15 13:52:18.951643] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:18:05.392 [2024-10-15 13:52:18.951648] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:18:05.392 [2024-10-15 13:52:18.951655] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:18:05.392 [2024-10-15 13:52:18.951660] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:18:05.392 [2024-10-15 13:52:18.951667] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:18:05.392 [2024-10-15 13:52:18.951673] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:05.392 [2024-10-15 13:52:18.951680] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:05.392 [2024-10-15 13:52:18.951686] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:05.392 [2024-10-15 13:52:18.951698] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:05.392 [2024-10-15 13:52:18.951704] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:05.392 [2024-10-15 13:52:18.951711] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:05.392 [2024-10-15 13:52:18.951717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.392 [2024-10-15 13:52:18.951728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:05.392 [2024-10-15 13:52:18.951734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.553 ms 00:18:05.392 [2024-10-15 13:52:18.951741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.392 [2024-10-15 13:52:18.951822] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region 
needs scrubbing, this may take a while. 00:18:05.392 [2024-10-15 13:52:18.951834] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:18:07.920 [2024-10-15 13:52:21.520804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.920 [2024-10-15 13:52:21.520875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:18:07.920 [2024-10-15 13:52:21.520894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2568.969 ms 00:18:07.920 [2024-10-15 13:52:21.520905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.920 [2024-10-15 13:52:21.548996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.920 [2024-10-15 13:52:21.549050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:07.920 [2024-10-15 13:52:21.549064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.797 ms 00:18:07.920 [2024-10-15 13:52:21.549075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.920 [2024-10-15 13:52:21.549231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.920 [2024-10-15 13:52:21.549244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:07.920 [2024-10-15 13:52:21.549253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:18:07.920 [2024-10-15 13:52:21.549265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.920 [2024-10-15 13:52:21.594270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.920 [2024-10-15 13:52:21.594333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:07.920 [2024-10-15 13:52:21.594352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.953 ms 00:18:07.920 [2024-10-15 13:52:21.594371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.920 [2024-10-15 13:52:21.594510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.920 [2024-10-15 13:52:21.594530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:07.920 [2024-10-15 13:52:21.594542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:07.920 [2024-10-15 13:52:21.594555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.920 [2024-10-15 13:52:21.595006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.920 [2024-10-15 13:52:21.595038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:07.920 [2024-10-15 13:52:21.595051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.403 ms 00:18:07.920 [2024-10-15 13:52:21.595066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.920 [2024-10-15 13:52:21.595244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.920 [2024-10-15 13:52:21.595265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:07.920 [2024-10-15 13:52:21.595277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.132 ms 00:18:07.920 [2024-10-15 13:52:21.595293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.920 [2024-10-15 13:52:21.612253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.920 [2024-10-15 13:52:21.612311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
reloc 00:18:07.920 [2024-10-15 13:52:21.612322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.900 ms 00:18:07.920 [2024-10-15 13:52:21.612333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.920 [2024-10-15 13:52:21.624598] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:18:07.920 [2024-10-15 13:52:21.641770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.920 [2024-10-15 13:52:21.641801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:07.920 [2024-10-15 13:52:21.641815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.334 ms 00:18:07.920 [2024-10-15 13:52:21.641824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.178 [2024-10-15 13:52:21.709048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.178 [2024-10-15 13:52:21.709085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:18:08.178 [2024-10-15 13:52:21.709099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 67.137 ms 00:18:08.178 [2024-10-15 13:52:21.709110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.178 [2024-10-15 13:52:21.709334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.178 [2024-10-15 13:52:21.709352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:08.178 [2024-10-15 13:52:21.709366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.159 ms 00:18:08.178 [2024-10-15 13:52:21.709375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.178 [2024-10-15 13:52:21.731876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.178 [2024-10-15 13:52:21.731906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:18:08.178 [2024-10-15 13:52:21.731922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.465 ms 00:18:08.178 [2024-10-15 13:52:21.731945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.178 [2024-10-15 13:52:21.753889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.178 [2024-10-15 13:52:21.753918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:18:08.178 [2024-10-15 13:52:21.753931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.895 ms 00:18:08.178 [2024-10-15 13:52:21.753939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.178 [2024-10-15 13:52:21.754544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.178 [2024-10-15 13:52:21.754563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:08.178 [2024-10-15 13:52:21.754575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.560 ms 00:18:08.178 [2024-10-15 13:52:21.754582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.178 [2024-10-15 13:52:21.822395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.178 [2024-10-15 13:52:21.822428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:18:08.178 [2024-10-15 13:52:21.822443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 67.782 ms 00:18:08.178 [2024-10-15 13:52:21.822451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:18:08.178 [2024-10-15 13:52:21.846814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.178 [2024-10-15 13:52:21.846846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:18:08.178 [2024-10-15 13:52:21.846859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.257 ms 00:18:08.178 [2024-10-15 13:52:21.846868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.178 [2024-10-15 13:52:21.869777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.178 [2024-10-15 13:52:21.869808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:18:08.178 [2024-10-15 13:52:21.869820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.852 ms 00:18:08.178 [2024-10-15 13:52:21.869827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.178 [2024-10-15 13:52:21.892633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.178 [2024-10-15 13:52:21.892664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:08.178 [2024-10-15 13:52:21.892675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.732 ms 00:18:08.178 [2024-10-15 13:52:21.892698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.178 [2024-10-15 13:52:21.892763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.178 [2024-10-15 13:52:21.892775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:08.178 [2024-10-15 13:52:21.892787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:08.178 [2024-10-15 13:52:21.892795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.178 [2024-10-15 13:52:21.892878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.178 [2024-10-15 13:52:21.892887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:08.178 [2024-10-15 13:52:21.892897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:18:08.178 [2024-10-15 13:52:21.892904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.178 [2024-10-15 13:52:21.893805] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:08.179 [2024-10-15 13:52:21.896871] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2958.231 ms, result 0 00:18:08.179 [2024-10-15 13:52:21.897787] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:08.179 { 00:18:08.179 "name": "ftl0", 00:18:08.179 "uuid": "6f658c17-becd-47e5-8bb1-b864f80d9f09" 00:18:08.179 } 00:18:08.179 13:52:21 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:18:08.179 13:52:21 ftl.ftl_trim -- common/autotest_common.sh@899 -- # local bdev_name=ftl0 00:18:08.179 13:52:21 ftl.ftl_trim -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:18:08.179 13:52:21 ftl.ftl_trim -- common/autotest_common.sh@901 -- # local i 00:18:08.179 13:52:21 ftl.ftl_trim -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:08.179 13:52:21 ftl.ftl_trim -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:08.179 13:52:21 ftl.ftl_trim -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:18:08.437 13:52:22 ftl.ftl_trim -- 
common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:18:08.751 [ 00:18:08.751 { 00:18:08.751 "name": "ftl0", 00:18:08.751 "aliases": [ 00:18:08.751 "6f658c17-becd-47e5-8bb1-b864f80d9f09" 00:18:08.751 ], 00:18:08.751 "product_name": "FTL disk", 00:18:08.751 "block_size": 4096, 00:18:08.751 "num_blocks": 23592960, 00:18:08.751 "uuid": "6f658c17-becd-47e5-8bb1-b864f80d9f09", 00:18:08.751 "assigned_rate_limits": { 00:18:08.751 "rw_ios_per_sec": 0, 00:18:08.751 "rw_mbytes_per_sec": 0, 00:18:08.751 "r_mbytes_per_sec": 0, 00:18:08.751 "w_mbytes_per_sec": 0 00:18:08.751 }, 00:18:08.751 "claimed": false, 00:18:08.751 "zoned": false, 00:18:08.751 "supported_io_types": { 00:18:08.751 "read": true, 00:18:08.751 "write": true, 00:18:08.751 "unmap": true, 00:18:08.751 "flush": true, 00:18:08.751 "reset": false, 00:18:08.751 "nvme_admin": false, 00:18:08.751 "nvme_io": false, 00:18:08.752 "nvme_io_md": false, 00:18:08.752 "write_zeroes": true, 00:18:08.752 "zcopy": false, 00:18:08.752 "get_zone_info": false, 00:18:08.752 "zone_management": false, 00:18:08.752 "zone_append": false, 00:18:08.752 "compare": false, 00:18:08.752 "compare_and_write": false, 00:18:08.752 "abort": false, 00:18:08.752 "seek_hole": false, 00:18:08.752 "seek_data": false, 00:18:08.752 "copy": false, 00:18:08.752 "nvme_iov_md": false 00:18:08.752 }, 00:18:08.752 "driver_specific": { 00:18:08.752 "ftl": { 00:18:08.752 "base_bdev": "7540de88-a974-407e-987f-03c9940fc0d8", 00:18:08.752 "cache": "nvc0n1p0" 00:18:08.752 } 00:18:08.752 } 00:18:08.752 } 00:18:08.752 ] 00:18:08.752 13:52:22 ftl.ftl_trim -- common/autotest_common.sh@907 -- # return 0 00:18:08.752 13:52:22 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:18:08.752 13:52:22 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:18:08.752 13:52:22 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:18:08.752 13:52:22 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:18:09.010 13:52:22 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:18:09.010 { 00:18:09.010 "name": "ftl0", 00:18:09.010 "aliases": [ 00:18:09.010 "6f658c17-becd-47e5-8bb1-b864f80d9f09" 00:18:09.010 ], 00:18:09.010 "product_name": "FTL disk", 00:18:09.010 "block_size": 4096, 00:18:09.010 "num_blocks": 23592960, 00:18:09.010 "uuid": "6f658c17-becd-47e5-8bb1-b864f80d9f09", 00:18:09.010 "assigned_rate_limits": { 00:18:09.010 "rw_ios_per_sec": 0, 00:18:09.010 "rw_mbytes_per_sec": 0, 00:18:09.010 "r_mbytes_per_sec": 0, 00:18:09.010 "w_mbytes_per_sec": 0 00:18:09.010 }, 00:18:09.010 "claimed": false, 00:18:09.010 "zoned": false, 00:18:09.010 "supported_io_types": { 00:18:09.010 "read": true, 00:18:09.010 "write": true, 00:18:09.010 "unmap": true, 00:18:09.010 "flush": true, 00:18:09.010 "reset": false, 00:18:09.010 "nvme_admin": false, 00:18:09.010 "nvme_io": false, 00:18:09.010 "nvme_io_md": false, 00:18:09.010 "write_zeroes": true, 00:18:09.010 "zcopy": false, 00:18:09.010 "get_zone_info": false, 00:18:09.010 "zone_management": false, 00:18:09.010 "zone_append": false, 00:18:09.010 "compare": false, 00:18:09.010 "compare_and_write": false, 00:18:09.010 "abort": false, 00:18:09.010 "seek_hole": false, 00:18:09.010 "seek_data": false, 00:18:09.010 "copy": false, 00:18:09.010 "nvme_iov_md": false 00:18:09.010 }, 00:18:09.010 "driver_specific": { 00:18:09.010 "ftl": { 00:18:09.010 "base_bdev": "7540de88-a974-407e-987f-03c9940fc0d8", 
00:18:09.010 "cache": "nvc0n1p0" 00:18:09.010 } 00:18:09.010 } 00:18:09.010 } 00:18:09.010 ]' 00:18:09.010 13:52:22 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:18:09.010 13:52:22 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:18:09.010 13:52:22 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:18:09.269 [2024-10-15 13:52:22.937447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.269 [2024-10-15 13:52:22.937509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:09.269 [2024-10-15 13:52:22.937523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:09.269 [2024-10-15 13:52:22.937534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.269 [2024-10-15 13:52:22.937572] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:18:09.269 [2024-10-15 13:52:22.940374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.269 [2024-10-15 13:52:22.940408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:09.269 [2024-10-15 13:52:22.940426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.781 ms 00:18:09.269 [2024-10-15 13:52:22.940436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.269 [2024-10-15 13:52:22.941017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.269 [2024-10-15 13:52:22.941034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:09.269 [2024-10-15 13:52:22.941046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.525 ms 00:18:09.269 [2024-10-15 13:52:22.941054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.269 [2024-10-15 13:52:22.944724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.269 [2024-10-15 13:52:22.944747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:09.269 [2024-10-15 13:52:22.944758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.639 ms 00:18:09.269 [2024-10-15 13:52:22.944770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.269 [2024-10-15 13:52:22.951764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.269 [2024-10-15 13:52:22.951791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:09.269 [2024-10-15 13:52:22.951803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.946 ms 00:18:09.269 [2024-10-15 13:52:22.951811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.269 [2024-10-15 13:52:22.975775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.269 [2024-10-15 13:52:22.975806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:09.269 [2024-10-15 13:52:22.975823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.871 ms 00:18:09.269 [2024-10-15 13:52:22.975832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.269 [2024-10-15 13:52:22.991194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.269 [2024-10-15 13:52:22.991240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:09.269 [2024-10-15 13:52:22.991253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 15.304 ms 00:18:09.269 [2024-10-15 13:52:22.991262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.269 [2024-10-15 13:52:22.991474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.269 [2024-10-15 13:52:22.991493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:09.269 [2024-10-15 13:52:22.991504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.141 ms 00:18:09.269 [2024-10-15 13:52:22.991511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.269 [2024-10-15 13:52:23.014412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.269 [2024-10-15 13:52:23.014447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:18:09.269 [2024-10-15 13:52:23.014460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.869 ms 00:18:09.269 [2024-10-15 13:52:23.014468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.269 [2024-10-15 13:52:23.036645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.269 [2024-10-15 13:52:23.036678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:18:09.269 [2024-10-15 13:52:23.036693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.115 ms 00:18:09.269 [2024-10-15 13:52:23.036700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.528 [2024-10-15 13:52:23.058787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.529 [2024-10-15 13:52:23.058817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:09.529 [2024-10-15 13:52:23.058829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.026 ms 00:18:09.529 [2024-10-15 13:52:23.058838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.529 [2024-10-15 13:52:23.080639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.529 [2024-10-15 13:52:23.080668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:09.529 [2024-10-15 13:52:23.080680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.691 ms 00:18:09.529 [2024-10-15 13:52:23.080687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.529 [2024-10-15 13:52:23.080744] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:09.529 [2024-10-15 13:52:23.080760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:09.529 [2024-10-15 13:52:23.080772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:09.529 [2024-10-15 13:52:23.080780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:09.529 [2024-10-15 13:52:23.080790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:09.529 [2024-10-15 13:52:23.080797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:09.529 [2024-10-15 13:52:23.080808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:09.529 [2024-10-15 13:52:23.080816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:09.529 [2024-10-15 13:52:23.080825] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:09.529 [2024-10-15 13:52:23.080833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:09.529 [2024-10-15 13:52:23.080842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:09.529 [2024-10-15 13:52:23.080850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:09.529 [2024-10-15 13:52:23.080859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:09.529 [2024-10-15 13:52:23.080866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:09.529 [2024-10-15 13:52:23.080875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:09.529 [2024-10-15 13:52:23.080882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:09.529 [2024-10-15 13:52:23.080890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:09.529 [2024-10-15 13:52:23.080898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:09.529 [2024-10-15 13:52:23.080907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:09.529 [2024-10-15 13:52:23.080915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:09.529 [2024-10-15 13:52:23.080924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:09.529 [2024-10-15 13:52:23.080931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:09.529 [2024-10-15 13:52:23.080959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:09.529 [2024-10-15 13:52:23.080966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:09.529 [2024-10-15 13:52:23.080975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:09.529 [2024-10-15 13:52:23.080983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:09.529 [2024-10-15 13:52:23.080992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:09.529 [2024-10-15 13:52:23.080999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:09.529 [2024-10-15 13:52:23.081013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:09.529 [2024-10-15 13:52:23.081023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:09.529 [2024-10-15 13:52:23.081033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:09.529 [2024-10-15 13:52:23.081041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:09.529 [2024-10-15 13:52:23.081051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:09.529 
[2024-10-15 13:52:23.081058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:09.529 [2024-10-15 13:52:23.081067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:09.529 [2024-10-15 13:52:23.081075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:09.529 [2024-10-15 13:52:23.081084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:09.529 [2024-10-15 13:52:23.081092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:09.529 [2024-10-15 13:52:23.081103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:09.529 [2024-10-15 13:52:23.081110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:09.529 [2024-10-15 13:52:23.081119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:09.529 [2024-10-15 13:52:23.081126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:09.529 [2024-10-15 13:52:23.081135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:09.529 [2024-10-15 13:52:23.081142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:09.529 [2024-10-15 13:52:23.081150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:09.529 [2024-10-15 13:52:23.081158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:09.529 [2024-10-15 13:52:23.081168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:09.529 [2024-10-15 13:52:23.081175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:09.529 [2024-10-15 13:52:23.081183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:09.529 [2024-10-15 13:52:23.081191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:09.529 [2024-10-15 13:52:23.081199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:09.529 [2024-10-15 13:52:23.081206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:09.529 [2024-10-15 13:52:23.081215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:09.529 [2024-10-15 13:52:23.081234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:09.529 [2024-10-15 13:52:23.081245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:09.529 [2024-10-15 13:52:23.081252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:09.529 [2024-10-15 13:52:23.081261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:09.529 [2024-10-15 13:52:23.081268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 
state: free 00:18:09.529 [2024-10-15 13:52:23.081276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:09.529 [2024-10-15 13:52:23.081283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:09.529 [2024-10-15 13:52:23.081294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:09.529 [2024-10-15 13:52:23.081303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:09.529 [2024-10-15 13:52:23.081312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:09.529 [2024-10-15 13:52:23.081319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:09.529 [2024-10-15 13:52:23.081328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:09.529 [2024-10-15 13:52:23.081336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:09.529 [2024-10-15 13:52:23.081345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:09.529 [2024-10-15 13:52:23.081352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:09.529 [2024-10-15 13:52:23.081361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:09.529 [2024-10-15 13:52:23.081370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:09.529 [2024-10-15 13:52:23.081381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:09.529 [2024-10-15 13:52:23.081388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:09.529 [2024-10-15 13:52:23.081398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:09.529 [2024-10-15 13:52:23.081405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:09.529 [2024-10-15 13:52:23.081414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:09.529 [2024-10-15 13:52:23.081421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:09.529 [2024-10-15 13:52:23.081430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:09.529 [2024-10-15 13:52:23.081437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:09.529 [2024-10-15 13:52:23.081446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:09.529 [2024-10-15 13:52:23.081454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:09.529 [2024-10-15 13:52:23.081463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:09.529 [2024-10-15 13:52:23.081470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:09.529 [2024-10-15 13:52:23.081480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 
0 / 261120 wr_cnt: 0 state: free 00:18:09.529 [2024-10-15 13:52:23.081489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:09.529 [2024-10-15 13:52:23.081498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:09.530 [2024-10-15 13:52:23.081505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:09.530 [2024-10-15 13:52:23.081515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:09.530 [2024-10-15 13:52:23.081522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:09.530 [2024-10-15 13:52:23.081532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:09.530 [2024-10-15 13:52:23.081539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:09.530 [2024-10-15 13:52:23.081547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:09.530 [2024-10-15 13:52:23.081554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:09.530 [2024-10-15 13:52:23.081565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:09.530 [2024-10-15 13:52:23.081574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:09.530 [2024-10-15 13:52:23.081583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:09.530 [2024-10-15 13:52:23.081590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:09.530 [2024-10-15 13:52:23.081599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:09.530 [2024-10-15 13:52:23.081607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:09.530 [2024-10-15 13:52:23.081617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:09.530 [2024-10-15 13:52:23.081626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:09.530 [2024-10-15 13:52:23.081636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:09.530 [2024-10-15 13:52:23.081652] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:09.530 [2024-10-15 13:52:23.081663] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 6f658c17-becd-47e5-8bb1-b864f80d9f09 00:18:09.530 [2024-10-15 13:52:23.081671] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:09.530 [2024-10-15 13:52:23.081679] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:09.530 [2024-10-15 13:52:23.081686] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:09.530 [2024-10-15 13:52:23.081695] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:09.530 [2024-10-15 13:52:23.081701] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:09.530 [2024-10-15 13:52:23.081710] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 
00:18:09.530 [2024-10-15 13:52:23.081717] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:09.530 [2024-10-15 13:52:23.081725] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:09.530 [2024-10-15 13:52:23.081732] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:09.530 [2024-10-15 13:52:23.081740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.530 [2024-10-15 13:52:23.081750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:09.530 [2024-10-15 13:52:23.081759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.998 ms 00:18:09.530 [2024-10-15 13:52:23.081766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.530 [2024-10-15 13:52:23.094737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.530 [2024-10-15 13:52:23.094768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:09.530 [2024-10-15 13:52:23.094783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.940 ms 00:18:09.530 [2024-10-15 13:52:23.094792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.530 [2024-10-15 13:52:23.095183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.530 [2024-10-15 13:52:23.095202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:09.530 [2024-10-15 13:52:23.095212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.335 ms 00:18:09.530 [2024-10-15 13:52:23.095254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.530 [2024-10-15 13:52:23.141252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:09.530 [2024-10-15 13:52:23.141429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:09.530 [2024-10-15 13:52:23.141450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:09.530 [2024-10-15 13:52:23.141458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.530 [2024-10-15 13:52:23.141580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:09.530 [2024-10-15 13:52:23.141591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:09.530 [2024-10-15 13:52:23.141601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:09.530 [2024-10-15 13:52:23.141608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.530 [2024-10-15 13:52:23.141676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:09.530 [2024-10-15 13:52:23.141686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:09.530 [2024-10-15 13:52:23.141699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:09.530 [2024-10-15 13:52:23.141707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.530 [2024-10-15 13:52:23.141742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:09.530 [2024-10-15 13:52:23.141750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:09.530 [2024-10-15 13:52:23.141760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:09.530 [2024-10-15 13:52:23.141767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.530 [2024-10-15 13:52:23.227279] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:09.530 [2024-10-15 13:52:23.227320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:09.530 [2024-10-15 13:52:23.227333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:09.530 [2024-10-15 13:52:23.227342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.530 [2024-10-15 13:52:23.292098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:09.530 [2024-10-15 13:52:23.292147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:09.530 [2024-10-15 13:52:23.292160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:09.530 [2024-10-15 13:52:23.292169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.530 [2024-10-15 13:52:23.292319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:09.530 [2024-10-15 13:52:23.292331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:09.530 [2024-10-15 13:52:23.292358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:09.530 [2024-10-15 13:52:23.292366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.530 [2024-10-15 13:52:23.292427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:09.530 [2024-10-15 13:52:23.292440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:09.530 [2024-10-15 13:52:23.292450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:09.530 [2024-10-15 13:52:23.292457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.530 [2024-10-15 13:52:23.292577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:09.530 [2024-10-15 13:52:23.292587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:09.530 [2024-10-15 13:52:23.292598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:09.530 [2024-10-15 13:52:23.292606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.530 [2024-10-15 13:52:23.292660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:09.530 [2024-10-15 13:52:23.292675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:09.530 [2024-10-15 13:52:23.292687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:09.530 [2024-10-15 13:52:23.292695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.530 [2024-10-15 13:52:23.292751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:09.530 [2024-10-15 13:52:23.292759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:09.530 [2024-10-15 13:52:23.292771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:09.530 [2024-10-15 13:52:23.292781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.530 [2024-10-15 13:52:23.292838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:09.530 [2024-10-15 13:52:23.292847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:09.530 [2024-10-15 13:52:23.292859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:09.530 [2024-10-15 13:52:23.292866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:18:09.530 [2024-10-15 13:52:23.293063] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 355.603 ms, result 0 00:18:09.530 true 00:18:09.789 13:52:23 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 73597 00:18:09.789 13:52:23 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 73597 ']' 00:18:09.789 13:52:23 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 73597 00:18:09.789 13:52:23 ftl.ftl_trim -- common/autotest_common.sh@955 -- # uname 00:18:09.789 13:52:23 ftl.ftl_trim -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:09.789 13:52:23 ftl.ftl_trim -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73597 00:18:09.789 killing process with pid 73597 00:18:09.789 13:52:23 ftl.ftl_trim -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:09.789 13:52:23 ftl.ftl_trim -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:09.789 13:52:23 ftl.ftl_trim -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73597' 00:18:09.789 13:52:23 ftl.ftl_trim -- common/autotest_common.sh@969 -- # kill 73597 00:18:09.789 13:52:23 ftl.ftl_trim -- common/autotest_common.sh@974 -- # wait 73597 00:18:16.346 13:52:29 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:18:17.279 65536+0 records in 00:18:17.279 65536+0 records out 00:18:17.279 268435456 bytes (268 MB, 256 MiB) copied, 1.0676 s, 251 MB/s 00:18:17.279 13:52:30 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:17.279 [2024-10-15 13:52:30.812112] Starting SPDK v25.01-pre git sha1 5a8c76d99 / DPDK 24.03.0 initialization... 
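The dd summary above reports 268435456 bytes copied in 1.0676 s, i.e. about 251 MB/s in the decimal megabytes dd uses. A quick check of that arithmetic:

    # 65536 records of 4 KiB written in 1.0676 s (figures taken from the log above)
    awk 'BEGIN { printf "%.1f MB/s\n", 65536 * 4096 / 1.0676 / 1e6 }'
    # -> 251.4 MB/s, consistent with the reported 251 MB/s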
00:18:17.280 [2024-10-15 13:52:30.812204] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73784 ] 00:18:17.280 [2024-10-15 13:52:30.956587] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:17.537 [2024-10-15 13:52:31.075926] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:17.797 [2024-10-15 13:52:31.350291] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:17.797 [2024-10-15 13:52:31.350361] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:17.797 [2024-10-15 13:52:31.510272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.797 [2024-10-15 13:52:31.510330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:17.797 [2024-10-15 13:52:31.510345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:18:17.797 [2024-10-15 13:52:31.510354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.797 [2024-10-15 13:52:31.513157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.797 [2024-10-15 13:52:31.513348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:17.797 [2024-10-15 13:52:31.513365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.782 ms 00:18:17.797 [2024-10-15 13:52:31.513374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.797 [2024-10-15 13:52:31.513461] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:17.797 [2024-10-15 13:52:31.514190] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:17.797 [2024-10-15 13:52:31.514211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.797 [2024-10-15 13:52:31.514234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:17.797 [2024-10-15 13:52:31.514243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.759 ms 00:18:17.797 [2024-10-15 13:52:31.514251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.797 [2024-10-15 13:52:31.516002] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:17.797 [2024-10-15 13:52:31.529336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.797 [2024-10-15 13:52:31.529370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:17.797 [2024-10-15 13:52:31.529383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.337 ms 00:18:17.797 [2024-10-15 13:52:31.529396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.797 [2024-10-15 13:52:31.529482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.797 [2024-10-15 13:52:31.529495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:18:17.797 [2024-10-15 13:52:31.529504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:18:17.797 [2024-10-15 13:52:31.529513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.797 [2024-10-15 13:52:31.535950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:18:17.797 [2024-10-15 13:52:31.535980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:17.797 [2024-10-15 13:52:31.535990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.394 ms 00:18:17.797 [2024-10-15 13:52:31.535998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.797 [2024-10-15 13:52:31.536089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.797 [2024-10-15 13:52:31.536099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:17.797 [2024-10-15 13:52:31.536107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:18:17.797 [2024-10-15 13:52:31.536115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.797 [2024-10-15 13:52:31.536141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.797 [2024-10-15 13:52:31.536150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:17.797 [2024-10-15 13:52:31.536158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:18:17.797 [2024-10-15 13:52:31.536168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.797 [2024-10-15 13:52:31.536190] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:18:17.797 [2024-10-15 13:52:31.539916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.797 [2024-10-15 13:52:31.539950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:17.797 [2024-10-15 13:52:31.539961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.733 ms 00:18:17.797 [2024-10-15 13:52:31.539969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.797 [2024-10-15 13:52:31.540019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.797 [2024-10-15 13:52:31.540029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:17.797 [2024-10-15 13:52:31.540037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:18:17.797 [2024-10-15 13:52:31.540046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.797 [2024-10-15 13:52:31.540075] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:17.797 [2024-10-15 13:52:31.540095] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:18:17.797 [2024-10-15 13:52:31.540134] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:17.797 [2024-10-15 13:52:31.540150] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:18:17.797 [2024-10-15 13:52:31.540275] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:17.797 [2024-10-15 13:52:31.540288] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:17.797 [2024-10-15 13:52:31.540299] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:18:17.797 [2024-10-15 13:52:31.540309] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:17.797 [2024-10-15 13:52:31.540318] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:17.797 [2024-10-15 13:52:31.540326] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:18:17.797 [2024-10-15 13:52:31.540336] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:17.797 [2024-10-15 13:52:31.540344] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:17.797 [2024-10-15 13:52:31.540352] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:17.797 [2024-10-15 13:52:31.540359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.797 [2024-10-15 13:52:31.540366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:17.797 [2024-10-15 13:52:31.540374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.287 ms 00:18:17.797 [2024-10-15 13:52:31.540381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.797 [2024-10-15 13:52:31.540469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.797 [2024-10-15 13:52:31.540483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:17.797 [2024-10-15 13:52:31.540491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:18:17.797 [2024-10-15 13:52:31.540501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.797 [2024-10-15 13:52:31.540601] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:17.797 [2024-10-15 13:52:31.540611] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:17.797 [2024-10-15 13:52:31.540619] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:17.797 [2024-10-15 13:52:31.540627] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:17.797 [2024-10-15 13:52:31.540635] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:17.797 [2024-10-15 13:52:31.540643] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:17.797 [2024-10-15 13:52:31.540650] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:18:17.797 [2024-10-15 13:52:31.540657] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:17.797 [2024-10-15 13:52:31.540664] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:18:17.798 [2024-10-15 13:52:31.540672] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:17.798 [2024-10-15 13:52:31.540680] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:17.798 [2024-10-15 13:52:31.540686] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:18:17.798 [2024-10-15 13:52:31.540693] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:17.798 [2024-10-15 13:52:31.540708] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:17.798 [2024-10-15 13:52:31.540715] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:18:17.798 [2024-10-15 13:52:31.540723] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:17.798 [2024-10-15 13:52:31.540730] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:17.798 [2024-10-15 13:52:31.540737] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:18:17.798 [2024-10-15 13:52:31.540744] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:17.798 [2024-10-15 13:52:31.540751] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:17.798 [2024-10-15 13:52:31.540757] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:18:17.798 [2024-10-15 13:52:31.540764] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:17.798 [2024-10-15 13:52:31.540770] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:17.798 [2024-10-15 13:52:31.540777] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:18:17.798 [2024-10-15 13:52:31.540783] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:17.798 [2024-10-15 13:52:31.540790] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:17.798 [2024-10-15 13:52:31.540798] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:18:17.798 [2024-10-15 13:52:31.540805] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:17.798 [2024-10-15 13:52:31.540812] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:17.798 [2024-10-15 13:52:31.540818] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:18:17.798 [2024-10-15 13:52:31.540824] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:17.798 [2024-10-15 13:52:31.540831] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:17.798 [2024-10-15 13:52:31.540837] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:18:17.798 [2024-10-15 13:52:31.540844] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:17.798 [2024-10-15 13:52:31.540850] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:17.798 [2024-10-15 13:52:31.540856] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:18:17.798 [2024-10-15 13:52:31.540863] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:17.798 [2024-10-15 13:52:31.540869] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:17.798 [2024-10-15 13:52:31.540876] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:18:17.798 [2024-10-15 13:52:31.540883] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:17.798 [2024-10-15 13:52:31.540889] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:17.798 [2024-10-15 13:52:31.540895] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:18:17.798 [2024-10-15 13:52:31.540903] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:17.798 [2024-10-15 13:52:31.540910] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:17.798 [2024-10-15 13:52:31.540917] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:17.798 [2024-10-15 13:52:31.540926] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:17.798 [2024-10-15 13:52:31.540934] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:17.798 [2024-10-15 13:52:31.540941] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:17.798 [2024-10-15 13:52:31.540948] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:17.798 [2024-10-15 13:52:31.540955] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:17.798 
[2024-10-15 13:52:31.540962] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:17.798 [2024-10-15 13:52:31.540968] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:17.798 [2024-10-15 13:52:31.540975] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:17.798 [2024-10-15 13:52:31.540983] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:17.798 [2024-10-15 13:52:31.540994] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:17.798 [2024-10-15 13:52:31.541002] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:18:17.798 [2024-10-15 13:52:31.541009] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:18:17.798 [2024-10-15 13:52:31.541017] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:18:17.798 [2024-10-15 13:52:31.541024] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:18:17.798 [2024-10-15 13:52:31.541032] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:18:17.798 [2024-10-15 13:52:31.541039] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:18:17.798 [2024-10-15 13:52:31.541046] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:18:17.798 [2024-10-15 13:52:31.541053] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:18:17.798 [2024-10-15 13:52:31.541060] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:18:17.798 [2024-10-15 13:52:31.541067] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:18:17.798 [2024-10-15 13:52:31.541073] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:18:17.798 [2024-10-15 13:52:31.541080] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:18:17.798 [2024-10-15 13:52:31.541087] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:18:17.798 [2024-10-15 13:52:31.541095] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:18:17.798 [2024-10-15 13:52:31.541102] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:17.798 [2024-10-15 13:52:31.541110] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:17.798 [2024-10-15 13:52:31.541119] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:18:17.798 [2024-10-15 13:52:31.541126] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:17.798 [2024-10-15 13:52:31.541133] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:17.798 [2024-10-15 13:52:31.541140] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:17.798 [2024-10-15 13:52:31.541147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.798 [2024-10-15 13:52:31.541155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:17.798 [2024-10-15 13:52:31.541164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.615 ms 00:18:17.798 [2024-10-15 13:52:31.541174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.798 [2024-10-15 13:52:31.570568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.798 [2024-10-15 13:52:31.570715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:17.798 [2024-10-15 13:52:31.570775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.860 ms 00:18:17.798 [2024-10-15 13:52:31.570799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.798 [2024-10-15 13:52:31.570953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.798 [2024-10-15 13:52:31.571049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:17.798 [2024-10-15 13:52:31.571074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:18:17.798 [2024-10-15 13:52:31.571098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.057 [2024-10-15 13:52:31.613913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.057 [2024-10-15 13:52:31.614096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:18.057 [2024-10-15 13:52:31.614172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.773 ms 00:18:18.057 [2024-10-15 13:52:31.614197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.057 [2024-10-15 13:52:31.614349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.057 [2024-10-15 13:52:31.614426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:18.057 [2024-10-15 13:52:31.614451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:18.057 [2024-10-15 13:52:31.614471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.057 [2024-10-15 13:52:31.614961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.057 [2024-10-15 13:52:31.615049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:18.057 [2024-10-15 13:52:31.615151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.391 ms 00:18:18.057 [2024-10-15 13:52:31.615181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.057 [2024-10-15 13:52:31.615350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.057 [2024-10-15 13:52:31.615377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:18.057 [2024-10-15 13:52:31.615429] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.117 ms 00:18:18.057 [2024-10-15 13:52:31.615450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.057 [2024-10-15 13:52:31.630140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.057 [2024-10-15 13:52:31.630262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:18.057 [2024-10-15 13:52:31.630312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.666 ms 00:18:18.057 [2024-10-15 13:52:31.630351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.057 [2024-10-15 13:52:31.643251] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:18:18.057 [2024-10-15 13:52:31.643366] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:18.057 [2024-10-15 13:52:31.643423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.057 [2024-10-15 13:52:31.643443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:18.057 [2024-10-15 13:52:31.643463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.940 ms 00:18:18.057 [2024-10-15 13:52:31.643481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.057 [2024-10-15 13:52:31.667911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.057 [2024-10-15 13:52:31.668064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:18.057 [2024-10-15 13:52:31.668184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.307 ms 00:18:18.057 [2024-10-15 13:52:31.668215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.057 [2024-10-15 13:52:31.680018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.057 [2024-10-15 13:52:31.680113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:18.057 [2024-10-15 13:52:31.680158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.695 ms 00:18:18.057 [2024-10-15 13:52:31.680179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.057 [2024-10-15 13:52:31.691667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.057 [2024-10-15 13:52:31.691763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:18:18.057 [2024-10-15 13:52:31.691808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.402 ms 00:18:18.057 [2024-10-15 13:52:31.691829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.057 [2024-10-15 13:52:31.692516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.057 [2024-10-15 13:52:31.692534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:18.057 [2024-10-15 13:52:31.692546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.597 ms 00:18:18.057 [2024-10-15 13:52:31.692554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.057 [2024-10-15 13:52:31.751110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.057 [2024-10-15 13:52:31.751318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:18.057 [2024-10-15 13:52:31.751344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 58.530 ms 00:18:18.057 [2024-10-15 13:52:31.751353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.057 [2024-10-15 13:52:31.762164] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:18:18.057 [2024-10-15 13:52:31.778725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.057 [2024-10-15 13:52:31.778860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:18.057 [2024-10-15 13:52:31.778877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.265 ms 00:18:18.057 [2024-10-15 13:52:31.778887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.057 [2024-10-15 13:52:31.778984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.057 [2024-10-15 13:52:31.778996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:18.057 [2024-10-15 13:52:31.779007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:18:18.058 [2024-10-15 13:52:31.779015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.058 [2024-10-15 13:52:31.779069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.058 [2024-10-15 13:52:31.779079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:18.058 [2024-10-15 13:52:31.779087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:18:18.058 [2024-10-15 13:52:31.779095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.058 [2024-10-15 13:52:31.779117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.058 [2024-10-15 13:52:31.779125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:18.058 [2024-10-15 13:52:31.779137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:18.058 [2024-10-15 13:52:31.779147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.058 [2024-10-15 13:52:31.779183] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:18.058 [2024-10-15 13:52:31.779193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.058 [2024-10-15 13:52:31.779201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:18.058 [2024-10-15 13:52:31.779209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:18:18.058 [2024-10-15 13:52:31.779217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.058 [2024-10-15 13:52:31.803131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.058 [2024-10-15 13:52:31.803164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:18.058 [2024-10-15 13:52:31.803179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.877 ms 00:18:18.058 [2024-10-15 13:52:31.803187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.058 [2024-10-15 13:52:31.803293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.058 [2024-10-15 13:52:31.803306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:18.058 [2024-10-15 13:52:31.803315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:18:18.058 [2024-10-15 13:52:31.803323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:18:18.058 [2024-10-15 13:52:31.804681] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:18.058 [2024-10-15 13:52:31.807711] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 294.108 ms, result 0 00:18:18.058 [2024-10-15 13:52:31.808431] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:18.058 [2024-10-15 13:52:31.821198] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:19.431  [2024-10-15T13:52:34.151Z] Copying: 22/256 [MB] (22 MBps) [2024-10-15T13:52:35.085Z] Copying: 49/256 [MB] (27 MBps) [2024-10-15T13:52:36.017Z] Copying: 76/256 [MB] (27 MBps) [2024-10-15T13:52:36.951Z] Copying: 108/256 [MB] (31 MBps) [2024-10-15T13:52:37.884Z] Copying: 139/256 [MB] (31 MBps) [2024-10-15T13:52:39.255Z] Copying: 167/256 [MB] (27 MBps) [2024-10-15T13:52:40.189Z] Copying: 200/256 [MB] (33 MBps) [2024-10-15T13:52:41.123Z] Copying: 225/256 [MB] (25 MBps) [2024-10-15T13:52:41.123Z] Copying: 249/256 [MB] (23 MBps) [2024-10-15T13:52:41.123Z] Copying: 256/256 [MB] (average 27 MBps)[2024-10-15 13:52:41.017279] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:27.335 [2024-10-15 13:52:41.026846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:27.335 [2024-10-15 13:52:41.026886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:27.335 [2024-10-15 13:52:41.026902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:27.335 [2024-10-15 13:52:41.026910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.335 [2024-10-15 13:52:41.026944] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:18:27.335 [2024-10-15 13:52:41.029788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:27.335 [2024-10-15 13:52:41.029816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:27.335 [2024-10-15 13:52:41.029832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.831 ms 00:18:27.335 [2024-10-15 13:52:41.029841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.335 [2024-10-15 13:52:41.031459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:27.335 [2024-10-15 13:52:41.031488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:27.335 [2024-10-15 13:52:41.031497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.596 ms 00:18:27.335 [2024-10-15 13:52:41.031505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.335 [2024-10-15 13:52:41.038301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:27.335 [2024-10-15 13:52:41.038328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:27.335 [2024-10-15 13:52:41.038338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.779 ms 00:18:27.335 [2024-10-15 13:52:41.038345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.335 [2024-10-15 13:52:41.045441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:27.335 [2024-10-15 13:52:41.045467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:27.335 
[2024-10-15 13:52:41.045476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.052 ms 00:18:27.335 [2024-10-15 13:52:41.045483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.335 [2024-10-15 13:52:41.068870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:27.335 [2024-10-15 13:52:41.068994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:27.335 [2024-10-15 13:52:41.069010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.343 ms 00:18:27.335 [2024-10-15 13:52:41.069019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.335 [2024-10-15 13:52:41.083577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:27.335 [2024-10-15 13:52:41.083726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:27.335 [2024-10-15 13:52:41.083742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.527 ms 00:18:27.335 [2024-10-15 13:52:41.083750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.335 [2024-10-15 13:52:41.083885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:27.335 [2024-10-15 13:52:41.083899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:27.335 [2024-10-15 13:52:41.083908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:18:27.335 [2024-10-15 13:52:41.083916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.335 [2024-10-15 13:52:41.106146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:27.335 [2024-10-15 13:52:41.106176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:18:27.335 [2024-10-15 13:52:41.106186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.214 ms 00:18:27.335 [2024-10-15 13:52:41.106193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.594 [2024-10-15 13:52:41.128781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:27.594 [2024-10-15 13:52:41.128809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:18:27.594 [2024-10-15 13:52:41.128819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.541 ms 00:18:27.594 [2024-10-15 13:52:41.128826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.594 [2024-10-15 13:52:41.151208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:27.594 [2024-10-15 13:52:41.151243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:27.594 [2024-10-15 13:52:41.151253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.350 ms 00:18:27.594 [2024-10-15 13:52:41.151260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.594 [2024-10-15 13:52:41.173736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:27.594 [2024-10-15 13:52:41.173851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:27.594 [2024-10-15 13:52:41.173866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.418 ms 00:18:27.594 [2024-10-15 13:52:41.173873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.594 [2024-10-15 13:52:41.173903] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:27.594 [2024-10-15 13:52:41.173918] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:27.594 [2024-10-15 13:52:41.173929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:27.594 [2024-10-15 13:52:41.173937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:27.594 [2024-10-15 13:52:41.173945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:27.594 [2024-10-15 13:52:41.173954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:27.594 [2024-10-15 13:52:41.173962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:27.594 [2024-10-15 13:52:41.173970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:27.594 [2024-10-15 13:52:41.173977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:27.594 [2024-10-15 13:52:41.173985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:27.594 [2024-10-15 13:52:41.173992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:27.594 [2024-10-15 13:52:41.174000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:27.594 [2024-10-15 13:52:41.174008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:27.594 [2024-10-15 13:52:41.174015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:27.594 [2024-10-15 13:52:41.174023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:27.594 [2024-10-15 13:52:41.174030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:27.594 [2024-10-15 13:52:41.174037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:27.594 [2024-10-15 13:52:41.174044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:27.594 [2024-10-15 13:52:41.174052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:27.594 [2024-10-15 13:52:41.174059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:27.594 [2024-10-15 13:52:41.174066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:27.594 [2024-10-15 13:52:41.174073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:27.594 [2024-10-15 13:52:41.174081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:27.594 [2024-10-15 13:52:41.174088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:27.594 [2024-10-15 13:52:41.174095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:27.594 [2024-10-15 13:52:41.174103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:27.594 [2024-10-15 
13:52:41.174110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:27.594 [2024-10-15 13:52:41.174117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:27.594 [2024-10-15 13:52:41.174126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:27.594 [2024-10-15 13:52:41.174134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:27.594 [2024-10-15 13:52:41.174142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:27.594 [2024-10-15 13:52:41.174150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:27.594 [2024-10-15 13:52:41.174157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:27.594 [2024-10-15 13:52:41.174165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:27.594 [2024-10-15 13:52:41.174173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:27.594 [2024-10-15 13:52:41.174181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:27.594 [2024-10-15 13:52:41.174189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:27.594 [2024-10-15 13:52:41.174197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:27.594 [2024-10-15 13:52:41.174204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:27.594 [2024-10-15 13:52:41.174212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:27.594 [2024-10-15 13:52:41.174233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:27.594 [2024-10-15 13:52:41.174241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:27.594 [2024-10-15 13:52:41.174249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:27.594 [2024-10-15 13:52:41.174257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:27.594 [2024-10-15 13:52:41.174264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:27.594 [2024-10-15 13:52:41.174272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:27.594 [2024-10-15 13:52:41.174280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:27.594 [2024-10-15 13:52:41.174287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:27.594 [2024-10-15 13:52:41.174295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:27.594 [2024-10-15 13:52:41.174302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:27.594 [2024-10-15 13:52:41.174309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 
00:18:27.594 [2024-10-15 13:52:41.174317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:27.594 [2024-10-15 13:52:41.174325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:27.594 [2024-10-15 13:52:41.174332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:27.595 [2024-10-15 13:52:41.174339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:27.595 [2024-10-15 13:52:41.174347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:27.595 [2024-10-15 13:52:41.174354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:27.595 [2024-10-15 13:52:41.174362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:27.595 [2024-10-15 13:52:41.174370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:27.595 [2024-10-15 13:52:41.174377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:27.595 [2024-10-15 13:52:41.174385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:27.595 [2024-10-15 13:52:41.174395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:27.595 [2024-10-15 13:52:41.174403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:27.595 [2024-10-15 13:52:41.174410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:27.595 [2024-10-15 13:52:41.174418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:27.595 [2024-10-15 13:52:41.174426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:27.595 [2024-10-15 13:52:41.174433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:27.595 [2024-10-15 13:52:41.174441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:27.595 [2024-10-15 13:52:41.174448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:27.595 [2024-10-15 13:52:41.174455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:27.595 [2024-10-15 13:52:41.174462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:27.595 [2024-10-15 13:52:41.174469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:27.595 [2024-10-15 13:52:41.174477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:27.595 [2024-10-15 13:52:41.174484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:27.595 [2024-10-15 13:52:41.174491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:27.595 [2024-10-15 13:52:41.174499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 
wr_cnt: 0 state: free 00:18:27.595 [2024-10-15 13:52:41.174506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:27.595 [2024-10-15 13:52:41.174513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:27.595 [2024-10-15 13:52:41.174520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:27.595 [2024-10-15 13:52:41.174528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:27.595 [2024-10-15 13:52:41.174535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:27.595 [2024-10-15 13:52:41.174543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:27.595 [2024-10-15 13:52:41.174550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:27.595 [2024-10-15 13:52:41.174557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:27.595 [2024-10-15 13:52:41.174565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:27.595 [2024-10-15 13:52:41.174572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:27.595 [2024-10-15 13:52:41.174579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:27.595 [2024-10-15 13:52:41.174587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:27.595 [2024-10-15 13:52:41.174594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:27.595 [2024-10-15 13:52:41.174601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:27.595 [2024-10-15 13:52:41.174608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:27.595 [2024-10-15 13:52:41.174616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:27.595 [2024-10-15 13:52:41.174623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:27.595 [2024-10-15 13:52:41.174631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:27.595 [2024-10-15 13:52:41.174639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:27.595 [2024-10-15 13:52:41.174646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:27.595 [2024-10-15 13:52:41.174653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:27.595 [2024-10-15 13:52:41.174667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:27.595 [2024-10-15 13:52:41.174675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:27.595 [2024-10-15 13:52:41.174682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:27.595 [2024-10-15 13:52:41.174690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:27.595 [2024-10-15 13:52:41.174706] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:27.595 [2024-10-15 13:52:41.174717] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 6f658c17-becd-47e5-8bb1-b864f80d9f09 00:18:27.595 [2024-10-15 13:52:41.174724] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:27.595 [2024-10-15 13:52:41.174732] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:27.595 [2024-10-15 13:52:41.174739] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:27.595 [2024-10-15 13:52:41.174746] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:27.595 [2024-10-15 13:52:41.174754] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:27.595 [2024-10-15 13:52:41.174761] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:27.595 [2024-10-15 13:52:41.174769] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:27.595 [2024-10-15 13:52:41.174775] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:27.595 [2024-10-15 13:52:41.174782] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:27.595 [2024-10-15 13:52:41.174790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:27.595 [2024-10-15 13:52:41.174797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:27.595 [2024-10-15 13:52:41.174805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.887 ms 00:18:27.595 [2024-10-15 13:52:41.174813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.595 [2024-10-15 13:52:41.187628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:27.595 [2024-10-15 13:52:41.187657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:27.595 [2024-10-15 13:52:41.187667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.795 ms 00:18:27.595 [2024-10-15 13:52:41.187675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.595 [2024-10-15 13:52:41.188055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:27.595 [2024-10-15 13:52:41.188069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:27.595 [2024-10-15 13:52:41.188078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.349 ms 00:18:27.595 [2024-10-15 13:52:41.188091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.595 [2024-10-15 13:52:41.224384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:27.595 [2024-10-15 13:52:41.224501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:27.595 [2024-10-15 13:52:41.224517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:27.595 [2024-10-15 13:52:41.224525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.595 [2024-10-15 13:52:41.224603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:27.595 [2024-10-15 13:52:41.224613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:27.595 [2024-10-15 13:52:41.224621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:27.595 [2024-10-15 13:52:41.224631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:18:27.595 [2024-10-15 13:52:41.224673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:27.595 [2024-10-15 13:52:41.224682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:27.595 [2024-10-15 13:52:41.224690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:27.595 [2024-10-15 13:52:41.224698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.595 [2024-10-15 13:52:41.224716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:27.595 [2024-10-15 13:52:41.224724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:27.595 [2024-10-15 13:52:41.224731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:27.595 [2024-10-15 13:52:41.224739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.595 [2024-10-15 13:52:41.305917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:27.595 [2024-10-15 13:52:41.305969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:27.595 [2024-10-15 13:52:41.305980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:27.595 [2024-10-15 13:52:41.305988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.595 [2024-10-15 13:52:41.371697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:27.596 [2024-10-15 13:52:41.371886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:27.596 [2024-10-15 13:52:41.371902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:27.596 [2024-10-15 13:52:41.371917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.596 [2024-10-15 13:52:41.371987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:27.596 [2024-10-15 13:52:41.371997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:27.596 [2024-10-15 13:52:41.372006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:27.596 [2024-10-15 13:52:41.372014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.596 [2024-10-15 13:52:41.372044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:27.596 [2024-10-15 13:52:41.372053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:27.596 [2024-10-15 13:52:41.372061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:27.596 [2024-10-15 13:52:41.372069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.596 [2024-10-15 13:52:41.372170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:27.596 [2024-10-15 13:52:41.372181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:27.596 [2024-10-15 13:52:41.372190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:27.596 [2024-10-15 13:52:41.372197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.596 [2024-10-15 13:52:41.372250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:27.596 [2024-10-15 13:52:41.372260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:27.596 [2024-10-15 13:52:41.372268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:27.596 
[2024-10-15 13:52:41.372276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.596 [2024-10-15 13:52:41.372321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:27.596 [2024-10-15 13:52:41.372331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:27.596 [2024-10-15 13:52:41.372339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:27.596 [2024-10-15 13:52:41.372346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.596 [2024-10-15 13:52:41.372393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:27.596 [2024-10-15 13:52:41.372403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:27.596 [2024-10-15 13:52:41.372412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:27.596 [2024-10-15 13:52:41.372420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.596 [2024-10-15 13:52:41.372566] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 345.696 ms, result 0 00:18:28.968 00:18:28.968 00:18:28.968 13:52:42 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=73915 00:18:28.968 13:52:42 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 73915 00:18:28.968 13:52:42 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:18:28.968 13:52:42 ftl.ftl_trim -- common/autotest_common.sh@831 -- # '[' -z 73915 ']' 00:18:28.968 13:52:42 ftl.ftl_trim -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:28.968 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:28.968 13:52:42 ftl.ftl_trim -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:28.968 13:52:42 ftl.ftl_trim -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:28.968 13:52:42 ftl.ftl_trim -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:28.968 13:52:42 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:18:28.968 [2024-10-15 13:52:42.460460] Starting SPDK v25.01-pre git sha1 5a8c76d99 / DPDK 24.03.0 initialization... 
00:18:28.968 [2024-10-15 13:52:42.460759] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73915 ] 00:18:28.968 [2024-10-15 13:52:42.610178] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:28.968 [2024-10-15 13:52:42.725415] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:29.903 13:52:43 ftl.ftl_trim -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:29.903 13:52:43 ftl.ftl_trim -- common/autotest_common.sh@864 -- # return 0 00:18:29.903 13:52:43 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:18:29.903 [2024-10-15 13:52:43.563741] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:29.903 [2024-10-15 13:52:43.563799] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:30.161 [2024-10-15 13:52:43.738678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:30.161 [2024-10-15 13:52:43.738726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:30.161 [2024-10-15 13:52:43.738742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:30.161 [2024-10-15 13:52:43.738751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.161 [2024-10-15 13:52:43.741535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:30.161 [2024-10-15 13:52:43.741569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:30.161 [2024-10-15 13:52:43.741581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.764 ms 00:18:30.161 [2024-10-15 13:52:43.741588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.161 [2024-10-15 13:52:43.741680] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:30.161 [2024-10-15 13:52:43.742364] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:30.161 [2024-10-15 13:52:43.742386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:30.161 [2024-10-15 13:52:43.742394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:30.161 [2024-10-15 13:52:43.742404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.717 ms 00:18:30.161 [2024-10-15 13:52:43.742411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.161 [2024-10-15 13:52:43.744003] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:30.161 [2024-10-15 13:52:43.756843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:30.161 [2024-10-15 13:52:43.756884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:30.161 [2024-10-15 13:52:43.756897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.847 ms 00:18:30.161 [2024-10-15 13:52:43.756906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.161 [2024-10-15 13:52:43.756988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:30.161 [2024-10-15 13:52:43.757000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:18:30.161 [2024-10-15 13:52:43.757009] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:18:30.161 [2024-10-15 13:52:43.757019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.161 [2024-10-15 13:52:43.763347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:30.161 [2024-10-15 13:52:43.763380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:30.161 [2024-10-15 13:52:43.763390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.282 ms 00:18:30.161 [2024-10-15 13:52:43.763400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.161 [2024-10-15 13:52:43.763494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:30.161 [2024-10-15 13:52:43.763506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:30.161 [2024-10-15 13:52:43.763514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:18:30.161 [2024-10-15 13:52:43.763523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.161 [2024-10-15 13:52:43.763549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:30.161 [2024-10-15 13:52:43.763563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:30.161 [2024-10-15 13:52:43.763571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:18:30.161 [2024-10-15 13:52:43.763580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.161 [2024-10-15 13:52:43.763605] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:18:30.161 [2024-10-15 13:52:43.767145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:30.161 [2024-10-15 13:52:43.767286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:30.161 [2024-10-15 13:52:43.767305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.545 ms 00:18:30.161 [2024-10-15 13:52:43.767313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.161 [2024-10-15 13:52:43.767367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:30.161 [2024-10-15 13:52:43.767376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:30.161 [2024-10-15 13:52:43.767386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:18:30.161 [2024-10-15 13:52:43.767393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.161 [2024-10-15 13:52:43.767415] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:30.161 [2024-10-15 13:52:43.767436] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:18:30.161 [2024-10-15 13:52:43.767479] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:30.161 [2024-10-15 13:52:43.767495] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:18:30.161 [2024-10-15 13:52:43.767604] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:30.161 [2024-10-15 13:52:43.767615] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:30.161 [2024-10-15 13:52:43.767627] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:18:30.161 [2024-10-15 13:52:43.767637] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:30.161 [2024-10-15 13:52:43.767650] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:30.161 [2024-10-15 13:52:43.767659] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:18:30.161 [2024-10-15 13:52:43.767668] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:30.161 [2024-10-15 13:52:43.767676] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:30.161 [2024-10-15 13:52:43.767686] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:30.161 [2024-10-15 13:52:43.767694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:30.161 [2024-10-15 13:52:43.767714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:30.161 [2024-10-15 13:52:43.767722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.283 ms 00:18:30.161 [2024-10-15 13:52:43.767731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.161 [2024-10-15 13:52:43.767828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:30.161 [2024-10-15 13:52:43.767839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:30.161 [2024-10-15 13:52:43.767848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:18:30.161 [2024-10-15 13:52:43.767857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.161 [2024-10-15 13:52:43.767964] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:30.161 [2024-10-15 13:52:43.767976] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:30.161 [2024-10-15 13:52:43.767984] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:30.161 [2024-10-15 13:52:43.767993] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:30.161 [2024-10-15 13:52:43.768000] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:30.161 [2024-10-15 13:52:43.768008] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:30.161 [2024-10-15 13:52:43.768016] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:18:30.161 [2024-10-15 13:52:43.768028] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:30.161 [2024-10-15 13:52:43.768036] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:18:30.161 [2024-10-15 13:52:43.768044] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:30.161 [2024-10-15 13:52:43.768051] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:30.161 [2024-10-15 13:52:43.768060] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:18:30.161 [2024-10-15 13:52:43.768066] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:30.161 [2024-10-15 13:52:43.768074] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:30.161 [2024-10-15 13:52:43.768081] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:18:30.161 [2024-10-15 13:52:43.768089] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:30.161 
[2024-10-15 13:52:43.768095] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:30.161 [2024-10-15 13:52:43.768103] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:18:30.161 [2024-10-15 13:52:43.768109] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:30.161 [2024-10-15 13:52:43.768118] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:30.161 [2024-10-15 13:52:43.768130] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:18:30.161 [2024-10-15 13:52:43.768139] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:30.161 [2024-10-15 13:52:43.768146] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:30.161 [2024-10-15 13:52:43.768156] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:18:30.161 [2024-10-15 13:52:43.768162] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:30.161 [2024-10-15 13:52:43.768170] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:30.161 [2024-10-15 13:52:43.768177] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:18:30.161 [2024-10-15 13:52:43.768185] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:30.161 [2024-10-15 13:52:43.768192] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:30.161 [2024-10-15 13:52:43.768200] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:18:30.161 [2024-10-15 13:52:43.768206] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:30.161 [2024-10-15 13:52:43.768216] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:30.161 [2024-10-15 13:52:43.768234] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:18:30.161 [2024-10-15 13:52:43.768243] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:30.161 [2024-10-15 13:52:43.768250] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:30.161 [2024-10-15 13:52:43.768258] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:18:30.161 [2024-10-15 13:52:43.768264] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:30.161 [2024-10-15 13:52:43.768273] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:30.161 [2024-10-15 13:52:43.768280] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:18:30.161 [2024-10-15 13:52:43.768290] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:30.161 [2024-10-15 13:52:43.768297] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:30.161 [2024-10-15 13:52:43.768306] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:18:30.161 [2024-10-15 13:52:43.768312] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:30.161 [2024-10-15 13:52:43.768321] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:30.161 [2024-10-15 13:52:43.768328] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:30.161 [2024-10-15 13:52:43.768337] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:30.161 [2024-10-15 13:52:43.768346] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:30.161 [2024-10-15 13:52:43.768355] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:18:30.161 [2024-10-15 13:52:43.768362] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:30.161 [2024-10-15 13:52:43.768371] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:30.161 [2024-10-15 13:52:43.768377] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:30.162 [2024-10-15 13:52:43.768385] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:30.162 [2024-10-15 13:52:43.768392] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:30.162 [2024-10-15 13:52:43.768403] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:30.162 [2024-10-15 13:52:43.768412] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:30.162 [2024-10-15 13:52:43.768425] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:18:30.162 [2024-10-15 13:52:43.768433] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:18:30.162 [2024-10-15 13:52:43.768442] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:18:30.162 [2024-10-15 13:52:43.768449] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:18:30.162 [2024-10-15 13:52:43.768458] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:18:30.162 [2024-10-15 13:52:43.768465] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:18:30.162 [2024-10-15 13:52:43.768474] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:18:30.162 [2024-10-15 13:52:43.768481] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:18:30.162 [2024-10-15 13:52:43.768489] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:18:30.162 [2024-10-15 13:52:43.768496] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:18:30.162 [2024-10-15 13:52:43.768505] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:18:30.162 [2024-10-15 13:52:43.768512] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:18:30.162 [2024-10-15 13:52:43.768520] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:18:30.162 [2024-10-15 13:52:43.768527] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:18:30.162 [2024-10-15 13:52:43.768536] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:30.162 [2024-10-15 
13:52:43.768544] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:30.162 [2024-10-15 13:52:43.768554] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:30.162 [2024-10-15 13:52:43.768561] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:30.162 [2024-10-15 13:52:43.768570] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:30.162 [2024-10-15 13:52:43.768578] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:30.162 [2024-10-15 13:52:43.768586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:30.162 [2024-10-15 13:52:43.768594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:30.162 [2024-10-15 13:52:43.768603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.696 ms 00:18:30.162 [2024-10-15 13:52:43.768611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.162 [2024-10-15 13:52:43.797341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:30.162 [2024-10-15 13:52:43.797377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:30.162 [2024-10-15 13:52:43.797391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.671 ms 00:18:30.162 [2024-10-15 13:52:43.797400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.162 [2024-10-15 13:52:43.797517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:30.162 [2024-10-15 13:52:43.797530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:30.162 [2024-10-15 13:52:43.797540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:18:30.162 [2024-10-15 13:52:43.797549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.162 [2024-10-15 13:52:43.830046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:30.162 [2024-10-15 13:52:43.830076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:30.162 [2024-10-15 13:52:43.830089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.472 ms 00:18:30.162 [2024-10-15 13:52:43.830100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.162 [2024-10-15 13:52:43.830170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:30.162 [2024-10-15 13:52:43.830180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:30.162 [2024-10-15 13:52:43.830190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:30.162 [2024-10-15 13:52:43.830197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.162 [2024-10-15 13:52:43.830629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:30.162 [2024-10-15 13:52:43.830646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:30.162 [2024-10-15 13:52:43.830657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.409 ms 00:18:30.162 [2024-10-15 13:52:43.830664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:18:30.162 [2024-10-15 13:52:43.830798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:30.162 [2024-10-15 13:52:43.830806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:30.162 [2024-10-15 13:52:43.830815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.109 ms 00:18:30.162 [2024-10-15 13:52:43.830822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.162 [2024-10-15 13:52:43.846453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:30.162 [2024-10-15 13:52:43.846481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:30.162 [2024-10-15 13:52:43.846493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.607 ms 00:18:30.162 [2024-10-15 13:52:43.846501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.162 [2024-10-15 13:52:43.859306] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:18:30.162 [2024-10-15 13:52:43.859337] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:30.162 [2024-10-15 13:52:43.859350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:30.162 [2024-10-15 13:52:43.859358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:30.162 [2024-10-15 13:52:43.859368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.753 ms 00:18:30.162 [2024-10-15 13:52:43.859375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.162 [2024-10-15 13:52:43.883980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:30.162 [2024-10-15 13:52:43.884012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:30.162 [2024-10-15 13:52:43.884024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.538 ms 00:18:30.162 [2024-10-15 13:52:43.884032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.162 [2024-10-15 13:52:43.895456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:30.162 [2024-10-15 13:52:43.895484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:30.162 [2024-10-15 13:52:43.895497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.357 ms 00:18:30.162 [2024-10-15 13:52:43.895505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.162 [2024-10-15 13:52:43.906560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:30.162 [2024-10-15 13:52:43.906587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:18:30.162 [2024-10-15 13:52:43.906599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.993 ms 00:18:30.162 [2024-10-15 13:52:43.906606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.162 [2024-10-15 13:52:43.907230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:30.162 [2024-10-15 13:52:43.907247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:30.162 [2024-10-15 13:52:43.907258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.524 ms 00:18:30.162 [2024-10-15 13:52:43.907267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.420 [2024-10-15 
13:52:43.975024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:30.420 [2024-10-15 13:52:43.975086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:30.420 [2024-10-15 13:52:43.975104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 67.729 ms 00:18:30.420 [2024-10-15 13:52:43.975113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.420 [2024-10-15 13:52:43.985730] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:18:30.420 [2024-10-15 13:52:44.001915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:30.420 [2024-10-15 13:52:44.001960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:30.420 [2024-10-15 13:52:44.001973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.709 ms 00:18:30.420 [2024-10-15 13:52:44.001984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.420 [2024-10-15 13:52:44.002078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:30.420 [2024-10-15 13:52:44.002091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:30.420 [2024-10-15 13:52:44.002100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:18:30.420 [2024-10-15 13:52:44.002109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.420 [2024-10-15 13:52:44.002164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:30.420 [2024-10-15 13:52:44.002175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:30.420 [2024-10-15 13:52:44.002184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:18:30.420 [2024-10-15 13:52:44.002193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.420 [2024-10-15 13:52:44.002239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:30.420 [2024-10-15 13:52:44.002253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:30.420 [2024-10-15 13:52:44.002261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:18:30.420 [2024-10-15 13:52:44.002273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.420 [2024-10-15 13:52:44.002310] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:30.420 [2024-10-15 13:52:44.002323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:30.420 [2024-10-15 13:52:44.002330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:30.420 [2024-10-15 13:52:44.002340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:18:30.420 [2024-10-15 13:52:44.002350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.420 [2024-10-15 13:52:44.025888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:30.420 [2024-10-15 13:52:44.025922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:30.420 [2024-10-15 13:52:44.025936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.514 ms 00:18:30.420 [2024-10-15 13:52:44.025944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.420 [2024-10-15 13:52:44.026034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:30.420 [2024-10-15 13:52:44.026044] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:18:30.420 [2024-10-15 13:52:44.026055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms
00:18:30.420 [2024-10-15 13:52:44.026062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:30.420 [2024-10-15 13:52:44.026956] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:18:30.420 [2024-10-15 13:52:44.029965] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 287.957 ms, result 0
00:18:30.420 [2024-10-15 13:52:44.030893] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:18:30.420 Some configs were skipped because the RPC state that can call them passed over.
00:18:30.420 13:52:44 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024
00:18:30.678 [2024-10-15 13:52:44.258209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:30.678 [2024-10-15 13:52:44.258378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim
00:18:30.678 [2024-10-15 13:52:44.258433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.471 ms
00:18:30.678 [2024-10-15 13:52:44.258459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:30.678 [2024-10-15 13:52:44.258510] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.777 ms, result 0
00:18:30.678 true
00:18:30.678 13:52:44 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024
00:18:30.678 [2024-10-15 13:52:44.461271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:30.678 [2024-10-15 13:52:44.461395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim
00:18:30.678 [2024-10-15 13:52:44.461449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.339 ms
00:18:30.678 [2024-10-15 13:52:44.461471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:30.678 [2024-10-15 13:52:44.461524] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.592 ms, result 0
00:18:30.678 true
00:18:30.936 13:52:44 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 73915
00:18:30.936 13:52:44 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 73915 ']'
00:18:30.936 13:52:44 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 73915
00:18:30.936 13:52:44 ftl.ftl_trim -- common/autotest_common.sh@955 -- # uname
00:18:30.936 13:52:44 ftl.ftl_trim -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:18:30.936 13:52:44 ftl.ftl_trim -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73915
00:18:30.936 killing process with pid 73915
13:52:44 ftl.ftl_trim -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:18:30.936 13:52:44 ftl.ftl_trim -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:18:30.936 13:52:44 ftl.ftl_trim -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73915'
00:18:30.936 13:52:44 ftl.ftl_trim -- common/autotest_common.sh@969 -- # kill 73915
00:18:30.936 13:52:44 ftl.ftl_trim -- common/autotest_common.sh@974 -- # wait 73915
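
The two rpc.py calls above are the functional core of this test: trim.sh punches a 1024-block hole at each end of the FTL device, and each call runs as its own 'FTL trim' management process (hence the "Process trim" trace and "result 0" after each one). A minimal sketch of the same sequence, assuming the target app listens on rpc.py's default socket and exposes the bdev as ftl0, as in this run; the variable names are illustrative, not from trim.sh:

    #!/usr/bin/env bash
    # Unmap 1024 blocks at each end of the ftl0 bdev over JSON-RPC.
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    DEV_BLOCKS=23592960   # one L2P entry per logical block; see "L2P entries: 23592960" in the layout dump
    NUM=1024

    "$RPC" bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks "$NUM"
    # 23592960 - 1024 = 23591936, exactly the --lba traced at trim.sh@79 above
    "$RPC" bdev_ftl_unmap -b ftl0 --lba "$((DEV_BLOCKS - NUM))" --num_blocks "$NUM"

killprocess then shuts the app down cleanly (a kill -0 liveness check, kill, then wait), which is what kicks off the 'FTL shutdown' management process traced next.
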
00:18:31.503 [2024-10-15 13:52:45.124580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:31.503 [2024-10-15 13:52:45.124648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:18:31.503 [2024-10-15 13:52:45.124661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms
00:18:31.503 [2024-10-15 13:52:45.124670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:31.503 [2024-10-15 13:52:45.124689] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread
00:18:31.503 [2024-10-15 13:52:45.126846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:31.503 [2024-10-15 13:52:45.126872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:18:31.503 [2024-10-15 13:52:45.126885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.143 ms
00:18:31.503 [2024-10-15 13:52:45.126892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:31.503 [2024-10-15 13:52:45.127133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:31.503 [2024-10-15 13:52:45.127142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:18:31.503 [2024-10-15 13:52:45.127150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.219 ms
00:18:31.503 [2024-10-15 13:52:45.127156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:31.503 [2024-10-15 13:52:45.130497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:31.503 [2024-10-15 13:52:45.130524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:18:31.503 [2024-10-15 13:52:45.130533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.324 ms
00:18:31.503 [2024-10-15 13:52:45.130539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:31.503 [2024-10-15 13:52:45.135764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:31.503 [2024-10-15 13:52:45.135880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims
00:18:31.503 [2024-10-15 13:52:45.135898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.193 ms
00:18:31.503 [2024-10-15 13:52:45.135904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:31.503 [2024-10-15 13:52:45.143551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:31.503 [2024-10-15 13:52:45.143576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata
00:18:31.503 [2024-10-15 13:52:45.143587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.594 ms
00:18:31.503 [2024-10-15 13:52:45.143599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:31.504 [2024-10-15 13:52:45.150112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:31.504 [2024-10-15 13:52:45.150138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata
00:18:31.504 [2024-10-15 13:52:45.150149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.481 ms
00:18:31.504 [2024-10-15 13:52:45.150157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:31.504 [2024-10-15 13:52:45.150280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
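
Each management step above and below logs a name:/duration:/status: triple via trace_step, so a slow shutdown can be profiled straight from the console log. A throwaway awk sketch — not part of the test suite — assuming the one-record-per-line layout shown here and the log saved as build.log:

    # Rank FTL management steps by their traced duration, slowest first.
    awk -F'\\*NOTICE\\*: \\[FTL\\]\\[ftl0\\] ' '
        NF < 2             { next }
        $2 ~ /^name: /     { step = substr($2, 7) }
        $2 ~ /^duration: / { ms = $2
                             sub(/^duration: /, "", ms); sub(/ ms$/, "", ms)
                             printf "%10.3f ms  %s\n", ms, step }
    ' build.log | sort -rn | head

For this run the metadata persist steps (roughly 6-8 ms each) come out on top; compare the overall 'FTL shutdown' total of 220.391 ms reported further down.
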
00:18:31.504 [2024-10-15 13:52:45.150288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata
00:18:31.504 [2024-10-15 13:52:45.150297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms
00:18:31.504 [2024-10-15 13:52:45.150303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:31.504 [2024-10-15 13:52:45.158003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:31.504 [2024-10-15 13:52:45.158101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata
00:18:31.504 [2024-10-15 13:52:45.158116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.683 ms
00:18:31.504 [2024-10-15 13:52:45.158121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:31.504 [2024-10-15 13:52:45.165370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:31.504 [2024-10-15 13:52:45.165394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata
00:18:31.504 [2024-10-15 13:52:45.165405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.220 ms
00:18:31.504 [2024-10-15 13:52:45.165410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:31.504 [2024-10-15 13:52:45.172496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:31.504 [2024-10-15 13:52:45.172584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock
00:18:31.504 [2024-10-15 13:52:45.172599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.047 ms
00:18:31.504 [2024-10-15 13:52:45.172605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:31.504 [2024-10-15 13:52:45.180349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:31.504 [2024-10-15 13:52:45.180373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state
00:18:31.504 [2024-10-15 13:52:45.180381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.692 ms
00:18:31.504 [2024-10-15 13:52:45.180387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:31.504 [2024-10-15 13:52:45.180415] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:18:31.504 [2024-10-15 13:52:45.180428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free
[2024-10-15 13:52:45.180438 - 13:52:45.181073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2 through Band 99: all 0 / 261120 wr_cnt: 0 state: free (98 identical entries)
[2024-10-15 13:52:45.181080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free
[2024-10-15 13:52:45.181093] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
[2024-10-15 13:52:45.181102] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 6f658c17-becd-47e5-8bb1-b864f80d9f09
[2024-10-15 13:52:45.181114] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
[2024-10-15 13:52:45.181124] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
[2024-10-15 13:52:45.181133] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
[2024-10-15 13:52:45.181140] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
[2024-10-15 13:52:45.181147] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
[2024-10-15 13:52:45.181154] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
[2024-10-15 13:52:45.181160] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
[2024-10-15 13:52:45.181166] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
[2024-10-15 13:52:45.181172] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
[2024-10-15 13:52:45.181179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
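
One gloss on the statistics block above: WAF is the write amplification factor, media writes divided by host writes. This run only unmaps, so user writes stays at 0; the only media writes are FTL's own metadata persists (total writes: 960), and 960/0 is reported as inf. C's floating-point division yields infinity directly; awk would abort on a literal division by zero, hence the guard in this illustrative one-liner:

    # WAF = total (media) writes / user writes; a zero denominator prints as inf
    awk 'BEGIN { total = 960; user = 0; print (user ? total / user : "inf") }'

The all-free band dump says the same thing from the other side: no user data was ever written, so every band still shows wr_cnt: 0.
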
00:18:31.505 [2024-10-15 13:52:45.181185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:31.505 [2024-10-15 13:52:45.181193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.765 ms 00:18:31.505 [2024-10-15 13:52:45.181199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.505 [2024-10-15 13:52:45.191261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.505 [2024-10-15 13:52:45.191354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:31.505 [2024-10-15 13:52:45.191371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.046 ms 00:18:31.505 [2024-10-15 13:52:45.191378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.505 [2024-10-15 13:52:45.191692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.505 [2024-10-15 13:52:45.191706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:31.505 [2024-10-15 13:52:45.191715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.277 ms 00:18:31.505 [2024-10-15 13:52:45.191722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.505 [2024-10-15 13:52:45.228231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:31.505 [2024-10-15 13:52:45.228266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:31.505 [2024-10-15 13:52:45.228277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:31.505 [2024-10-15 13:52:45.228284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.505 [2024-10-15 13:52:45.228396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:31.505 [2024-10-15 13:52:45.228405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:31.505 [2024-10-15 13:52:45.228413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:31.505 [2024-10-15 13:52:45.228420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.505 [2024-10-15 13:52:45.228463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:31.505 [2024-10-15 13:52:45.228471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:31.505 [2024-10-15 13:52:45.228481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:31.505 [2024-10-15 13:52:45.228487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.505 [2024-10-15 13:52:45.228505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:31.505 [2024-10-15 13:52:45.228511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:31.505 [2024-10-15 13:52:45.228519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:31.505 [2024-10-15 13:52:45.228524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.763 [2024-10-15 13:52:45.291868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:31.763 [2024-10-15 13:52:45.291915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:31.763 [2024-10-15 13:52:45.291929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:31.763 [2024-10-15 13:52:45.291935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.763 [2024-10-15 
13:52:45.343193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:18:31.763 [2024-10-15 13:52:45.343372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:18:31.763 [2024-10-15 13:52:45.343391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:18:31.763 [2024-10-15 13:52:45.343399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:31.763 [2024-10-15 13:52:45.344481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:18:31.763 [2024-10-15 13:52:45.344507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:18:31.763 [2024-10-15 13:52:45.344519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:18:31.763 [2024-10-15 13:52:45.344526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:31.763 [2024-10-15 13:52:45.344558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:18:31.763 [2024-10-15 13:52:45.344565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:18:31.763 [2024-10-15 13:52:45.344572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:18:31.763 [2024-10-15 13:52:45.344579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:31.763 [2024-10-15 13:52:45.344667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:18:31.763 [2024-10-15 13:52:45.344675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:18:31.763 [2024-10-15 13:52:45.344685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:18:31.763 [2024-10-15 13:52:45.344691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:31.763 [2024-10-15 13:52:45.344722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:18:31.763 [2024-10-15 13:52:45.344730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:18:31.763 [2024-10-15 13:52:45.344738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:18:31.763 [2024-10-15 13:52:45.344744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:31.763 [2024-10-15 13:52:45.344781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:18:31.763 [2024-10-15 13:52:45.344788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:18:31.763 [2024-10-15 13:52:45.344799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:18:31.763 [2024-10-15 13:52:45.344806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:31.763 [2024-10-15 13:52:45.344846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:18:31.763 [2024-10-15 13:52:45.344854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:18:31.763 [2024-10-15 13:52:45.344862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:18:31.763 [2024-10-15 13:52:45.344868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:31.763 [2024-10-15 13:52:45.344993] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 220.391 ms, result 0
00:18:32.353 13:52:46 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data
13:52:46 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
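
spdk_dd is SPDK's counterpart to dd(1). Since ftl0 exists only inside an SPDK process, trim.sh hands spdk_dd the JSON config that recreates the whole bdev stack, so the copy runs in a fresh, short-lived process; that is why a second DPDK/EAL initialization and a complete FTL startup follow below. The same invocation reflowed, with every flag exactly as traced above:

    # Read 65536 blocks from the FTL bdev into an ordinary file for later verification.
    # ftl.json re-creates the base and cache bdevs plus ftl0 inside the spdk_dd process.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
        --ib=ftl0 \
        --of=/home/vagrant/spdk_repo/spdk/test/ftl/data \
        --count=65536 \
        --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json

--ib selects an input bdev (as opposed to --if for a plain input file) and --of writes to a regular file; with FTL's customary 4 KiB block size, --count=65536 amounts to 256 MiB of data.
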
[2024-10-15 13:52:46.076021] Starting SPDK v25.01-pre git sha1 5a8c76d99 / DPDK 24.03.0 initialization...
[2024-10-15 13:52:46.076143] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73962 ]
00:18:32.619 [2024-10-15 13:52:46.226909] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:32.619 [2024-10-15 13:52:46.328402] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:18:32.877 [2024-10-15 13:52:46.584022] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:18:32.877 [2024-10-15 13:52:46.584265] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:18:33.136 [2024-10-15 13:52:46.738435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:33.136 [2024-10-15 13:52:46.738486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration
00:18:33.136 [2024-10-15 13:52:46.738499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:18:33.136 [2024-10-15 13:52:46.738507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:33.136 [2024-10-15 13:52:46.741435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:33.136 [2024-10-15 13:52:46.741480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:18:33.136 [2024-10-15 13:52:46.741492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.907 ms
00:18:33.136 [2024-10-15 13:52:46.741500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:33.136 [2024-10-15 13:52:46.741613] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:18:33.136 [2024-10-15 13:52:46.742354] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:18:33.136 [2024-10-15 13:52:46.742382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:33.136 [2024-10-15 13:52:46.742390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:18:33.136 [2024-10-15 13:52:46.742400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.779 ms
00:18:33.136 [2024-10-15 13:52:46.742408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:33.136 [2024-10-15 13:52:46.743575] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
00:18:33.136 [2024-10-15 13:52:46.756281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:33.136 [2024-10-15 13:52:46.756317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block
00:18:33.136 [2024-10-15 13:52:46.756328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.707 ms
00:18:33.136 [2024-10-15 13:52:46.756340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:33.136 [2024-10-15 13:52:46.756431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:33.136 [2024-10-15 13:52:46.756444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block
00:18:33.136 [2024-10-15 13:52:46.756453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*:
[FTL][ftl0] duration: 0.020 ms 00:18:33.136 [2024-10-15 13:52:46.756460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.136 [2024-10-15 13:52:46.761494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.136 [2024-10-15 13:52:46.761529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:33.136 [2024-10-15 13:52:46.761538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.994 ms 00:18:33.136 [2024-10-15 13:52:46.761545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.136 [2024-10-15 13:52:46.761639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.136 [2024-10-15 13:52:46.761650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:33.136 [2024-10-15 13:52:46.761658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:18:33.136 [2024-10-15 13:52:46.761666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.136 [2024-10-15 13:52:46.761690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.136 [2024-10-15 13:52:46.761699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:33.136 [2024-10-15 13:52:46.761707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:18:33.136 [2024-10-15 13:52:46.761716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.136 [2024-10-15 13:52:46.761735] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:18:33.136 [2024-10-15 13:52:46.765028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.136 [2024-10-15 13:52:46.765056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:33.136 [2024-10-15 13:52:46.765065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.297 ms 00:18:33.136 [2024-10-15 13:52:46.765073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.136 [2024-10-15 13:52:46.765108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.136 [2024-10-15 13:52:46.765117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:33.136 [2024-10-15 13:52:46.765125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:18:33.136 [2024-10-15 13:52:46.765132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.136 [2024-10-15 13:52:46.765150] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:33.136 [2024-10-15 13:52:46.765167] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:18:33.136 [2024-10-15 13:52:46.765202] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:33.136 [2024-10-15 13:52:46.765232] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:18:33.136 [2024-10-15 13:52:46.765335] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:33.136 [2024-10-15 13:52:46.765346] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:33.136 [2024-10-15 13:52:46.765357] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: 
*NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:18:33.136 [2024-10-15 13:52:46.765366] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:33.136 [2024-10-15 13:52:46.765375] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:33.136 [2024-10-15 13:52:46.765383] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:18:33.136 [2024-10-15 13:52:46.765394] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:33.136 [2024-10-15 13:52:46.765401] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:33.136 [2024-10-15 13:52:46.765407] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:33.136 [2024-10-15 13:52:46.765415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.136 [2024-10-15 13:52:46.765422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:33.136 [2024-10-15 13:52:46.765431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.266 ms 00:18:33.136 [2024-10-15 13:52:46.765438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.136 [2024-10-15 13:52:46.765525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.136 [2024-10-15 13:52:46.765534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:33.136 [2024-10-15 13:52:46.765541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:18:33.136 [2024-10-15 13:52:46.765551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.136 [2024-10-15 13:52:46.765660] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:33.136 [2024-10-15 13:52:46.765675] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:33.136 [2024-10-15 13:52:46.765683] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:33.136 [2024-10-15 13:52:46.765691] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:33.136 [2024-10-15 13:52:46.765698] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:33.136 [2024-10-15 13:52:46.765706] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:33.136 [2024-10-15 13:52:46.765712] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:18:33.136 [2024-10-15 13:52:46.765719] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:33.136 [2024-10-15 13:52:46.765726] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:18:33.136 [2024-10-15 13:52:46.765732] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:33.136 [2024-10-15 13:52:46.765739] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:33.136 [2024-10-15 13:52:46.765745] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:18:33.136 [2024-10-15 13:52:46.765753] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:33.136 [2024-10-15 13:52:46.765766] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:33.136 [2024-10-15 13:52:46.765772] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:18:33.136 [2024-10-15 13:52:46.765779] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:33.136 [2024-10-15 13:52:46.765785] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:33.136 [2024-10-15 13:52:46.765792] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:18:33.136 [2024-10-15 13:52:46.765798] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:33.136 [2024-10-15 13:52:46.765805] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:33.136 [2024-10-15 13:52:46.765811] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:18:33.136 [2024-10-15 13:52:46.765817] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:33.136 [2024-10-15 13:52:46.765823] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:33.136 [2024-10-15 13:52:46.765829] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:18:33.136 [2024-10-15 13:52:46.765835] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:33.136 [2024-10-15 13:52:46.765842] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:33.136 [2024-10-15 13:52:46.765849] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:18:33.136 [2024-10-15 13:52:46.765855] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:33.136 [2024-10-15 13:52:46.765861] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:33.136 [2024-10-15 13:52:46.765868] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:18:33.136 [2024-10-15 13:52:46.765874] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:33.136 [2024-10-15 13:52:46.765880] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:33.136 [2024-10-15 13:52:46.765886] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:18:33.136 [2024-10-15 13:52:46.765892] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:33.136 [2024-10-15 13:52:46.765898] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:33.136 [2024-10-15 13:52:46.765904] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:18:33.136 [2024-10-15 13:52:46.765910] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:33.136 [2024-10-15 13:52:46.765917] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:33.137 [2024-10-15 13:52:46.765923] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:18:33.137 [2024-10-15 13:52:46.765929] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:33.137 [2024-10-15 13:52:46.765935] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:33.137 [2024-10-15 13:52:46.765942] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:18:33.137 [2024-10-15 13:52:46.765948] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:33.137 [2024-10-15 13:52:46.765954] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:33.137 [2024-10-15 13:52:46.765964] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:33.137 [2024-10-15 13:52:46.765971] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:33.137 [2024-10-15 13:52:46.765978] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:33.137 [2024-10-15 13:52:46.765986] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:33.137 
[2024-10-15 13:52:46.765992] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:33.137 [2024-10-15 13:52:46.765998] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:33.137 [2024-10-15 13:52:46.766005] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:33.137 [2024-10-15 13:52:46.766011] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:33.137 [2024-10-15 13:52:46.766017] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:33.137 [2024-10-15 13:52:46.766025] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:33.137 [2024-10-15 13:52:46.766036] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:33.137 [2024-10-15 13:52:46.766044] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:18:33.137 [2024-10-15 13:52:46.766051] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:18:33.137 [2024-10-15 13:52:46.766058] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:18:33.137 [2024-10-15 13:52:46.766064] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:18:33.137 [2024-10-15 13:52:46.766072] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:18:33.137 [2024-10-15 13:52:46.766079] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:18:33.137 [2024-10-15 13:52:46.766086] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:18:33.137 [2024-10-15 13:52:46.766093] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:18:33.137 [2024-10-15 13:52:46.766100] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:18:33.137 [2024-10-15 13:52:46.766107] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:18:33.137 [2024-10-15 13:52:46.766114] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:18:33.137 [2024-10-15 13:52:46.766121] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:18:33.137 [2024-10-15 13:52:46.766128] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:18:33.137 [2024-10-15 13:52:46.766135] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:18:33.137 [2024-10-15 13:52:46.766141] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:33.137 [2024-10-15 13:52:46.766150] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:33.137 [2024-10-15 13:52:46.766158] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:33.137 [2024-10-15 13:52:46.766166] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:33.137 [2024-10-15 13:52:46.766173] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:33.137 [2024-10-15 13:52:46.766179] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:33.137 [2024-10-15 13:52:46.766186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.137 [2024-10-15 13:52:46.766195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:33.137 [2024-10-15 13:52:46.766203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.595 ms 00:18:33.137 [2024-10-15 13:52:46.766212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.137 [2024-10-15 13:52:46.792275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.137 [2024-10-15 13:52:46.792307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:33.137 [2024-10-15 13:52:46.792318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.000 ms 00:18:33.137 [2024-10-15 13:52:46.792325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.137 [2024-10-15 13:52:46.792441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.137 [2024-10-15 13:52:46.792451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:33.137 [2024-10-15 13:52:46.792459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:18:33.137 [2024-10-15 13:52:46.792469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.137 [2024-10-15 13:52:46.842642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.137 [2024-10-15 13:52:46.842857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:33.137 [2024-10-15 13:52:46.842878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.149 ms 00:18:33.137 [2024-10-15 13:52:46.842886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.137 [2024-10-15 13:52:46.843011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.137 [2024-10-15 13:52:46.843023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:33.137 [2024-10-15 13:52:46.843032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:33.137 [2024-10-15 13:52:46.843040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.137 [2024-10-15 13:52:46.843432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.137 [2024-10-15 13:52:46.843449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:33.137 [2024-10-15 13:52:46.843457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.370 ms 00:18:33.137 [2024-10-15 13:52:46.843465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.137 [2024-10-15 
13:52:46.843609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.137 [2024-10-15 13:52:46.843628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:33.137 [2024-10-15 13:52:46.843636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.117 ms 00:18:33.137 [2024-10-15 13:52:46.843644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.137 [2024-10-15 13:52:46.857557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.137 [2024-10-15 13:52:46.857682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:33.137 [2024-10-15 13:52:46.857697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.893 ms 00:18:33.137 [2024-10-15 13:52:46.857706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.137 [2024-10-15 13:52:46.870450] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:18:33.137 [2024-10-15 13:52:46.870489] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:33.137 [2024-10-15 13:52:46.870501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.137 [2024-10-15 13:52:46.870509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:33.137 [2024-10-15 13:52:46.870519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.689 ms 00:18:33.137 [2024-10-15 13:52:46.870526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.137 [2024-10-15 13:52:46.896343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.137 [2024-10-15 13:52:46.896494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:33.137 [2024-10-15 13:52:46.896511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.729 ms 00:18:33.137 [2024-10-15 13:52:46.896519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.137 [2024-10-15 13:52:46.908434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.137 [2024-10-15 13:52:46.908467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:33.137 [2024-10-15 13:52:46.908478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.843 ms 00:18:33.137 [2024-10-15 13:52:46.908485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.137 [2024-10-15 13:52:46.920048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.137 [2024-10-15 13:52:46.920172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:18:33.137 [2024-10-15 13:52:46.920188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.495 ms 00:18:33.137 [2024-10-15 13:52:46.920195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.137 [2024-10-15 13:52:46.920820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.137 [2024-10-15 13:52:46.920836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:33.137 [2024-10-15 13:52:46.920846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.522 ms 00:18:33.137 [2024-10-15 13:52:46.920854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.395 [2024-10-15 13:52:46.977131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:18:33.395 [2024-10-15 13:52:46.977184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:33.395 [2024-10-15 13:52:46.977197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 56.253 ms 00:18:33.395 [2024-10-15 13:52:46.977204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.395 [2024-10-15 13:52:46.987734] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:18:33.395 [2024-10-15 13:52:47.002004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.395 [2024-10-15 13:52:47.002041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:33.396 [2024-10-15 13:52:47.002055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.675 ms 00:18:33.396 [2024-10-15 13:52:47.002064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.396 [2024-10-15 13:52:47.002156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.396 [2024-10-15 13:52:47.002169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:33.396 [2024-10-15 13:52:47.002178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:18:33.396 [2024-10-15 13:52:47.002187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.396 [2024-10-15 13:52:47.002256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.396 [2024-10-15 13:52:47.002267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:33.396 [2024-10-15 13:52:47.002275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:18:33.396 [2024-10-15 13:52:47.002282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.396 [2024-10-15 13:52:47.002323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.396 [2024-10-15 13:52:47.002336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:33.396 [2024-10-15 13:52:47.002346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:33.396 [2024-10-15 13:52:47.002353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.396 [2024-10-15 13:52:47.002382] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:33.396 [2024-10-15 13:52:47.002393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.396 [2024-10-15 13:52:47.002401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:33.396 [2024-10-15 13:52:47.002408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:18:33.396 [2024-10-15 13:52:47.002415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.396 [2024-10-15 13:52:47.025882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.396 [2024-10-15 13:52:47.026044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:33.396 [2024-10-15 13:52:47.026062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.446 ms 00:18:33.396 [2024-10-15 13:52:47.026070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.396 [2024-10-15 13:52:47.026160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.396 [2024-10-15 13:52:47.026171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize 
initialization 00:18:33.396 [2024-10-15 13:52:47.026179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:18:33.396 [2024-10-15 13:52:47.026187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.396 [2024-10-15 13:52:47.026985] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:33.396 [2024-10-15 13:52:47.029921] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 288.266 ms, result 0 00:18:33.396 [2024-10-15 13:52:47.030722] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:33.396 [2024-10-15 13:52:47.043712] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:34.329  [2024-10-15T13:52:49.049Z] Copying: 36/256 [MB] (36 MBps) [2024-10-15T13:52:50.472Z] Copying: 79/256 [MB] (43 MBps) [2024-10-15T13:52:51.405Z] Copying: 124/256 [MB] (44 MBps) [2024-10-15T13:52:52.338Z] Copying: 166/256 [MB] (42 MBps) [2024-10-15T13:52:53.273Z] Copying: 202/256 [MB] (36 MBps) [2024-10-15T13:52:54.207Z] Copying: 220/256 [MB] (17 MBps) [2024-10-15T13:52:55.144Z] Copying: 240/256 [MB] (20 MBps) [2024-10-15T13:52:55.144Z] Copying: 256/256 [MB] (average 33 MBps)[2024-10-15 13:52:54.791936] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:41.356 [2024-10-15 13:52:54.801090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.356 [2024-10-15 13:52:54.801129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:41.356 [2024-10-15 13:52:54.801143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:41.356 [2024-10-15 13:52:54.801151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.356 [2024-10-15 13:52:54.801173] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:18:41.356 [2024-10-15 13:52:54.803675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.356 [2024-10-15 13:52:54.803709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:41.356 [2024-10-15 13:52:54.803720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.490 ms 00:18:41.356 [2024-10-15 13:52:54.803728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.356 [2024-10-15 13:52:54.803996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.356 [2024-10-15 13:52:54.804007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:41.356 [2024-10-15 13:52:54.804016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.247 ms 00:18:41.356 [2024-10-15 13:52:54.804023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.356 [2024-10-15 13:52:54.807713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.356 [2024-10-15 13:52:54.807732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:41.356 [2024-10-15 13:52:54.807741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.676 ms 00:18:41.356 [2024-10-15 13:52:54.807752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.356 [2024-10-15 13:52:54.814677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.356 
[2024-10-15 13:52:54.814798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:41.356 [2024-10-15 13:52:54.814814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.908 ms 00:18:41.356 [2024-10-15 13:52:54.814822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.356 [2024-10-15 13:52:54.838255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.356 [2024-10-15 13:52:54.838291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:41.356 [2024-10-15 13:52:54.838303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.376 ms 00:18:41.356 [2024-10-15 13:52:54.838309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.356 [2024-10-15 13:52:54.852153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.356 [2024-10-15 13:52:54.852185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:41.356 [2024-10-15 13:52:54.852196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.807 ms 00:18:41.356 [2024-10-15 13:52:54.852209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.356 [2024-10-15 13:52:54.852477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.356 [2024-10-15 13:52:54.852495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:41.356 [2024-10-15 13:52:54.852504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:18:41.356 [2024-10-15 13:52:54.852512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.356 [2024-10-15 13:52:54.875656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.356 [2024-10-15 13:52:54.875685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:18:41.356 [2024-10-15 13:52:54.875696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.120 ms 00:18:41.356 [2024-10-15 13:52:54.875703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.356 [2024-10-15 13:52:54.898770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.356 [2024-10-15 13:52:54.898800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:18:41.356 [2024-10-15 13:52:54.898810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.033 ms 00:18:41.356 [2024-10-15 13:52:54.898816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.356 [2024-10-15 13:52:54.921665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.356 [2024-10-15 13:52:54.921699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:41.356 [2024-10-15 13:52:54.921709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.814 ms 00:18:41.356 [2024-10-15 13:52:54.921715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.356 [2024-10-15 13:52:54.945173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.356 [2024-10-15 13:52:54.945211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:41.356 [2024-10-15 13:52:54.945234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.393 ms 00:18:41.356 [2024-10-15 13:52:54.945242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.356 [2024-10-15 13:52:54.945279] 
ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:41.356 [2024-10-15 13:52:54.945293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:41.356 [2024-10-15 13:52:54.945308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:41.356 [2024-10-15 13:52:54.945329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:41.356 [2024-10-15 13:52:54.945337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:41.356 [2024-10-15 13:52:54.945345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:41.356 [2024-10-15 13:52:54.945352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:41.356 [2024-10-15 13:52:54.945360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:41.356 [2024-10-15 13:52:54.945368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:41.356 [2024-10-15 13:52:54.945375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:41.356 [2024-10-15 13:52:54.945383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:41.356 [2024-10-15 13:52:54.945390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:41.357 [2024-10-15 13:52:54.945397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:41.357 [2024-10-15 13:52:54.945405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:41.357 [2024-10-15 13:52:54.945413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:41.357 [2024-10-15 13:52:54.945420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:41.357 [2024-10-15 13:52:54.945427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:41.357 [2024-10-15 13:52:54.945434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:41.357 [2024-10-15 13:52:54.945441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:41.357 [2024-10-15 13:52:54.945448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:41.357 [2024-10-15 13:52:54.945456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:41.357 [2024-10-15 13:52:54.945463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:41.357 [2024-10-15 13:52:54.945470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:41.357 [2024-10-15 13:52:54.945477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:41.357 [2024-10-15 13:52:54.945484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:41.357 [2024-10-15 13:52:54.945491] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:41.357 [2024-10-15 13:52:54.945498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:41.357 [2024-10-15 13:52:54.945505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:41.357 [2024-10-15 13:52:54.945512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:41.357 [2024-10-15 13:52:54.945520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:41.357 [2024-10-15 13:52:54.945527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:41.357 [2024-10-15 13:52:54.945535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:41.357 [2024-10-15 13:52:54.945542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:41.357 [2024-10-15 13:52:54.945549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:41.357 [2024-10-15 13:52:54.945556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:41.357 [2024-10-15 13:52:54.945563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:41.357 [2024-10-15 13:52:54.945570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:41.357 [2024-10-15 13:52:54.945579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:41.357 [2024-10-15 13:52:54.945586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:41.357 [2024-10-15 13:52:54.945593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:41.357 [2024-10-15 13:52:54.945600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:41.357 [2024-10-15 13:52:54.945607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:41.357 [2024-10-15 13:52:54.945614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:41.357 [2024-10-15 13:52:54.945621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:41.357 [2024-10-15 13:52:54.945628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:41.357 [2024-10-15 13:52:54.945635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:41.357 [2024-10-15 13:52:54.945642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:41.357 [2024-10-15 13:52:54.945649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:41.357 [2024-10-15 13:52:54.945656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:41.357 [2024-10-15 13:52:54.945663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:41.357 [2024-10-15 
13:52:54.945670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:41.357 [2024-10-15 13:52:54.945677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:41.357 [2024-10-15 13:52:54.945684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:41.357 [2024-10-15 13:52:54.945691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:41.357 [2024-10-15 13:52:54.945698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:41.357 [2024-10-15 13:52:54.945705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:41.357 [2024-10-15 13:52:54.945712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:41.357 [2024-10-15 13:52:54.945719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:41.357 [2024-10-15 13:52:54.945726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:41.357 [2024-10-15 13:52:54.945733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:41.357 [2024-10-15 13:52:54.945740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:41.357 [2024-10-15 13:52:54.945747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:41.357 [2024-10-15 13:52:54.945755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:41.357 [2024-10-15 13:52:54.945762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:41.357 [2024-10-15 13:52:54.945769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:41.357 [2024-10-15 13:52:54.945777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:41.357 [2024-10-15 13:52:54.945789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:41.357 [2024-10-15 13:52:54.945800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:41.357 [2024-10-15 13:52:54.945811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:41.357 [2024-10-15 13:52:54.945818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:41.357 [2024-10-15 13:52:54.945826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:41.357 [2024-10-15 13:52:54.945834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:41.357 [2024-10-15 13:52:54.945841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:41.357 [2024-10-15 13:52:54.945849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:41.357 [2024-10-15 13:52:54.945856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 
00:18:41.357 [2024-10-15 13:52:54.945864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:41.357 [2024-10-15 13:52:54.945871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:41.357 [2024-10-15 13:52:54.945878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:41.357 [2024-10-15 13:52:54.945885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:41.357 [2024-10-15 13:52:54.945892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:41.357 [2024-10-15 13:52:54.945900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:41.357 [2024-10-15 13:52:54.945907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:41.357 [2024-10-15 13:52:54.945915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:41.357 [2024-10-15 13:52:54.945922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:41.357 [2024-10-15 13:52:54.945929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:41.357 [2024-10-15 13:52:54.945936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:41.357 [2024-10-15 13:52:54.945944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:41.357 [2024-10-15 13:52:54.945951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:41.357 [2024-10-15 13:52:54.945958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:41.357 [2024-10-15 13:52:54.945965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:41.357 [2024-10-15 13:52:54.945972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:41.357 [2024-10-15 13:52:54.945979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:41.357 [2024-10-15 13:52:54.945986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:41.357 [2024-10-15 13:52:54.945997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:41.357 [2024-10-15 13:52:54.946006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:41.357 [2024-10-15 13:52:54.946014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:41.357 [2024-10-15 13:52:54.946027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:41.357 [2024-10-15 13:52:54.946034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:41.357 [2024-10-15 13:52:54.946042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:41.357 [2024-10-15 13:52:54.946049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 
wr_cnt: 0 state: free 00:18:41.357 [2024-10-15 13:52:54.946056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:41.357 [2024-10-15 13:52:54.946072] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:41.357 [2024-10-15 13:52:54.946080] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 6f658c17-becd-47e5-8bb1-b864f80d9f09 00:18:41.357 [2024-10-15 13:52:54.946088] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:41.357 [2024-10-15 13:52:54.946095] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:41.357 [2024-10-15 13:52:54.946102] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:41.357 [2024-10-15 13:52:54.946109] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:41.357 [2024-10-15 13:52:54.946118] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:41.358 [2024-10-15 13:52:54.946125] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:41.358 [2024-10-15 13:52:54.946132] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:41.358 [2024-10-15 13:52:54.946138] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:41.358 [2024-10-15 13:52:54.946144] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:41.358 [2024-10-15 13:52:54.946151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.358 [2024-10-15 13:52:54.946158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:41.358 [2024-10-15 13:52:54.946166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.874 ms 00:18:41.358 [2024-10-15 13:52:54.946175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.358 [2024-10-15 13:52:54.958677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.358 [2024-10-15 13:52:54.958712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:41.358 [2024-10-15 13:52:54.958722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.473 ms 00:18:41.358 [2024-10-15 13:52:54.958730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.358 [2024-10-15 13:52:54.959086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.358 [2024-10-15 13:52:54.959095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:41.358 [2024-10-15 13:52:54.959108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.318 ms 00:18:41.358 [2024-10-15 13:52:54.959115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.358 [2024-10-15 13:52:54.993969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:41.358 [2024-10-15 13:52:54.994002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:41.358 [2024-10-15 13:52:54.994012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:41.358 [2024-10-15 13:52:54.994019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.358 [2024-10-15 13:52:54.994108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:41.358 [2024-10-15 13:52:54.994117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:41.358 [2024-10-15 13:52:54.994130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms 00:18:41.358 [2024-10-15 13:52:54.994137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.358 [2024-10-15 13:52:54.994176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:41.358 [2024-10-15 13:52:54.994185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:41.358 [2024-10-15 13:52:54.994193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:41.358 [2024-10-15 13:52:54.994200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.358 [2024-10-15 13:52:54.994216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:41.358 [2024-10-15 13:52:54.994240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:41.358 [2024-10-15 13:52:54.994252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:41.358 [2024-10-15 13:52:54.994262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.358 [2024-10-15 13:52:55.069844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:41.358 [2024-10-15 13:52:55.069891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:41.358 [2024-10-15 13:52:55.069902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:41.358 [2024-10-15 13:52:55.069910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.358 [2024-10-15 13:52:55.133303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:41.358 [2024-10-15 13:52:55.133349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:41.358 [2024-10-15 13:52:55.133364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:41.358 [2024-10-15 13:52:55.133372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.358 [2024-10-15 13:52:55.133437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:41.358 [2024-10-15 13:52:55.133446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:41.358 [2024-10-15 13:52:55.133459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:41.358 [2024-10-15 13:52:55.133467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.358 [2024-10-15 13:52:55.133494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:41.358 [2024-10-15 13:52:55.133503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:41.358 [2024-10-15 13:52:55.133510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:41.358 [2024-10-15 13:52:55.133518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.358 [2024-10-15 13:52:55.133604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:41.358 [2024-10-15 13:52:55.133613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:41.358 [2024-10-15 13:52:55.133621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:41.358 [2024-10-15 13:52:55.133628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.358 [2024-10-15 13:52:55.133657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:41.358 [2024-10-15 13:52:55.133666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:41.358 
[2024-10-15 13:52:55.133673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:41.358 [2024-10-15 13:52:55.133681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.358 [2024-10-15 13:52:55.133716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:41.358 [2024-10-15 13:52:55.133725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:41.358 [2024-10-15 13:52:55.133732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:41.358 [2024-10-15 13:52:55.133739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.358 [2024-10-15 13:52:55.133780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:41.358 [2024-10-15 13:52:55.133789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:41.358 [2024-10-15 13:52:55.133797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:41.358 [2024-10-15 13:52:55.133804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.358 [2024-10-15 13:52:55.133936] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 332.837 ms, result 0 00:18:42.299 00:18:42.299 00:18:42.299 13:52:55 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:18:42.299 13:52:55 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:18:42.865 13:52:56 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:42.865 [2024-10-15 13:52:56.432229] Starting SPDK v25.01-pre git sha1 5a8c76d99 / DPDK 24.03.0 initialization... 
00:18:42.865 [2024-10-15 13:52:56.432366] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74072 ] 00:18:42.865 [2024-10-15 13:52:56.581417] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:43.124 [2024-10-15 13:52:56.679217] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:43.383 [2024-10-15 13:52:56.929512] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:43.383 [2024-10-15 13:52:56.929575] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:43.383 [2024-10-15 13:52:57.083724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.383 [2024-10-15 13:52:57.083917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:43.383 [2024-10-15 13:52:57.083942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:43.383 [2024-10-15 13:52:57.083959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.383 [2024-10-15 13:52:57.086568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.383 [2024-10-15 13:52:57.086602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:43.383 [2024-10-15 13:52:57.086612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.586 ms 00:18:43.383 [2024-10-15 13:52:57.086619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.383 [2024-10-15 13:52:57.086684] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:43.383 [2024-10-15 13:52:57.087402] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:43.383 [2024-10-15 13:52:57.087423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.383 [2024-10-15 13:52:57.087431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:43.383 [2024-10-15 13:52:57.087439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.745 ms 00:18:43.383 [2024-10-15 13:52:57.087447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.383 [2024-10-15 13:52:57.088526] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:43.383 [2024-10-15 13:52:57.100872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.383 [2024-10-15 13:52:57.100904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:43.383 [2024-10-15 13:52:57.100915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.348 ms 00:18:43.383 [2024-10-15 13:52:57.100926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.383 [2024-10-15 13:52:57.101005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.383 [2024-10-15 13:52:57.101016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:18:43.383 [2024-10-15 13:52:57.101024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:18:43.383 [2024-10-15 13:52:57.101032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.383 [2024-10-15 13:52:57.105692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:18:43.383 [2024-10-15 13:52:57.105725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:43.383 [2024-10-15 13:52:57.105734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.621 ms 00:18:43.383 [2024-10-15 13:52:57.105741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.383 [2024-10-15 13:52:57.105822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.383 [2024-10-15 13:52:57.105831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:43.383 [2024-10-15 13:52:57.105839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:18:43.383 [2024-10-15 13:52:57.105846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.383 [2024-10-15 13:52:57.105869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.383 [2024-10-15 13:52:57.105877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:43.383 [2024-10-15 13:52:57.105885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:43.383 [2024-10-15 13:52:57.105895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.383 [2024-10-15 13:52:57.105914] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:18:43.383 [2024-10-15 13:52:57.109096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.383 [2024-10-15 13:52:57.109254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:43.383 [2024-10-15 13:52:57.109271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.186 ms 00:18:43.383 [2024-10-15 13:52:57.109279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.383 [2024-10-15 13:52:57.109316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.383 [2024-10-15 13:52:57.109324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:43.383 [2024-10-15 13:52:57.109332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:18:43.383 [2024-10-15 13:52:57.109339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.383 [2024-10-15 13:52:57.109356] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:43.383 [2024-10-15 13:52:57.109374] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:18:43.383 [2024-10-15 13:52:57.109410] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:43.383 [2024-10-15 13:52:57.109425] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:18:43.383 [2024-10-15 13:52:57.109526] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:43.383 [2024-10-15 13:52:57.109536] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:43.383 [2024-10-15 13:52:57.109546] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:18:43.383 [2024-10-15 13:52:57.109556] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:43.383 [2024-10-15 13:52:57.109564] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:43.383 [2024-10-15 13:52:57.109572] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:18:43.383 [2024-10-15 13:52:57.109581] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:43.383 [2024-10-15 13:52:57.109588] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:43.383 [2024-10-15 13:52:57.109596] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:43.383 [2024-10-15 13:52:57.109603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.383 [2024-10-15 13:52:57.109610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:43.383 [2024-10-15 13:52:57.109617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.248 ms 00:18:43.383 [2024-10-15 13:52:57.109625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.383 [2024-10-15 13:52:57.109711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.383 [2024-10-15 13:52:57.109719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:43.383 [2024-10-15 13:52:57.109726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:18:43.383 [2024-10-15 13:52:57.109736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.383 [2024-10-15 13:52:57.109847] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:43.383 [2024-10-15 13:52:57.109857] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:43.383 [2024-10-15 13:52:57.109865] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:43.383 [2024-10-15 13:52:57.109873] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:43.383 [2024-10-15 13:52:57.109880] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:43.383 [2024-10-15 13:52:57.109887] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:43.383 [2024-10-15 13:52:57.109893] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:18:43.383 [2024-10-15 13:52:57.109900] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:43.383 [2024-10-15 13:52:57.109908] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:18:43.383 [2024-10-15 13:52:57.109915] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:43.383 [2024-10-15 13:52:57.109921] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:43.383 [2024-10-15 13:52:57.109928] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:18:43.383 [2024-10-15 13:52:57.109935] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:43.383 [2024-10-15 13:52:57.109947] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:43.383 [2024-10-15 13:52:57.109954] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:18:43.383 [2024-10-15 13:52:57.109960] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:43.383 [2024-10-15 13:52:57.109968] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:43.383 [2024-10-15 13:52:57.109975] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:18:43.383 [2024-10-15 13:52:57.109981] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:43.383 [2024-10-15 13:52:57.109988] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:43.383 [2024-10-15 13:52:57.109994] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:18:43.383 [2024-10-15 13:52:57.110001] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:43.383 [2024-10-15 13:52:57.110007] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:43.383 [2024-10-15 13:52:57.110013] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:18:43.383 [2024-10-15 13:52:57.110019] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:43.383 [2024-10-15 13:52:57.110026] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:43.383 [2024-10-15 13:52:57.110033] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:18:43.383 [2024-10-15 13:52:57.110039] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:43.383 [2024-10-15 13:52:57.110045] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:43.383 [2024-10-15 13:52:57.110051] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:18:43.383 [2024-10-15 13:52:57.110058] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:43.383 [2024-10-15 13:52:57.110064] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:43.383 [2024-10-15 13:52:57.110070] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:18:43.383 [2024-10-15 13:52:57.110077] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:43.383 [2024-10-15 13:52:57.110083] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:43.383 [2024-10-15 13:52:57.110089] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:18:43.383 [2024-10-15 13:52:57.110095] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:43.383 [2024-10-15 13:52:57.110102] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:43.383 [2024-10-15 13:52:57.110108] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:18:43.383 [2024-10-15 13:52:57.110114] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:43.383 [2024-10-15 13:52:57.110120] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:43.383 [2024-10-15 13:52:57.110126] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:18:43.384 [2024-10-15 13:52:57.110133] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:43.384 [2024-10-15 13:52:57.110139] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:43.384 [2024-10-15 13:52:57.110147] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:43.384 [2024-10-15 13:52:57.110153] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:43.384 [2024-10-15 13:52:57.110160] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:43.384 [2024-10-15 13:52:57.110167] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:43.384 [2024-10-15 13:52:57.110175] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:43.384 [2024-10-15 13:52:57.110181] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:43.384 
[2024-10-15 13:52:57.110188] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:43.384 [2024-10-15 13:52:57.110194] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:43.384 [2024-10-15 13:52:57.110200] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:43.384 [2024-10-15 13:52:57.110208] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:43.384 [2024-10-15 13:52:57.110231] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:43.384 [2024-10-15 13:52:57.110240] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:18:43.384 [2024-10-15 13:52:57.110247] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:18:43.384 [2024-10-15 13:52:57.110254] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:18:43.384 [2024-10-15 13:52:57.110261] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:18:43.384 [2024-10-15 13:52:57.110268] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:18:43.384 [2024-10-15 13:52:57.110275] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:18:43.384 [2024-10-15 13:52:57.110282] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:18:43.384 [2024-10-15 13:52:57.110289] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:18:43.384 [2024-10-15 13:52:57.110296] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:18:43.384 [2024-10-15 13:52:57.110303] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:18:43.384 [2024-10-15 13:52:57.110310] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:18:43.384 [2024-10-15 13:52:57.110316] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:18:43.384 [2024-10-15 13:52:57.110323] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:18:43.384 [2024-10-15 13:52:57.110330] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:18:43.384 [2024-10-15 13:52:57.110337] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:43.384 [2024-10-15 13:52:57.110345] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:43.384 [2024-10-15 13:52:57.110353] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:18:43.384 [2024-10-15 13:52:57.110360] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:43.384 [2024-10-15 13:52:57.110367] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:43.384 [2024-10-15 13:52:57.110374] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:43.384 [2024-10-15 13:52:57.110382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.384 [2024-10-15 13:52:57.110389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:43.384 [2024-10-15 13:52:57.110396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.603 ms 00:18:43.384 [2024-10-15 13:52:57.110405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.384 [2024-10-15 13:52:57.135732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.384 [2024-10-15 13:52:57.135765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:43.384 [2024-10-15 13:52:57.135774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.277 ms 00:18:43.384 [2024-10-15 13:52:57.135782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.384 [2024-10-15 13:52:57.135894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.384 [2024-10-15 13:52:57.135903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:43.384 [2024-10-15 13:52:57.135911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:18:43.384 [2024-10-15 13:52:57.135921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.642 [2024-10-15 13:52:57.178774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.642 [2024-10-15 13:52:57.178813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:43.642 [2024-10-15 13:52:57.178825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.832 ms 00:18:43.642 [2024-10-15 13:52:57.178833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.642 [2024-10-15 13:52:57.178926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.642 [2024-10-15 13:52:57.178937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:43.642 [2024-10-15 13:52:57.178946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:43.642 [2024-10-15 13:52:57.178953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.642 [2024-10-15 13:52:57.179284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.642 [2024-10-15 13:52:57.179302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:43.642 [2024-10-15 13:52:57.179313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.310 ms 00:18:43.642 [2024-10-15 13:52:57.179324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.642 [2024-10-15 13:52:57.179464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.642 [2024-10-15 13:52:57.179476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:43.642 [2024-10-15 13:52:57.179484] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.107 ms 00:18:43.642 [2024-10-15 13:52:57.179492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.642 [2024-10-15 13:52:57.192691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.642 [2024-10-15 13:52:57.192828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:43.642 [2024-10-15 13:52:57.192845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.173 ms 00:18:43.642 [2024-10-15 13:52:57.192852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.642 [2024-10-15 13:52:57.204979] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:18:43.642 [2024-10-15 13:52:57.205011] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:43.642 [2024-10-15 13:52:57.205022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.642 [2024-10-15 13:52:57.205029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:43.642 [2024-10-15 13:52:57.205038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.069 ms 00:18:43.642 [2024-10-15 13:52:57.205045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.642 [2024-10-15 13:52:57.228846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.642 [2024-10-15 13:52:57.228886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:43.642 [2024-10-15 13:52:57.228896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.732 ms 00:18:43.642 [2024-10-15 13:52:57.228904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.642 [2024-10-15 13:52:57.240350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.642 [2024-10-15 13:52:57.240379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:43.642 [2024-10-15 13:52:57.240388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.392 ms 00:18:43.642 [2024-10-15 13:52:57.240395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.642 [2024-10-15 13:52:57.251695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.642 [2024-10-15 13:52:57.251818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:18:43.642 [2024-10-15 13:52:57.251841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.238 ms 00:18:43.642 [2024-10-15 13:52:57.251851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.642 [2024-10-15 13:52:57.252487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.642 [2024-10-15 13:52:57.252511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:43.642 [2024-10-15 13:52:57.252520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.549 ms 00:18:43.642 [2024-10-15 13:52:57.252528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.642 [2024-10-15 13:52:57.306409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.642 [2024-10-15 13:52:57.306585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:43.642 [2024-10-15 13:52:57.306607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 53.858 ms 00:18:43.642 [2024-10-15 13:52:57.306615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.642 [2024-10-15 13:52:57.317189] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:18:43.642 [2024-10-15 13:52:57.330767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.642 [2024-10-15 13:52:57.330805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:43.642 [2024-10-15 13:52:57.330818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.824 ms 00:18:43.642 [2024-10-15 13:52:57.330826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.642 [2024-10-15 13:52:57.330907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.642 [2024-10-15 13:52:57.330921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:43.642 [2024-10-15 13:52:57.330930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:18:43.642 [2024-10-15 13:52:57.330938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.642 [2024-10-15 13:52:57.330981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.642 [2024-10-15 13:52:57.330990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:43.642 [2024-10-15 13:52:57.330997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:18:43.642 [2024-10-15 13:52:57.331005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.642 [2024-10-15 13:52:57.331029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.642 [2024-10-15 13:52:57.331040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:43.642 [2024-10-15 13:52:57.331050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:43.643 [2024-10-15 13:52:57.331057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.643 [2024-10-15 13:52:57.331084] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:43.643 [2024-10-15 13:52:57.331093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.643 [2024-10-15 13:52:57.331100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:43.643 [2024-10-15 13:52:57.331108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:18:43.643 [2024-10-15 13:52:57.331115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.643 [2024-10-15 13:52:57.354567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.643 [2024-10-15 13:52:57.354605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:43.643 [2024-10-15 13:52:57.354616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.430 ms 00:18:43.643 [2024-10-15 13:52:57.354624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.643 [2024-10-15 13:52:57.354712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.643 [2024-10-15 13:52:57.354722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:43.643 [2024-10-15 13:52:57.354731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:18:43.643 [2024-10-15 13:52:57.354738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:18:43.643 [2024-10-15 13:52:57.355488] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:18:43.643 [2024-10-15 13:52:57.358514] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 271.462 ms, result 0
00:18:43.643 [2024-10-15 13:52:57.359302] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:18:43.643 [2024-10-15 13:52:57.372295] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:18:43.901  [2024-10-15T13:52:57.689Z] Copying: 4096/4096 [kB] (average 39 MBps)
[2024-10-15 13:52:57.477924] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:18:43.901 [2024-10-15 13:52:57.488145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:43.901 [2024-10-15 13:52:57.488290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:18:43.901 [2024-10-15 13:52:57.488312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms
00:18:43.901 [2024-10-15 13:52:57.488320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:43.901 [2024-10-15 13:52:57.488345] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread
00:18:43.901 [2024-10-15 13:52:57.490865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:43.901 [2024-10-15 13:52:57.490896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:18:43.901 [2024-10-15 13:52:57.490907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.508 ms
00:18:43.901 [2024-10-15 13:52:57.490914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:43.901 [2024-10-15 13:52:57.492821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:43.901 [2024-10-15 13:52:57.492852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:18:43.901 [2024-10-15 13:52:57.492861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.886 ms
00:18:43.901 [2024-10-15 13:52:57.492868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:43.901 [2024-10-15 13:52:57.496797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:43.901 [2024-10-15 13:52:57.496822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:18:43.901 [2024-10-15 13:52:57.496831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.913 ms
00:18:43.901 [2024-10-15 13:52:57.496842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:43.902 [2024-10-15 13:52:57.503782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:43.902 [2024-10-15 13:52:57.503888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims
00:18:43.902 [2024-10-15 13:52:57.503902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.915 ms
00:18:43.902 [2024-10-15 13:52:57.503909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:43.902 [2024-10-15 13:52:57.526351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:43.902 [2024-10-15 13:52:57.526461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata
00:18:43.902 [2024-10-15 13:52:57.526477] mngt/ftl_mngt.c: 430:trace_step:
*NOTICE*: [FTL][ftl0] duration: 22.370 ms 00:18:43.902 [2024-10-15 13:52:57.526484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.902 [2024-10-15 13:52:57.540400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.902 [2024-10-15 13:52:57.540432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:43.902 [2024-10-15 13:52:57.540443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.884 ms 00:18:43.902 [2024-10-15 13:52:57.540455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.902 [2024-10-15 13:52:57.540571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.902 [2024-10-15 13:52:57.540580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:43.902 [2024-10-15 13:52:57.540588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:18:43.902 [2024-10-15 13:52:57.540595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.902 [2024-10-15 13:52:57.563662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.902 [2024-10-15 13:52:57.563690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:18:43.902 [2024-10-15 13:52:57.563700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.045 ms 00:18:43.902 [2024-10-15 13:52:57.563706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.902 [2024-10-15 13:52:57.585956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.902 [2024-10-15 13:52:57.586062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:18:43.902 [2024-10-15 13:52:57.586076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.218 ms 00:18:43.902 [2024-10-15 13:52:57.586082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.902 [2024-10-15 13:52:57.607889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.902 [2024-10-15 13:52:57.607917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:43.902 [2024-10-15 13:52:57.607927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.778 ms 00:18:43.902 [2024-10-15 13:52:57.607934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.902 [2024-10-15 13:52:57.629912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.902 [2024-10-15 13:52:57.629943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:43.902 [2024-10-15 13:52:57.629952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.905 ms 00:18:43.902 [2024-10-15 13:52:57.629960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.902 [2024-10-15 13:52:57.629992] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:43.902 [2024-10-15 13:52:57.630005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:43.902 [2024-10-15 13:52:57.630018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:43.902 [2024-10-15 13:52:57.630025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:43.902 [2024-10-15 13:52:57.630032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 
00:18:43.902 [2024-10-15 13:52:57.630040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:43.902 [2024-10-15 13:52:57.630047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:43.902 [2024-10-15 13:52:57.630054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:43.902 [2024-10-15 13:52:57.630061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:43.902 [2024-10-15 13:52:57.630069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:43.902 [2024-10-15 13:52:57.630076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:43.902 [2024-10-15 13:52:57.630083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:43.902 [2024-10-15 13:52:57.630090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:43.902 [2024-10-15 13:52:57.630098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:43.902 [2024-10-15 13:52:57.630105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:43.902 [2024-10-15 13:52:57.630112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:43.902 [2024-10-15 13:52:57.630119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:43.902 [2024-10-15 13:52:57.630126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:43.902 [2024-10-15 13:52:57.630133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:43.902 [2024-10-15 13:52:57.630140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:43.902 [2024-10-15 13:52:57.630147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:43.902 [2024-10-15 13:52:57.630154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:43.902 [2024-10-15 13:52:57.630161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:43.902 [2024-10-15 13:52:57.630168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:43.902 [2024-10-15 13:52:57.630175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:43.902 [2024-10-15 13:52:57.630182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:43.902 [2024-10-15 13:52:57.630189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:43.902 [2024-10-15 13:52:57.630197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:43.902 [2024-10-15 13:52:57.630204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:43.902 [2024-10-15 13:52:57.630211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 
wr_cnt: 0 state: free 00:18:43.902 [2024-10-15 13:52:57.630232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:43.902 [2024-10-15 13:52:57.630240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:43.902 [2024-10-15 13:52:57.630248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:43.902 [2024-10-15 13:52:57.630255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:43.902 [2024-10-15 13:52:57.630263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:43.902 [2024-10-15 13:52:57.630270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:43.902 [2024-10-15 13:52:57.630278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:43.902 [2024-10-15 13:52:57.630285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:43.902 [2024-10-15 13:52:57.630293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:43.902 [2024-10-15 13:52:57.630300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:43.902 [2024-10-15 13:52:57.630310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:43.902 [2024-10-15 13:52:57.630322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:43.902 [2024-10-15 13:52:57.630334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:43.902 [2024-10-15 13:52:57.630345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:43.902 [2024-10-15 13:52:57.630357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:43.902 [2024-10-15 13:52:57.630369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:43.902 [2024-10-15 13:52:57.630380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:43.902 [2024-10-15 13:52:57.630393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:43.902 [2024-10-15 13:52:57.630404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:43.902 [2024-10-15 13:52:57.630414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:43.902 [2024-10-15 13:52:57.630425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:43.902 [2024-10-15 13:52:57.630437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:43.902 [2024-10-15 13:52:57.630449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:43.902 [2024-10-15 13:52:57.630461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:43.902 [2024-10-15 13:52:57.630473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:43.902 [2024-10-15 13:52:57.630485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:43.902 [2024-10-15 13:52:57.630496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:43.902 [2024-10-15 13:52:57.630508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:43.902 [2024-10-15 13:52:57.630520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:43.902 [2024-10-15 13:52:57.630533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:43.902 [2024-10-15 13:52:57.630546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:43.902 [2024-10-15 13:52:57.630558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:43.902 [2024-10-15 13:52:57.630577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:43.902 [2024-10-15 13:52:57.630591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:43.902 [2024-10-15 13:52:57.630608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:43.902 [2024-10-15 13:52:57.630621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:43.902 [2024-10-15 13:52:57.630634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:43.902 [2024-10-15 13:52:57.630651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:43.902 [2024-10-15 13:52:57.630663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:43.903 [2024-10-15 13:52:57.630676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:43.903 [2024-10-15 13:52:57.630695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:43.903 [2024-10-15 13:52:57.630705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:43.903 [2024-10-15 13:52:57.630712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:43.903 [2024-10-15 13:52:57.630719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:43.903 [2024-10-15 13:52:57.630727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:43.903 [2024-10-15 13:52:57.630734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:43.903 [2024-10-15 13:52:57.630741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:43.903 [2024-10-15 13:52:57.630748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:43.903 [2024-10-15 13:52:57.630755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:43.903 [2024-10-15 13:52:57.630763] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:43.903 [2024-10-15 13:52:57.630770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:43.903 [2024-10-15 13:52:57.630777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:43.903 [2024-10-15 13:52:57.630784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:43.903 [2024-10-15 13:52:57.630791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:43.903 [2024-10-15 13:52:57.630798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:43.903 [2024-10-15 13:52:57.630805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:43.903 [2024-10-15 13:52:57.630812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:43.903 [2024-10-15 13:52:57.630819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:43.903 [2024-10-15 13:52:57.630826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:43.903 [2024-10-15 13:52:57.630833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:43.903 [2024-10-15 13:52:57.630840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:43.903 [2024-10-15 13:52:57.630847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:43.903 [2024-10-15 13:52:57.630854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:43.903 [2024-10-15 13:52:57.630861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:43.903 [2024-10-15 13:52:57.630870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:43.903 [2024-10-15 13:52:57.630878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:43.903 [2024-10-15 13:52:57.630892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:43.903 [2024-10-15 13:52:57.630899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:43.903 [2024-10-15 13:52:57.630906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:43.903 [2024-10-15 13:52:57.630914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:43.903 [2024-10-15 13:52:57.630922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:43.903 [2024-10-15 13:52:57.630937] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:43.903 [2024-10-15 13:52:57.630945] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 6f658c17-becd-47e5-8bb1-b864f80d9f09 00:18:43.903 [2024-10-15 13:52:57.630953] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:43.903 [2024-10-15 13:52:57.630960] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total 
writes: 960 00:18:43.903 [2024-10-15 13:52:57.630966] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:43.903 [2024-10-15 13:52:57.630973] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:43.903 [2024-10-15 13:52:57.630980] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:43.903 [2024-10-15 13:52:57.630987] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:43.903 [2024-10-15 13:52:57.630994] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:43.903 [2024-10-15 13:52:57.631000] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:43.903 [2024-10-15 13:52:57.631006] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:43.903 [2024-10-15 13:52:57.631013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.903 [2024-10-15 13:52:57.631020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:43.903 [2024-10-15 13:52:57.631028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.022 ms 00:18:43.903 [2024-10-15 13:52:57.631038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.903 [2024-10-15 13:52:57.643333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.903 [2024-10-15 13:52:57.643430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:43.903 [2024-10-15 13:52:57.643477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.275 ms 00:18:43.903 [2024-10-15 13:52:57.643499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.903 [2024-10-15 13:52:57.643881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.903 [2024-10-15 13:52:57.643918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:43.903 [2024-10-15 13:52:57.644195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.339 ms 00:18:43.903 [2024-10-15 13:52:57.644264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.903 [2024-10-15 13:52:57.678898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:43.903 [2024-10-15 13:52:57.679009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:43.903 [2024-10-15 13:52:57.679057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:43.903 [2024-10-15 13:52:57.679079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.903 [2024-10-15 13:52:57.679156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:43.903 [2024-10-15 13:52:57.679240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:43.903 [2024-10-15 13:52:57.679272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:43.903 [2024-10-15 13:52:57.679291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.903 [2024-10-15 13:52:57.679375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:43.903 [2024-10-15 13:52:57.679399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:43.903 [2024-10-15 13:52:57.679419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:43.903 [2024-10-15 13:52:57.679437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.903 [2024-10-15 13:52:57.679495] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:43.903 [2024-10-15 13:52:57.679516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:43.903 [2024-10-15 13:52:57.679564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:43.903 [2024-10-15 13:52:57.679588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.161 [2024-10-15 13:52:57.755886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:44.161 [2024-10-15 13:52:57.756018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:44.161 [2024-10-15 13:52:57.756071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:44.161 [2024-10-15 13:52:57.756093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.161 [2024-10-15 13:52:57.818756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:44.161 [2024-10-15 13:52:57.818898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:44.161 [2024-10-15 13:52:57.818962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:44.161 [2024-10-15 13:52:57.818986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.161 [2024-10-15 13:52:57.819052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:44.161 [2024-10-15 13:52:57.819113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:44.161 [2024-10-15 13:52:57.819162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:44.161 [2024-10-15 13:52:57.819180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.161 [2024-10-15 13:52:57.819217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:44.161 [2024-10-15 13:52:57.819256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:44.161 [2024-10-15 13:52:57.819276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:44.161 [2024-10-15 13:52:57.819294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.161 [2024-10-15 13:52:57.819724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:44.161 [2024-10-15 13:52:57.819746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:44.161 [2024-10-15 13:52:57.819756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:44.161 [2024-10-15 13:52:57.819763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.161 [2024-10-15 13:52:57.819795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:44.161 [2024-10-15 13:52:57.819804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:44.161 [2024-10-15 13:52:57.819812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:44.161 [2024-10-15 13:52:57.819819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.161 [2024-10-15 13:52:57.819859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:44.161 [2024-10-15 13:52:57.819867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:44.161 [2024-10-15 13:52:57.819875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:44.161 [2024-10-15 13:52:57.819882] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0
00:18:44.161 [2024-10-15 13:52:57.819924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:18:44.161 [2024-10-15 13:52:57.819933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:18:44.161 [2024-10-15 13:52:57.819941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:18:44.161 [2024-10-15 13:52:57.819948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:44.161 [2024-10-15 13:52:57.820091] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 331.935 ms, result 0
00:18:44.725
00:18:44.725
00:18:44.725 13:52:58 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=74103
00:18:44.725 13:52:58 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init
00:18:44.725 13:52:58 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 74103
00:18:44.725 13:52:58 ftl.ftl_trim -- common/autotest_common.sh@831 -- # '[' -z 74103 ']'
00:18:44.725 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:18:44.725 13:52:58 ftl.ftl_trim -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:18:44.725 13:52:58 ftl.ftl_trim -- common/autotest_common.sh@836 -- # local max_retries=100
00:18:44.725 13:52:58 ftl.ftl_trim -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:18:44.725 13:52:58 ftl.ftl_trim -- common/autotest_common.sh@840 -- # xtrace_disable
00:18:44.725 13:52:58 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x
00:18:44.984 [2024-10-15 13:52:58.563914] Starting SPDK v25.01-pre git sha1 5a8c76d99 / DPDK 24.03.0 initialization...
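
[Editor's note] The xtrace above shows the essence of this step: trim.sh launches a fresh spdk_tgt with the ftl_init log component enabled, records its pid in svcpid, and then blocks in waitforlisten until the target's RPC socket answers. Below is a minimal sketch of that launch-and-wait pattern; the binary and socket paths are taken from the log, and the retry bound mirrors the traced max_retries=100, but the polling RPC (rpc_get_methods) and the 0.5 s retry interval are illustrative assumptions, not the verbatim autotest_common.sh implementation.

#!/usr/bin/env bash
# Sketch of the spdk_tgt launch/wait pattern exercised by the trace above.
SPDK_BIN=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
SOCK=/var/tmp/spdk.sock

"$SPDK_BIN" -L ftl_init &          # -L ftl_init enables the FTL init debug log component
svcpid=$!

echo "Waiting for process to start up and listen on UNIX domain socket $SOCK..."
for ((i = 0; i < 100; i++)); do    # 100 mirrors max_retries=100 in the trace
    # Poll the UNIX-domain RPC socket; any successful RPC means the target is up.
    if "$RPC" -t 1 -s "$SOCK" rpc_get_methods &> /dev/null; then
        echo "spdk_tgt (pid $svcpid) is ready"
        break
    fi
    sleep 0.5                      # interval is an assumption, not from the log
done
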
00:18:44.984 [2024-10-15 13:52:58.564042] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74103 ]
00:18:44.984 [2024-10-15 13:52:58.712699] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:45.242 [2024-10-15 13:52:58.808765] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:18:45.809 13:52:59 ftl.ftl_trim -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:18:45.809 13:52:59 ftl.ftl_trim -- common/autotest_common.sh@864 -- # return 0
00:18:45.809 13:52:59 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config
00:18:46.067 [2024-10-15 13:52:59.616180] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:18:46.067 [2024-10-15 13:52:59.616251] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:18:46.067 [2024-10-15 13:52:59.786432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:46.067 [2024-10-15 13:52:59.786478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration
00:18:46.067 [2024-10-15 13:52:59.786493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:18:46.067 [2024-10-15 13:52:59.786501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:46.067 [2024-10-15 13:52:59.789104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:46.067 [2024-10-15 13:52:59.789139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:18:46.067 [2024-10-15 13:52:59.789150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.585 ms
00:18:46.067 [2024-10-15 13:52:59.789157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:46.067 [2024-10-15 13:52:59.789243] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:18:46.067 [2024-10-15 13:52:59.789961] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:18:46.067 [2024-10-15 13:52:59.789994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:46.067 [2024-10-15 13:52:59.790003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:18:46.067 [2024-10-15 13:52:59.790013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.776 ms
00:18:46.067 [2024-10-15 13:52:59.790020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:46.067 [2024-10-15 13:52:59.791083] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
00:18:46.067 [2024-10-15 13:52:59.803232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:46.067 [2024-10-15 13:52:59.803360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block
00:18:46.067 [2024-10-15 13:52:59.803378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.154 ms
00:18:46.067 [2024-10-15 13:52:59.803393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:46.067 [2024-10-15 13:52:59.803469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:46.067 [2024-10-15 13:52:59.803481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block
00:18:46.067 [2024-10-15 13:52:59.803489]
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:18:46.067 [2024-10-15 13:52:59.803498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.067 [2024-10-15 13:52:59.808160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.067 [2024-10-15 13:52:59.808193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:46.067 [2024-10-15 13:52:59.808202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.617 ms 00:18:46.067 [2024-10-15 13:52:59.808211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.067 [2024-10-15 13:52:59.808328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.067 [2024-10-15 13:52:59.808346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:46.067 [2024-10-15 13:52:59.808358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:18:46.067 [2024-10-15 13:52:59.808371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.067 [2024-10-15 13:52:59.808406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.067 [2024-10-15 13:52:59.808433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:46.067 [2024-10-15 13:52:59.808446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:18:46.067 [2024-10-15 13:52:59.808461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.067 [2024-10-15 13:52:59.808492] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:18:46.067 [2024-10-15 13:52:59.811630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.067 [2024-10-15 13:52:59.811657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:46.067 [2024-10-15 13:52:59.811667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.143 ms 00:18:46.067 [2024-10-15 13:52:59.811675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.067 [2024-10-15 13:52:59.811710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.067 [2024-10-15 13:52:59.811718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:46.067 [2024-10-15 13:52:59.811727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:18:46.067 [2024-10-15 13:52:59.811734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.067 [2024-10-15 13:52:59.811755] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:46.067 [2024-10-15 13:52:59.811774] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:18:46.067 [2024-10-15 13:52:59.811813] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:46.067 [2024-10-15 13:52:59.811828] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:18:46.067 [2024-10-15 13:52:59.811935] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:46.067 [2024-10-15 13:52:59.811945] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:46.067 [2024-10-15 13:52:59.811968] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:18:46.067 [2024-10-15 13:52:59.811980] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:46.067 [2024-10-15 13:52:59.811993] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:46.067 [2024-10-15 13:52:59.812001] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:18:46.067 [2024-10-15 13:52:59.812009] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:46.067 [2024-10-15 13:52:59.812017] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:46.067 [2024-10-15 13:52:59.812026] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:46.067 [2024-10-15 13:52:59.812034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.067 [2024-10-15 13:52:59.812042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:46.067 [2024-10-15 13:52:59.812049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.282 ms 00:18:46.067 [2024-10-15 13:52:59.812058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.067 [2024-10-15 13:52:59.812144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.067 [2024-10-15 13:52:59.812154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:46.067 [2024-10-15 13:52:59.812163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:18:46.067 [2024-10-15 13:52:59.812171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.067 [2024-10-15 13:52:59.812291] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:46.067 [2024-10-15 13:52:59.812305] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:46.067 [2024-10-15 13:52:59.812314] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:46.067 [2024-10-15 13:52:59.812323] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:46.067 [2024-10-15 13:52:59.812330] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:46.067 [2024-10-15 13:52:59.812338] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:46.067 [2024-10-15 13:52:59.812345] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:18:46.067 [2024-10-15 13:52:59.812357] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:46.067 [2024-10-15 13:52:59.812364] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:18:46.067 [2024-10-15 13:52:59.812372] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:46.067 [2024-10-15 13:52:59.812379] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:46.067 [2024-10-15 13:52:59.812387] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:18:46.067 [2024-10-15 13:52:59.812393] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:46.067 [2024-10-15 13:52:59.812401] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:46.067 [2024-10-15 13:52:59.812408] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:18:46.067 [2024-10-15 13:52:59.812415] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:46.067 
[2024-10-15 13:52:59.812422] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:46.067 [2024-10-15 13:52:59.812430] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:18:46.067 [2024-10-15 13:52:59.812436] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:46.067 [2024-10-15 13:52:59.812445] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:46.067 [2024-10-15 13:52:59.812456] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:18:46.067 [2024-10-15 13:52:59.812464] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:46.067 [2024-10-15 13:52:59.812470] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:46.067 [2024-10-15 13:52:59.812480] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:18:46.067 [2024-10-15 13:52:59.812487] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:46.067 [2024-10-15 13:52:59.812495] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:46.067 [2024-10-15 13:52:59.812501] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:18:46.067 [2024-10-15 13:52:59.812509] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:46.067 [2024-10-15 13:52:59.812515] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:46.067 [2024-10-15 13:52:59.812523] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:18:46.067 [2024-10-15 13:52:59.812529] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:46.067 [2024-10-15 13:52:59.812538] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:46.067 [2024-10-15 13:52:59.812544] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:18:46.067 [2024-10-15 13:52:59.812552] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:46.067 [2024-10-15 13:52:59.812564] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:46.067 [2024-10-15 13:52:59.812572] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:18:46.067 [2024-10-15 13:52:59.812578] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:46.067 [2024-10-15 13:52:59.812586] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:46.067 [2024-10-15 13:52:59.812593] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:18:46.068 [2024-10-15 13:52:59.812602] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:46.068 [2024-10-15 13:52:59.812608] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:46.068 [2024-10-15 13:52:59.812616] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:18:46.068 [2024-10-15 13:52:59.812622] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:46.068 [2024-10-15 13:52:59.812630] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:46.068 [2024-10-15 13:52:59.812637] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:46.068 [2024-10-15 13:52:59.812646] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:46.068 [2024-10-15 13:52:59.812654] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:46.068 [2024-10-15 13:52:59.812663] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:18:46.068 [2024-10-15 13:52:59.812669] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:46.068 [2024-10-15 13:52:59.812677] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:46.068 [2024-10-15 13:52:59.812684] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:46.068 [2024-10-15 13:52:59.812691] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:46.068 [2024-10-15 13:52:59.812698] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:46.068 [2024-10-15 13:52:59.812707] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:46.068 [2024-10-15 13:52:59.812716] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:46.068 [2024-10-15 13:52:59.812728] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:18:46.068 [2024-10-15 13:52:59.812736] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:18:46.068 [2024-10-15 13:52:59.812745] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:18:46.068 [2024-10-15 13:52:59.812751] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:18:46.068 [2024-10-15 13:52:59.812760] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:18:46.068 [2024-10-15 13:52:59.812767] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:18:46.068 [2024-10-15 13:52:59.812775] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:18:46.068 [2024-10-15 13:52:59.812782] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:18:46.068 [2024-10-15 13:52:59.812790] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:18:46.068 [2024-10-15 13:52:59.812797] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:18:46.068 [2024-10-15 13:52:59.812806] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:18:46.068 [2024-10-15 13:52:59.812813] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:18:46.068 [2024-10-15 13:52:59.812821] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:18:46.068 [2024-10-15 13:52:59.812828] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:18:46.068 [2024-10-15 13:52:59.812837] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:46.068 [2024-10-15 
13:52:59.812844] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:46.068 [2024-10-15 13:52:59.812855] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:46.068 [2024-10-15 13:52:59.812862] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:46.068 [2024-10-15 13:52:59.812871] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:46.068 [2024-10-15 13:52:59.812877] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:46.068 [2024-10-15 13:52:59.812886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.068 [2024-10-15 13:52:59.812893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:46.068 [2024-10-15 13:52:59.812902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.683 ms 00:18:46.068 [2024-10-15 13:52:59.812908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.068 [2024-10-15 13:52:59.838186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.068 [2024-10-15 13:52:59.838236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:46.068 [2024-10-15 13:52:59.838248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.209 ms 00:18:46.068 [2024-10-15 13:52:59.838256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.068 [2024-10-15 13:52:59.838370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.068 [2024-10-15 13:52:59.838404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:46.068 [2024-10-15 13:52:59.838418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:18:46.068 [2024-10-15 13:52:59.838429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.328 [2024-10-15 13:52:59.868277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.328 [2024-10-15 13:52:59.868308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:46.328 [2024-10-15 13:52:59.868320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.820 ms 00:18:46.328 [2024-10-15 13:52:59.868330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.328 [2024-10-15 13:52:59.868385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.328 [2024-10-15 13:52:59.868394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:46.328 [2024-10-15 13:52:59.868403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:46.328 [2024-10-15 13:52:59.868410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.328 [2024-10-15 13:52:59.868705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.328 [2024-10-15 13:52:59.868717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:46.328 [2024-10-15 13:52:59.868727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.273 ms 00:18:46.328 [2024-10-15 13:52:59.868734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:18:46.328 [2024-10-15 13:52:59.868857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.328 [2024-10-15 13:52:59.868871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:46.328 [2024-10-15 13:52:59.868881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.099 ms 00:18:46.328 [2024-10-15 13:52:59.868888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.328 [2024-10-15 13:52:59.882880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.328 [2024-10-15 13:52:59.882909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:46.328 [2024-10-15 13:52:59.882921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.970 ms 00:18:46.328 [2024-10-15 13:52:59.882928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.328 [2024-10-15 13:52:59.895297] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:18:46.328 [2024-10-15 13:52:59.895331] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:46.328 [2024-10-15 13:52:59.895344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.328 [2024-10-15 13:52:59.895352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:46.328 [2024-10-15 13:52:59.895362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.309 ms 00:18:46.328 [2024-10-15 13:52:59.895369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.328 [2024-10-15 13:52:59.919517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.328 [2024-10-15 13:52:59.919547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:46.328 [2024-10-15 13:52:59.919559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.080 ms 00:18:46.328 [2024-10-15 13:52:59.919567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.328 [2024-10-15 13:52:59.930845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.328 [2024-10-15 13:52:59.930875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:46.328 [2024-10-15 13:52:59.930889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.213 ms 00:18:46.328 [2024-10-15 13:52:59.930895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.328 [2024-10-15 13:52:59.942285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.328 [2024-10-15 13:52:59.942315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:18:46.328 [2024-10-15 13:52:59.942327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.325 ms 00:18:46.328 [2024-10-15 13:52:59.942335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.328 [2024-10-15 13:52:59.942958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.328 [2024-10-15 13:52:59.942980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:46.328 [2024-10-15 13:52:59.942991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.529 ms 00:18:46.328 [2024-10-15 13:52:59.942998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.328 [2024-10-15 
13:53:00.006100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.328 [2024-10-15 13:53:00.006164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:46.328 [2024-10-15 13:53:00.006182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 63.075 ms 00:18:46.328 [2024-10-15 13:53:00.006191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.328 [2024-10-15 13:53:00.016650] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:18:46.328 [2024-10-15 13:53:00.030723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.328 [2024-10-15 13:53:00.030774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:46.328 [2024-10-15 13:53:00.030791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.407 ms 00:18:46.328 [2024-10-15 13:53:00.030806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.328 [2024-10-15 13:53:00.030920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.328 [2024-10-15 13:53:00.030935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:46.328 [2024-10-15 13:53:00.030944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:18:46.328 [2024-10-15 13:53:00.030953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.328 [2024-10-15 13:53:00.030998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.328 [2024-10-15 13:53:00.031012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:46.328 [2024-10-15 13:53:00.031020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:18:46.328 [2024-10-15 13:53:00.031029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.328 [2024-10-15 13:53:00.031052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.328 [2024-10-15 13:53:00.031064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:46.328 [2024-10-15 13:53:00.031072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:46.328 [2024-10-15 13:53:00.031082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.328 [2024-10-15 13:53:00.031112] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:46.328 [2024-10-15 13:53:00.031125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.328 [2024-10-15 13:53:00.031132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:46.328 [2024-10-15 13:53:00.031142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:18:46.328 [2024-10-15 13:53:00.031151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.328 [2024-10-15 13:53:00.054769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.328 [2024-10-15 13:53:00.054896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:46.328 [2024-10-15 13:53:00.054917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.592 ms 00:18:46.328 [2024-10-15 13:53:00.054925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.328 [2024-10-15 13:53:00.055011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.328 [2024-10-15 13:53:00.055022] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:46.328 [2024-10-15 13:53:00.055032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:18:46.328 [2024-10-15 13:53:00.055039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.328 [2024-10-15 13:53:00.055823] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:46.328 [2024-10-15 13:53:00.058669] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 269.116 ms, result 0 00:18:46.328 [2024-10-15 13:53:00.060095] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:46.328 Some configs were skipped because the RPC state that can call them passed over. 00:18:46.328 13:53:00 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:18:46.586 [2024-10-15 13:53:00.294708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.586 [2024-10-15 13:53:00.294893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:18:46.586 [2024-10-15 13:53:00.295012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.601 ms 00:18:46.586 [2024-10-15 13:53:00.295048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.586 [2024-10-15 13:53:00.295102] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.995 ms, result 0 00:18:46.586 true 00:18:46.586 13:53:00 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:18:46.844 [2024-10-15 13:53:00.494629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.844 [2024-10-15 13:53:00.494783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:18:46.844 [2024-10-15 13:53:00.494891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.238 ms 00:18:46.844 [2024-10-15 13:53:00.494933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.844 [2024-10-15 13:53:00.495009] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.616 ms, result 0 00:18:46.844 true 00:18:46.844 13:53:00 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 74103 00:18:46.844 13:53:00 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 74103 ']' 00:18:46.844 13:53:00 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 74103 00:18:46.844 13:53:00 ftl.ftl_trim -- common/autotest_common.sh@955 -- # uname 00:18:46.844 13:53:00 ftl.ftl_trim -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:46.844 13:53:00 ftl.ftl_trim -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74103 00:18:46.844 13:53:00 ftl.ftl_trim -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:46.844 13:53:00 ftl.ftl_trim -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:46.844 killing process with pid 74103 00:18:46.844 13:53:00 ftl.ftl_trim -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74103' 00:18:46.844 13:53:00 ftl.ftl_trim -- common/autotest_common.sh@969 -- # kill 74103 00:18:46.844 13:53:00 ftl.ftl_trim -- common/autotest_common.sh@974 -- # wait 74103 00:18:47.780 [2024-10-15 13:53:01.242569] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.780 [2024-10-15 13:53:01.242760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:47.780 [2024-10-15 13:53:01.242832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:47.780 [2024-10-15 13:53:01.242859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.780 [2024-10-15 13:53:01.242915] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:18:47.780 [2024-10-15 13:53:01.245590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.780 [2024-10-15 13:53:01.245707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:47.780 [2024-10-15 13:53:01.245776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.631 ms 00:18:47.780 [2024-10-15 13:53:01.245799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.780 [2024-10-15 13:53:01.246100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.780 [2024-10-15 13:53:01.246167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:47.780 [2024-10-15 13:53:01.246255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.260 ms 00:18:47.780 [2024-10-15 13:53:01.246279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.781 [2024-10-15 13:53:01.250286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.781 [2024-10-15 13:53:01.250389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:47.781 [2024-10-15 13:53:01.250450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.971 ms 00:18:47.781 [2024-10-15 13:53:01.250472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.781 [2024-10-15 13:53:01.256466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.781 [2024-10-15 13:53:01.256570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:47.781 [2024-10-15 13:53:01.256621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.931 ms 00:18:47.781 [2024-10-15 13:53:01.256639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.781 [2024-10-15 13:53:01.264305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.781 [2024-10-15 13:53:01.264427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:47.781 [2024-10-15 13:53:01.264515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.611 ms 00:18:47.781 [2024-10-15 13:53:01.264537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.781 [2024-10-15 13:53:01.270900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.781 [2024-10-15 13:53:01.270988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:47.781 [2024-10-15 13:53:01.271048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.314 ms 00:18:47.781 [2024-10-15 13:53:01.271067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.781 [2024-10-15 13:53:01.271181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.781 [2024-10-15 13:53:01.271200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:47.781 [2024-10-15 13:53:01.271217] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:18:47.781 [2024-10-15 13:53:01.271287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.781 [2024-10-15 13:53:01.278873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.781 [2024-10-15 13:53:01.278957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:18:47.781 [2024-10-15 13:53:01.278998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.555 ms 00:18:47.781 [2024-10-15 13:53:01.279014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.781 [2024-10-15 13:53:01.286365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.781 [2024-10-15 13:53:01.286448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:18:47.781 [2024-10-15 13:53:01.286490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.313 ms 00:18:47.781 [2024-10-15 13:53:01.286506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.781 [2024-10-15 13:53:01.293234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.781 [2024-10-15 13:53:01.293328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:47.781 [2024-10-15 13:53:01.293370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.679 ms 00:18:47.781 [2024-10-15 13:53:01.293386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.781 [2024-10-15 13:53:01.300447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.781 [2024-10-15 13:53:01.300529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:47.781 [2024-10-15 13:53:01.300542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.001 ms 00:18:47.781 [2024-10-15 13:53:01.300547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.781 [2024-10-15 13:53:01.300574] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:47.781 [2024-10-15 13:53:01.300586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:47.781 [2024-10-15 13:53:01.300595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:47.781 [2024-10-15 13:53:01.300601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:47.781 [2024-10-15 13:53:01.300608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:47.781 [2024-10-15 13:53:01.300614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:47.781 [2024-10-15 13:53:01.300623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:47.781 [2024-10-15 13:53:01.300628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:47.781 [2024-10-15 13:53:01.300635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:47.781 [2024-10-15 13:53:01.300641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:47.781 [2024-10-15 13:53:01.300648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:47.781 [2024-10-15 13:53:01.300653] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:47.781 [2024-10-15 13:53:01.300660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:47.781 [2024-10-15 13:53:01.300666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:47.781 [2024-10-15 13:53:01.300672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:47.781 [2024-10-15 13:53:01.300678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:47.781 [2024-10-15 13:53:01.300685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:47.781 [2024-10-15 13:53:01.300690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:47.781 [2024-10-15 13:53:01.300699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:47.781 [2024-10-15 13:53:01.300704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:47.781 [2024-10-15 13:53:01.300711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:47.781 [2024-10-15 13:53:01.300716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:47.781 [2024-10-15 13:53:01.300724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:47.781 [2024-10-15 13:53:01.300730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:47.781 [2024-10-15 13:53:01.300737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:47.781 [2024-10-15 13:53:01.300743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:47.781 [2024-10-15 13:53:01.300749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:47.781 [2024-10-15 13:53:01.300755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:47.781 [2024-10-15 13:53:01.300763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:47.781 [2024-10-15 13:53:01.300769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:47.781 [2024-10-15 13:53:01.300776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:47.781 [2024-10-15 13:53:01.300782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:47.781 [2024-10-15 13:53:01.300789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:47.781 [2024-10-15 13:53:01.300794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:47.781 [2024-10-15 13:53:01.300801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:47.781 [2024-10-15 13:53:01.300807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:47.781 
[2024-10-15 13:53:01.300814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:47.781 [2024-10-15 13:53:01.300819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:47.781 [2024-10-15 13:53:01.300827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:47.781 [2024-10-15 13:53:01.300833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:47.781 [2024-10-15 13:53:01.300840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:47.781 [2024-10-15 13:53:01.300846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:47.781 [2024-10-15 13:53:01.300853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:47.781 [2024-10-15 13:53:01.300858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:47.781 [2024-10-15 13:53:01.300866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:47.781 [2024-10-15 13:53:01.300871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:47.781 [2024-10-15 13:53:01.300878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:47.781 [2024-10-15 13:53:01.300883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:47.781 [2024-10-15 13:53:01.300890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:47.781 [2024-10-15 13:53:01.300895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:47.781 [2024-10-15 13:53:01.300902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:47.781 [2024-10-15 13:53:01.300908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:47.781 [2024-10-15 13:53:01.300914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:47.781 [2024-10-15 13:53:01.300920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:47.781 [2024-10-15 13:53:01.300928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:47.781 [2024-10-15 13:53:01.300933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:47.781 [2024-10-15 13:53:01.300940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:47.781 [2024-10-15 13:53:01.300946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:47.781 [2024-10-15 13:53:01.300953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:47.781 [2024-10-15 13:53:01.300958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:47.781 [2024-10-15 13:53:01.300965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 
state: free 00:18:47.781 [2024-10-15 13:53:01.300972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:47.781 [2024-10-15 13:53:01.300980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:47.781 [2024-10-15 13:53:01.300985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:47.781 [2024-10-15 13:53:01.300992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:47.781 [2024-10-15 13:53:01.300998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:47.782 [2024-10-15 13:53:01.301005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:47.782 [2024-10-15 13:53:01.301011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:47.782 [2024-10-15 13:53:01.301018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:47.782 [2024-10-15 13:53:01.301024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:47.782 [2024-10-15 13:53:01.301033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:47.782 [2024-10-15 13:53:01.301038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:47.782 [2024-10-15 13:53:01.301045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:47.782 [2024-10-15 13:53:01.301050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:47.782 [2024-10-15 13:53:01.301058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:47.782 [2024-10-15 13:53:01.301063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:47.782 [2024-10-15 13:53:01.301070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:47.782 [2024-10-15 13:53:01.301075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:47.782 [2024-10-15 13:53:01.301082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:47.782 [2024-10-15 13:53:01.301088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:47.782 [2024-10-15 13:53:01.301095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:47.782 [2024-10-15 13:53:01.301100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:47.782 [2024-10-15 13:53:01.301107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:47.782 [2024-10-15 13:53:01.301113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:47.782 [2024-10-15 13:53:01.301120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:47.782 [2024-10-15 13:53:01.301125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 
0 / 261120 wr_cnt: 0 state: free 00:18:47.782 [2024-10-15 13:53:01.301133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:47.782 [2024-10-15 13:53:01.301139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:47.782 [2024-10-15 13:53:01.301146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:47.782 [2024-10-15 13:53:01.301152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:47.782 [2024-10-15 13:53:01.301158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:47.782 [2024-10-15 13:53:01.301164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:47.782 [2024-10-15 13:53:01.301170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:47.782 [2024-10-15 13:53:01.301176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:47.782 [2024-10-15 13:53:01.301183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:47.782 [2024-10-15 13:53:01.301189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:47.782 [2024-10-15 13:53:01.301196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:47.782 [2024-10-15 13:53:01.301202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:47.782 [2024-10-15 13:53:01.301210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:47.782 [2024-10-15 13:53:01.301216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:47.782 [2024-10-15 13:53:01.301236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:47.782 [2024-10-15 13:53:01.301248] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:47.782 [2024-10-15 13:53:01.301257] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 6f658c17-becd-47e5-8bb1-b864f80d9f09 00:18:47.782 [2024-10-15 13:53:01.301268] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:47.782 [2024-10-15 13:53:01.301277] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:47.782 [2024-10-15 13:53:01.301284] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:47.782 [2024-10-15 13:53:01.301292] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:47.782 [2024-10-15 13:53:01.301297] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:47.782 [2024-10-15 13:53:01.301304] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:47.782 [2024-10-15 13:53:01.301310] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:47.782 [2024-10-15 13:53:01.301316] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:47.782 [2024-10-15 13:53:01.301320] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:47.782 [2024-10-15 13:53:01.301327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:18:47.782 [2024-10-15 13:53:01.301333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:47.782 [2024-10-15 13:53:01.301341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.754 ms 00:18:47.782 [2024-10-15 13:53:01.301346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.782 [2024-10-15 13:53:01.311255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.782 [2024-10-15 13:53:01.311352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:47.782 [2024-10-15 13:53:01.311367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.892 ms 00:18:47.782 [2024-10-15 13:53:01.311373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.782 [2024-10-15 13:53:01.311667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.782 [2024-10-15 13:53:01.311675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:47.782 [2024-10-15 13:53:01.311683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.255 ms 00:18:47.782 [2024-10-15 13:53:01.311689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.782 [2024-10-15 13:53:01.346191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:47.782 [2024-10-15 13:53:01.346237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:47.782 [2024-10-15 13:53:01.346247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:47.782 [2024-10-15 13:53:01.346254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.782 [2024-10-15 13:53:01.346340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:47.782 [2024-10-15 13:53:01.346348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:47.782 [2024-10-15 13:53:01.346372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:47.782 [2024-10-15 13:53:01.346378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.782 [2024-10-15 13:53:01.346421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:47.782 [2024-10-15 13:53:01.346428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:47.782 [2024-10-15 13:53:01.346438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:47.782 [2024-10-15 13:53:01.346443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.782 [2024-10-15 13:53:01.346459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:47.782 [2024-10-15 13:53:01.346465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:47.782 [2024-10-15 13:53:01.346472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:47.782 [2024-10-15 13:53:01.346478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.782 [2024-10-15 13:53:01.407345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:47.782 [2024-10-15 13:53:01.407389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:47.782 [2024-10-15 13:53:01.407400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:47.782 [2024-10-15 13:53:01.407406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.782 [2024-10-15 
13:53:01.458027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:47.782 [2024-10-15 13:53:01.458068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:47.782 [2024-10-15 13:53:01.458079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:47.782 [2024-10-15 13:53:01.458085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.782 [2024-10-15 13:53:01.458160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:47.782 [2024-10-15 13:53:01.458170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:47.782 [2024-10-15 13:53:01.458179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:47.782 [2024-10-15 13:53:01.458185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.782 [2024-10-15 13:53:01.458208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:47.782 [2024-10-15 13:53:01.458215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:47.782 [2024-10-15 13:53:01.458238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:47.782 [2024-10-15 13:53:01.458248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.782 [2024-10-15 13:53:01.458350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:47.782 [2024-10-15 13:53:01.458363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:47.782 [2024-10-15 13:53:01.458376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:47.782 [2024-10-15 13:53:01.458386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.782 [2024-10-15 13:53:01.458428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:47.782 [2024-10-15 13:53:01.458442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:47.782 [2024-10-15 13:53:01.458450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:47.782 [2024-10-15 13:53:01.458456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.782 [2024-10-15 13:53:01.458486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:47.782 [2024-10-15 13:53:01.458492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:47.782 [2024-10-15 13:53:01.458503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:47.782 [2024-10-15 13:53:01.458509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.782 [2024-10-15 13:53:01.458542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:47.782 [2024-10-15 13:53:01.458550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:47.782 [2024-10-15 13:53:01.458557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:47.782 [2024-10-15 13:53:01.458563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.782 [2024-10-15 13:53:01.458671] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 216.086 ms, result 0 00:18:48.351 13:53:02 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 
--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:48.609 [2024-10-15 13:53:02.169208] Starting SPDK v25.01-pre git sha1 5a8c76d99 / DPDK 24.03.0 initialization... 00:18:48.609 [2024-10-15 13:53:02.169337] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74150 ] 00:18:48.609 [2024-10-15 13:53:02.319244] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:48.867 [2024-10-15 13:53:02.418750] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:49.126 [2024-10-15 13:53:02.670547] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:49.126 [2024-10-15 13:53:02.670609] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:49.126 [2024-10-15 13:53:02.829141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:49.126 [2024-10-15 13:53:02.829201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:49.126 [2024-10-15 13:53:02.829214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:49.126 [2024-10-15 13:53:02.829236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.126 [2024-10-15 13:53:02.831979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:49.126 [2024-10-15 13:53:02.832012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:49.126 [2024-10-15 13:53:02.832022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.723 ms 00:18:49.126 [2024-10-15 13:53:02.832029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.126 [2024-10-15 13:53:02.832100] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:49.126 [2024-10-15 13:53:02.832810] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:49.126 [2024-10-15 13:53:02.832828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:49.126 [2024-10-15 13:53:02.832836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:49.126 [2024-10-15 13:53:02.832845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.735 ms 00:18:49.126 [2024-10-15 13:53:02.832852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.126 [2024-10-15 13:53:02.834349] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:49.126 [2024-10-15 13:53:02.847381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:49.126 [2024-10-15 13:53:02.847519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:49.126 [2024-10-15 13:53:02.847538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.035 ms 00:18:49.126 [2024-10-15 13:53:02.847552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.126 [2024-10-15 13:53:02.847883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:49.126 [2024-10-15 13:53:02.847909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:18:49.126 [2024-10-15 13:53:02.847920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:18:49.126 [2024-10-15 
13:53:02.847928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.126 [2024-10-15 13:53:02.852795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:49.126 [2024-10-15 13:53:02.852832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:49.126 [2024-10-15 13:53:02.852842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.812 ms 00:18:49.126 [2024-10-15 13:53:02.852849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.126 [2024-10-15 13:53:02.852938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:49.126 [2024-10-15 13:53:02.852948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:49.126 [2024-10-15 13:53:02.852956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:18:49.126 [2024-10-15 13:53:02.852963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.126 [2024-10-15 13:53:02.852987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:49.126 [2024-10-15 13:53:02.852995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:49.126 [2024-10-15 13:53:02.853003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:18:49.126 [2024-10-15 13:53:02.853012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.126 [2024-10-15 13:53:02.853033] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:18:49.126 [2024-10-15 13:53:02.856321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:49.126 [2024-10-15 13:53:02.856349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:49.126 [2024-10-15 13:53:02.856358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.294 ms 00:18:49.126 [2024-10-15 13:53:02.856365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.126 [2024-10-15 13:53:02.856397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:49.126 [2024-10-15 13:53:02.856405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:49.126 [2024-10-15 13:53:02.856413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:18:49.126 [2024-10-15 13:53:02.856420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.127 [2024-10-15 13:53:02.856436] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:49.127 [2024-10-15 13:53:02.856453] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:18:49.127 [2024-10-15 13:53:02.856489] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:49.127 [2024-10-15 13:53:02.856504] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:18:49.127 [2024-10-15 13:53:02.856605] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:49.127 [2024-10-15 13:53:02.856615] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:49.127 [2024-10-15 13:53:02.856625] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 
00:18:49.127 [2024-10-15 13:53:02.856635] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:49.127 [2024-10-15 13:53:02.856643] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:49.127 [2024-10-15 13:53:02.856651] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:18:49.127 [2024-10-15 13:53:02.856660] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:49.127 [2024-10-15 13:53:02.856667] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:49.127 [2024-10-15 13:53:02.856675] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:49.127 [2024-10-15 13:53:02.856682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:49.127 [2024-10-15 13:53:02.856689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:49.127 [2024-10-15 13:53:02.856696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.248 ms 00:18:49.127 [2024-10-15 13:53:02.856703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.127 [2024-10-15 13:53:02.856789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:49.127 [2024-10-15 13:53:02.856797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:49.127 [2024-10-15 13:53:02.856804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:18:49.127 [2024-10-15 13:53:02.856813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.127 [2024-10-15 13:53:02.856911] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:49.127 [2024-10-15 13:53:02.856920] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:49.127 [2024-10-15 13:53:02.856928] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:49.127 [2024-10-15 13:53:02.856936] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:49.127 [2024-10-15 13:53:02.856943] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:49.127 [2024-10-15 13:53:02.856950] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:49.127 [2024-10-15 13:53:02.856956] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:18:49.127 [2024-10-15 13:53:02.856963] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:49.127 [2024-10-15 13:53:02.856971] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:18:49.127 [2024-10-15 13:53:02.856977] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:49.127 [2024-10-15 13:53:02.856983] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:49.127 [2024-10-15 13:53:02.856990] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:18:49.127 [2024-10-15 13:53:02.856996] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:49.127 [2024-10-15 13:53:02.857008] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:49.127 [2024-10-15 13:53:02.857015] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:18:49.127 [2024-10-15 13:53:02.857021] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:49.127 [2024-10-15 13:53:02.857028] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
nvc_md_mirror 00:18:49.127 [2024-10-15 13:53:02.857035] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:18:49.127 [2024-10-15 13:53:02.857041] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:49.127 [2024-10-15 13:53:02.857048] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:49.127 [2024-10-15 13:53:02.857055] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:18:49.127 [2024-10-15 13:53:02.857061] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:49.127 [2024-10-15 13:53:02.857068] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:49.127 [2024-10-15 13:53:02.857074] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:18:49.127 [2024-10-15 13:53:02.857080] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:49.127 [2024-10-15 13:53:02.857087] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:49.127 [2024-10-15 13:53:02.857093] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:18:49.127 [2024-10-15 13:53:02.857099] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:49.127 [2024-10-15 13:53:02.857105] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:49.127 [2024-10-15 13:53:02.857112] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:18:49.127 [2024-10-15 13:53:02.857118] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:49.127 [2024-10-15 13:53:02.857124] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:49.127 [2024-10-15 13:53:02.857130] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:18:49.127 [2024-10-15 13:53:02.857136] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:49.127 [2024-10-15 13:53:02.857143] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:49.127 [2024-10-15 13:53:02.857149] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:18:49.127 [2024-10-15 13:53:02.857155] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:49.127 [2024-10-15 13:53:02.857162] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:49.127 [2024-10-15 13:53:02.857168] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:18:49.127 [2024-10-15 13:53:02.857175] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:49.127 [2024-10-15 13:53:02.857181] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:49.127 [2024-10-15 13:53:02.857188] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:18:49.127 [2024-10-15 13:53:02.857194] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:49.127 [2024-10-15 13:53:02.857200] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:49.127 [2024-10-15 13:53:02.857208] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:49.127 [2024-10-15 13:53:02.857215] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:49.127 [2024-10-15 13:53:02.857237] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:49.127 [2024-10-15 13:53:02.857245] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:49.127 [2024-10-15 13:53:02.857252] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:49.127 [2024-10-15 13:53:02.857260] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:49.127 [2024-10-15 13:53:02.857267] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:49.127 [2024-10-15 13:53:02.857273] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:49.127 [2024-10-15 13:53:02.857280] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:49.127 [2024-10-15 13:53:02.857288] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:49.127 [2024-10-15 13:53:02.857299] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:49.127 [2024-10-15 13:53:02.857307] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:18:49.127 [2024-10-15 13:53:02.857315] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:18:49.127 [2024-10-15 13:53:02.857322] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:18:49.127 [2024-10-15 13:53:02.857342] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:18:49.127 [2024-10-15 13:53:02.857349] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:18:49.127 [2024-10-15 13:53:02.857356] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:18:49.127 [2024-10-15 13:53:02.857363] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:18:49.127 [2024-10-15 13:53:02.857369] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:18:49.127 [2024-10-15 13:53:02.857376] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:18:49.127 [2024-10-15 13:53:02.857383] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:18:49.127 [2024-10-15 13:53:02.857389] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:18:49.127 [2024-10-15 13:53:02.857396] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:18:49.127 [2024-10-15 13:53:02.857403] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:18:49.127 [2024-10-15 13:53:02.857409] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:18:49.127 [2024-10-15 13:53:02.857416] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:49.127 [2024-10-15 13:53:02.857424] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:49.127 [2024-10-15 13:53:02.857432] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:49.127 [2024-10-15 13:53:02.857439] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:49.127 [2024-10-15 13:53:02.857447] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:49.127 [2024-10-15 13:53:02.857454] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:49.128 [2024-10-15 13:53:02.857460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:49.128 [2024-10-15 13:53:02.857467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:49.128 [2024-10-15 13:53:02.857474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.617 ms 00:18:49.128 [2024-10-15 13:53:02.857484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.128 [2024-10-15 13:53:02.883112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:49.128 [2024-10-15 13:53:02.883277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:49.128 [2024-10-15 13:53:02.883293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.569 ms 00:18:49.128 [2024-10-15 13:53:02.883302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.128 [2024-10-15 13:53:02.883421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:49.128 [2024-10-15 13:53:02.883431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:49.128 [2024-10-15 13:53:02.883439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:18:49.128 [2024-10-15 13:53:02.883449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.386 [2024-10-15 13:53:02.927232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:49.386 [2024-10-15 13:53:02.927395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:49.386 [2024-10-15 13:53:02.927414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.762 ms 00:18:49.386 [2024-10-15 13:53:02.927423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.386 [2024-10-15 13:53:02.927529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:49.386 [2024-10-15 13:53:02.927540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:49.386 [2024-10-15 13:53:02.927549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:49.386 [2024-10-15 13:53:02.927556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.386 [2024-10-15 13:53:02.927878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:49.386 [2024-10-15 13:53:02.927893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:49.386 [2024-10-15 13:53:02.927901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.302 ms 00:18:49.386 [2024-10-15 13:53:02.927908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.386 [2024-10-15 13:53:02.928045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:18:49.386 [2024-10-15 13:53:02.928056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:49.386 [2024-10-15 13:53:02.928064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.113 ms 00:18:49.386 [2024-10-15 13:53:02.928071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.386 [2024-10-15 13:53:02.941444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:49.386 [2024-10-15 13:53:02.941475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:49.386 [2024-10-15 13:53:02.941484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.353 ms 00:18:49.386 [2024-10-15 13:53:02.941492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.386 [2024-10-15 13:53:02.954387] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:18:49.386 [2024-10-15 13:53:02.954503] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:49.386 [2024-10-15 13:53:02.954517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:49.386 [2024-10-15 13:53:02.954525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:49.386 [2024-10-15 13:53:02.954534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.928 ms 00:18:49.386 [2024-10-15 13:53:02.954541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.386 [2024-10-15 13:53:02.978583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:49.386 [2024-10-15 13:53:02.978624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:49.386 [2024-10-15 13:53:02.978634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.976 ms 00:18:49.386 [2024-10-15 13:53:02.978641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.386 [2024-10-15 13:53:02.990572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:49.386 [2024-10-15 13:53:02.990610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:49.386 [2024-10-15 13:53:02.990621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.862 ms 00:18:49.386 [2024-10-15 13:53:02.990628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.386 [2024-10-15 13:53:03.002324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:49.386 [2024-10-15 13:53:03.002356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:18:49.386 [2024-10-15 13:53:03.002366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.616 ms 00:18:49.386 [2024-10-15 13:53:03.002373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.386 [2024-10-15 13:53:03.002991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:49.386 [2024-10-15 13:53:03.003011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:49.386 [2024-10-15 13:53:03.003020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.526 ms 00:18:49.386 [2024-10-15 13:53:03.003027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.386 [2024-10-15 13:53:03.058041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:49.386 [2024-10-15 
13:53:03.058271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:49.386 [2024-10-15 13:53:03.058291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 54.991 ms 00:18:49.386 [2024-10-15 13:53:03.058299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.386 [2024-10-15 13:53:03.068674] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:18:49.386 [2024-10-15 13:53:03.082762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:49.387 [2024-10-15 13:53:03.082797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:49.387 [2024-10-15 13:53:03.082810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.275 ms 00:18:49.387 [2024-10-15 13:53:03.082818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.387 [2024-10-15 13:53:03.082899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:49.387 [2024-10-15 13:53:03.082911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:49.387 [2024-10-15 13:53:03.082920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:18:49.387 [2024-10-15 13:53:03.082928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.387 [2024-10-15 13:53:03.082976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:49.387 [2024-10-15 13:53:03.082984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:49.387 [2024-10-15 13:53:03.082992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:18:49.387 [2024-10-15 13:53:03.082999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.387 [2024-10-15 13:53:03.083021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:49.387 [2024-10-15 13:53:03.083031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:49.387 [2024-10-15 13:53:03.083041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:49.387 [2024-10-15 13:53:03.083048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.387 [2024-10-15 13:53:03.083079] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:49.387 [2024-10-15 13:53:03.083089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:49.387 [2024-10-15 13:53:03.083096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:49.387 [2024-10-15 13:53:03.083103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:18:49.387 [2024-10-15 13:53:03.083110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.387 [2024-10-15 13:53:03.106655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:49.387 [2024-10-15 13:53:03.106692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:49.387 [2024-10-15 13:53:03.106703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.526 ms 00:18:49.387 [2024-10-15 13:53:03.106711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.387 [2024-10-15 13:53:03.106799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:49.387 [2024-10-15 13:53:03.106809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:49.387 [2024-10-15 
13:53:03.106818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:18:49.387 [2024-10-15 13:53:03.106825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.387 [2024-10-15 13:53:03.108113] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:49.387 [2024-10-15 13:53:03.111025] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 278.669 ms, result 0 00:18:49.387 [2024-10-15 13:53:03.111966] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:49.387 [2024-10-15 13:53:03.125018] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:50.816  [2024-10-15T13:53:05.537Z] Copying: 21/256 [MB] (21 MBps) [2024-10-15T13:53:06.471Z] Copying: 51/256 [MB] (29 MBps) [2024-10-15T13:53:07.404Z] Copying: 71/256 [MB] (19 MBps) [2024-10-15T13:53:08.338Z] Copying: 87/256 [MB] (16 MBps) [2024-10-15T13:53:09.274Z] Copying: 107/256 [MB] (20 MBps) [2024-10-15T13:53:10.240Z] Copying: 127/256 [MB] (20 MBps) [2024-10-15T13:53:11.612Z] Copying: 149/256 [MB] (21 MBps) [2024-10-15T13:53:12.546Z] Copying: 164/256 [MB] (14 MBps) [2024-10-15T13:53:13.479Z] Copying: 176/256 [MB] (12 MBps) [2024-10-15T13:53:14.413Z] Copying: 191/256 [MB] (14 MBps) [2024-10-15T13:53:15.344Z] Copying: 204/256 [MB] (12 MBps) [2024-10-15T13:53:16.277Z] Copying: 215/256 [MB] (11 MBps) [2024-10-15T13:53:17.238Z] Copying: 227/256 [MB] (11 MBps) [2024-10-15T13:53:18.172Z] Copying: 242/256 [MB] (15 MBps) [2024-10-15T13:53:18.172Z] Copying: 256/256 [MB] (average 17 MBps)[2024-10-15 13:53:18.164536] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:04.643 [2024-10-15 13:53:18.176342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.643 [2024-10-15 13:53:18.176377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:04.643 [2024-10-15 13:53:18.176390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:04.643 [2024-10-15 13:53:18.176398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.643 [2024-10-15 13:53:18.176420] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:04.643 [2024-10-15 13:53:18.178955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.643 [2024-10-15 13:53:18.178985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:04.643 [2024-10-15 13:53:18.178996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.522 ms 00:19:04.643 [2024-10-15 13:53:18.179004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.643 [2024-10-15 13:53:18.179281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.643 [2024-10-15 13:53:18.179292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:04.643 [2024-10-15 13:53:18.179301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.256 ms 00:19:04.643 [2024-10-15 13:53:18.179308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.643 [2024-10-15 13:53:18.182996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.643 [2024-10-15 13:53:18.183019] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:04.643 [2024-10-15 13:53:18.183029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.674 ms 00:19:04.643 [2024-10-15 13:53:18.183041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.643 [2024-10-15 13:53:18.189969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.643 [2024-10-15 13:53:18.190097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:04.643 [2024-10-15 13:53:18.190113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.910 ms 00:19:04.643 [2024-10-15 13:53:18.190121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.643 [2024-10-15 13:53:18.212841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.643 [2024-10-15 13:53:18.212963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:04.643 [2024-10-15 13:53:18.212980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.656 ms 00:19:04.643 [2024-10-15 13:53:18.212987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.643 [2024-10-15 13:53:18.226531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.643 [2024-10-15 13:53:18.226561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:04.643 [2024-10-15 13:53:18.226572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.512 ms 00:19:04.643 [2024-10-15 13:53:18.226584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.643 [2024-10-15 13:53:18.226721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.643 [2024-10-15 13:53:18.226731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:04.643 [2024-10-15 13:53:18.226740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:19:04.643 [2024-10-15 13:53:18.226747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.643 [2024-10-15 13:53:18.249402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.643 [2024-10-15 13:53:18.249431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:04.643 [2024-10-15 13:53:18.249441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.632 ms 00:19:04.643 [2024-10-15 13:53:18.249448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.643 [2024-10-15 13:53:18.272028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.643 [2024-10-15 13:53:18.272144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:04.643 [2024-10-15 13:53:18.272159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.547 ms 00:19:04.643 [2024-10-15 13:53:18.272166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.643 [2024-10-15 13:53:18.294747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.643 [2024-10-15 13:53:18.294857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:04.643 [2024-10-15 13:53:18.294871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.536 ms 00:19:04.643 [2024-10-15 13:53:18.294879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.643 [2024-10-15 13:53:18.317129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.643 
[2024-10-15 13:53:18.317158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:04.643 [2024-10-15 13:53:18.317168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.196 ms 00:19:04.643 [2024-10-15 13:53:18.317175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.643 [2024-10-15 13:53:18.317207] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:04.643 [2024-10-15 13:53:18.317231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:04.643 [2024-10-15 13:53:18.317245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:04.643 [2024-10-15 13:53:18.317253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:04.643 [2024-10-15 13:53:18.317261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:04.643 [2024-10-15 13:53:18.317284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:04.643 [2024-10-15 13:53:18.317292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:04.643 [2024-10-15 13:53:18.317300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:04.643 [2024-10-15 13:53:18.317307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:04.643 [2024-10-15 13:53:18.317315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:04.643 [2024-10-15 13:53:18.317322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:04.643 [2024-10-15 13:53:18.317330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:04.643 [2024-10-15 13:53:18.317338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:04.643 [2024-10-15 13:53:18.317345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:04.643 [2024-10-15 13:53:18.317353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:04.643 [2024-10-15 13:53:18.317360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:04.643 [2024-10-15 13:53:18.317367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:04.643 [2024-10-15 13:53:18.317375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:04.643 [2024-10-15 13:53:18.317382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:04.643 [2024-10-15 13:53:18.317389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:04.643 [2024-10-15 13:53:18.317396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:04.643 [2024-10-15 13:53:18.317404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:04.643 [2024-10-15 13:53:18.317411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 
0 / 261120 wr_cnt: 0 state: free 00:19:04.643 [2024-10-15 13:53:18.317418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:04.643 [2024-10-15 13:53:18.317426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:04.643 [2024-10-15 13:53:18.317434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:04.643 [2024-10-15 13:53:18.317444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:04.643 [2024-10-15 13:53:18.317452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:04.644 [2024-10-15 13:53:18.317460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:04.644 [2024-10-15 13:53:18.317469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:04.644 [2024-10-15 13:53:18.317476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:04.644 [2024-10-15 13:53:18.317485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:04.644 [2024-10-15 13:53:18.317492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:04.644 [2024-10-15 13:53:18.317500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:04.644 [2024-10-15 13:53:18.317507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:04.644 [2024-10-15 13:53:18.317514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:04.644 [2024-10-15 13:53:18.317522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:04.644 [2024-10-15 13:53:18.317529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:04.644 [2024-10-15 13:53:18.317537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:04.644 [2024-10-15 13:53:18.317544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:04.644 [2024-10-15 13:53:18.317552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:04.644 [2024-10-15 13:53:18.317559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:04.644 [2024-10-15 13:53:18.317566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:04.644 [2024-10-15 13:53:18.317574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:04.644 [2024-10-15 13:53:18.317581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:04.644 [2024-10-15 13:53:18.317588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:04.644 [2024-10-15 13:53:18.317596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:04.644 [2024-10-15 13:53:18.317604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:04.644 [2024-10-15 13:53:18.317611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:04.644 [2024-10-15 13:53:18.317618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:04.644 [2024-10-15 13:53:18.317625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:04.644 [2024-10-15 13:53:18.317632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:04.644 [2024-10-15 13:53:18.317639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:04.644 [2024-10-15 13:53:18.317646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:04.644 [2024-10-15 13:53:18.317653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:04.644 [2024-10-15 13:53:18.317660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:04.644 [2024-10-15 13:53:18.317668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:04.644 [2024-10-15 13:53:18.317675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:04.644 [2024-10-15 13:53:18.317682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:04.644 [2024-10-15 13:53:18.317689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:04.644 [2024-10-15 13:53:18.317696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:04.644 [2024-10-15 13:53:18.317703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:04.644 [2024-10-15 13:53:18.317710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:04.644 [2024-10-15 13:53:18.317718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:04.644 [2024-10-15 13:53:18.317725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:04.644 [2024-10-15 13:53:18.317733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:04.644 [2024-10-15 13:53:18.317741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:04.644 [2024-10-15 13:53:18.317753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:04.644 [2024-10-15 13:53:18.317760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:04.644 [2024-10-15 13:53:18.317767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:04.644 [2024-10-15 13:53:18.317774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:04.644 [2024-10-15 13:53:18.317782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:04.644 [2024-10-15 13:53:18.317789] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:04.644 [2024-10-15 13:53:18.317797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:04.644 [2024-10-15 13:53:18.317804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:04.644 [2024-10-15 13:53:18.317811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:04.644 [2024-10-15 13:53:18.317818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:04.644 [2024-10-15 13:53:18.317826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:04.644 [2024-10-15 13:53:18.317833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:04.644 [2024-10-15 13:53:18.317840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:04.644 [2024-10-15 13:53:18.317847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:04.644 [2024-10-15 13:53:18.317855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:04.644 [2024-10-15 13:53:18.317862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:04.644 [2024-10-15 13:53:18.317869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:04.644 [2024-10-15 13:53:18.317876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:04.644 [2024-10-15 13:53:18.317883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:04.644 [2024-10-15 13:53:18.317890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:04.644 [2024-10-15 13:53:18.317897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:04.644 [2024-10-15 13:53:18.317904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:04.644 [2024-10-15 13:53:18.317911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:04.644 [2024-10-15 13:53:18.317918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:04.644 [2024-10-15 13:53:18.317925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:04.644 [2024-10-15 13:53:18.317932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:04.644 [2024-10-15 13:53:18.317939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:04.644 [2024-10-15 13:53:18.317946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:04.644 [2024-10-15 13:53:18.317954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:04.644 [2024-10-15 13:53:18.317967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:04.644 [2024-10-15 
13:53:18.317974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:04.644 [2024-10-15 13:53:18.317982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:04.644 [2024-10-15 13:53:18.317989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:04.644 [2024-10-15 13:53:18.317997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:04.644 [2024-10-15 13:53:18.318011] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:04.644 [2024-10-15 13:53:18.318019] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 6f658c17-becd-47e5-8bb1-b864f80d9f09 00:19:04.644 [2024-10-15 13:53:18.318027] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:04.644 [2024-10-15 13:53:18.318034] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:04.644 [2024-10-15 13:53:18.318041] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:04.644 [2024-10-15 13:53:18.318049] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:04.644 [2024-10-15 13:53:18.318056] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:04.644 [2024-10-15 13:53:18.318063] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:04.645 [2024-10-15 13:53:18.318072] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:04.645 [2024-10-15 13:53:18.318079] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:04.645 [2024-10-15 13:53:18.318086] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:04.645 [2024-10-15 13:53:18.318093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.645 [2024-10-15 13:53:18.318100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:04.645 [2024-10-15 13:53:18.318109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.887 ms 00:19:04.645 [2024-10-15 13:53:18.318118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.645 [2024-10-15 13:53:18.330194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.645 [2024-10-15 13:53:18.330239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:04.645 [2024-10-15 13:53:18.330249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.059 ms 00:19:04.645 [2024-10-15 13:53:18.330257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.645 [2024-10-15 13:53:18.330603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.645 [2024-10-15 13:53:18.330618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:04.645 [2024-10-15 13:53:18.330630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.316 ms 00:19:04.645 [2024-10-15 13:53:18.330637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.645 [2024-10-15 13:53:18.365458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:04.645 [2024-10-15 13:53:18.365494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:04.645 [2024-10-15 13:53:18.365505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:04.645 [2024-10-15 13:53:18.365512] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.645 [2024-10-15 13:53:18.365586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:04.645 [2024-10-15 13:53:18.365595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:04.645 [2024-10-15 13:53:18.365605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:04.645 [2024-10-15 13:53:18.365612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.645 [2024-10-15 13:53:18.365652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:04.645 [2024-10-15 13:53:18.365660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:04.645 [2024-10-15 13:53:18.365668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:04.645 [2024-10-15 13:53:18.365675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.645 [2024-10-15 13:53:18.365692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:04.645 [2024-10-15 13:53:18.365699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:04.645 [2024-10-15 13:53:18.365706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:04.645 [2024-10-15 13:53:18.365716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.903 [2024-10-15 13:53:18.442649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:04.903 [2024-10-15 13:53:18.442688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:04.903 [2024-10-15 13:53:18.442699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:04.903 [2024-10-15 13:53:18.442706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.903 [2024-10-15 13:53:18.504461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:04.903 [2024-10-15 13:53:18.504501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:04.903 [2024-10-15 13:53:18.504515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:04.903 [2024-10-15 13:53:18.504523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.903 [2024-10-15 13:53:18.504571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:04.903 [2024-10-15 13:53:18.504579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:04.903 [2024-10-15 13:53:18.504587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:04.903 [2024-10-15 13:53:18.504595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.903 [2024-10-15 13:53:18.504622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:04.903 [2024-10-15 13:53:18.504629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:04.903 [2024-10-15 13:53:18.504637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:04.903 [2024-10-15 13:53:18.504644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.903 [2024-10-15 13:53:18.504730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:04.903 [2024-10-15 13:53:18.504740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:04.903 [2024-10-15 13:53:18.504748] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:04.903 [2024-10-15 13:53:18.504755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.903 [2024-10-15 13:53:18.504782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:04.903 [2024-10-15 13:53:18.504791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:04.903 [2024-10-15 13:53:18.504798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:04.904 [2024-10-15 13:53:18.504806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.904 [2024-10-15 13:53:18.504844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:04.904 [2024-10-15 13:53:18.504852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:04.904 [2024-10-15 13:53:18.504860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:04.904 [2024-10-15 13:53:18.504866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.904 [2024-10-15 13:53:18.504907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:04.904 [2024-10-15 13:53:18.504915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:04.904 [2024-10-15 13:53:18.504923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:04.904 [2024-10-15 13:53:18.504930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.904 [2024-10-15 13:53:18.505059] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 328.713 ms, result 0 00:19:05.470 00:19:05.470 00:19:05.470 13:53:19 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:19:06.035 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:19:06.035 13:53:19 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:19:06.035 13:53:19 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:19:06.035 13:53:19 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:19:06.035 13:53:19 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:06.035 13:53:19 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:19:06.035 13:53:19 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:19:06.035 Process with pid 74103 is not found 00:19:06.035 13:53:19 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 74103 00:19:06.035 13:53:19 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 74103 ']' 00:19:06.035 13:53:19 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 74103 00:19:06.035 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (74103) - No such process 00:19:06.035 13:53:19 ftl.ftl_trim -- common/autotest_common.sh@977 -- # echo 'Process with pid 74103 is not found' 00:19:06.035 ************************************ 00:19:06.035 END TEST ftl_trim 00:19:06.035 ************************************ 00:19:06.035 00:19:06.035 real 1m5.028s 00:19:06.035 user 1m32.860s 00:19:06.035 sys 0m5.308s 00:19:06.035 13:53:19 ftl.ftl_trim -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:06.035 13:53:19 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:19:06.293 13:53:19 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 
0000:00:11.0 00:19:06.293 13:53:19 ftl -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:19:06.293 13:53:19 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:06.293 13:53:19 ftl -- common/autotest_common.sh@10 -- # set +x 00:19:06.293 ************************************ 00:19:06.293 START TEST ftl_restore 00:19:06.293 ************************************ 00:19:06.293 13:53:19 ftl.ftl_restore -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:19:06.293 * Looking for test storage... 00:19:06.293 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:19:06.293 13:53:19 ftl.ftl_restore -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:19:06.293 13:53:19 ftl.ftl_restore -- common/autotest_common.sh@1691 -- # lcov --version 00:19:06.293 13:53:19 ftl.ftl_restore -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:19:06.293 13:53:19 ftl.ftl_restore -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:19:06.293 13:53:19 ftl.ftl_restore -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:06.293 13:53:19 ftl.ftl_restore -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:06.293 13:53:19 ftl.ftl_restore -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:06.293 13:53:19 ftl.ftl_restore -- scripts/common.sh@336 -- # IFS=.-: 00:19:06.293 13:53:19 ftl.ftl_restore -- scripts/common.sh@336 -- # read -ra ver1 00:19:06.294 13:53:19 ftl.ftl_restore -- scripts/common.sh@337 -- # IFS=.-: 00:19:06.294 13:53:19 ftl.ftl_restore -- scripts/common.sh@337 -- # read -ra ver2 00:19:06.294 13:53:19 ftl.ftl_restore -- scripts/common.sh@338 -- # local 'op=<' 00:19:06.294 13:53:19 ftl.ftl_restore -- scripts/common.sh@340 -- # ver1_l=2 00:19:06.294 13:53:19 ftl.ftl_restore -- scripts/common.sh@341 -- # ver2_l=1 00:19:06.294 13:53:19 ftl.ftl_restore -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:06.294 13:53:19 ftl.ftl_restore -- scripts/common.sh@344 -- # case "$op" in 00:19:06.294 13:53:19 ftl.ftl_restore -- scripts/common.sh@345 -- # : 1 00:19:06.294 13:53:19 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:06.294 13:53:19 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:06.294 13:53:19 ftl.ftl_restore -- scripts/common.sh@365 -- # decimal 1 00:19:06.294 13:53:19 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=1 00:19:06.294 13:53:19 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:06.294 13:53:19 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 1 00:19:06.294 13:53:19 ftl.ftl_restore -- scripts/common.sh@365 -- # ver1[v]=1 00:19:06.294 13:53:19 ftl.ftl_restore -- scripts/common.sh@366 -- # decimal 2 00:19:06.294 13:53:19 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=2 00:19:06.294 13:53:19 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:06.294 13:53:19 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 2 00:19:06.294 13:53:19 ftl.ftl_restore -- scripts/common.sh@366 -- # ver2[v]=2 00:19:06.294 13:53:20 ftl.ftl_restore -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:06.294 13:53:20 ftl.ftl_restore -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:06.294 13:53:20 ftl.ftl_restore -- scripts/common.sh@368 -- # return 0 00:19:06.294 13:53:20 ftl.ftl_restore -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:06.294 13:53:20 ftl.ftl_restore -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:19:06.294 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:06.294 --rc genhtml_branch_coverage=1 00:19:06.294 --rc genhtml_function_coverage=1 00:19:06.294 --rc genhtml_legend=1 00:19:06.294 --rc geninfo_all_blocks=1 00:19:06.294 --rc geninfo_unexecuted_blocks=1 00:19:06.294 00:19:06.294 ' 00:19:06.294 13:53:20 ftl.ftl_restore -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:19:06.294 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:06.294 --rc genhtml_branch_coverage=1 00:19:06.294 --rc genhtml_function_coverage=1 00:19:06.294 --rc genhtml_legend=1 00:19:06.294 --rc geninfo_all_blocks=1 00:19:06.294 --rc geninfo_unexecuted_blocks=1 00:19:06.294 00:19:06.294 ' 00:19:06.294 13:53:20 ftl.ftl_restore -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:19:06.294 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:06.294 --rc genhtml_branch_coverage=1 00:19:06.294 --rc genhtml_function_coverage=1 00:19:06.294 --rc genhtml_legend=1 00:19:06.294 --rc geninfo_all_blocks=1 00:19:06.294 --rc geninfo_unexecuted_blocks=1 00:19:06.294 00:19:06.294 ' 00:19:06.294 13:53:20 ftl.ftl_restore -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:19:06.294 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:06.294 --rc genhtml_branch_coverage=1 00:19:06.294 --rc genhtml_function_coverage=1 00:19:06.294 --rc genhtml_legend=1 00:19:06.294 --rc geninfo_all_blocks=1 00:19:06.294 --rc geninfo_unexecuted_blocks=1 00:19:06.294 00:19:06.294 ' 00:19:06.294 13:53:20 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:19:06.294 13:53:20 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:19:06.294 13:53:20 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:19:06.294 13:53:20 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:19:06.294 13:53:20 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:19:06.294 13:53:20 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:19:06.294 13:53:20 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:06.294 13:53:20 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:19:06.294 13:53:20 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:19:06.294 13:53:20 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:06.294 13:53:20 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:06.294 13:53:20 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:19:06.294 13:53:20 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:19:06.294 13:53:20 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:06.294 13:53:20 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:06.294 13:53:20 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:19:06.294 13:53:20 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:19:06.294 13:53:20 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:06.294 13:53:20 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:06.294 13:53:20 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:19:06.294 13:53:20 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:19:06.294 13:53:20 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:06.294 13:53:20 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:06.294 13:53:20 ftl.ftl_restore -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:06.294 13:53:20 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:06.294 13:53:20 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:19:06.294 13:53:20 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:19:06.294 13:53:20 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:06.294 13:53:20 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:06.294 13:53:20 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:06.294 13:53:20 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:19:06.294 13:53:20 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.jqmKxxmxK6 00:19:06.294 13:53:20 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:19:06.294 13:53:20 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:19:06.294 13:53:20 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:19:06.294 13:53:20 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:19:06.294 13:53:20 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:19:06.294 13:53:20 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:19:06.294 13:53:20 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:19:06.294 13:53:20 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:19:06.294 
13:53:20 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=74408 00:19:06.294 13:53:20 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 74408 00:19:06.294 13:53:20 ftl.ftl_restore -- common/autotest_common.sh@831 -- # '[' -z 74408 ']' 00:19:06.294 13:53:20 ftl.ftl_restore -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:06.294 13:53:20 ftl.ftl_restore -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:06.294 13:53:20 ftl.ftl_restore -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:06.294 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:06.294 13:53:20 ftl.ftl_restore -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:06.294 13:53:20 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:19:06.294 13:53:20 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:06.552 [2024-10-15 13:53:20.100779] Starting SPDK v25.01-pre git sha1 5a8c76d99 / DPDK 24.03.0 initialization... 00:19:06.552 [2024-10-15 13:53:20.101733] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74408 ] 00:19:06.552 [2024-10-15 13:53:20.251734] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:06.810 [2024-10-15 13:53:20.348239] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:07.376 13:53:20 ftl.ftl_restore -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:07.376 13:53:20 ftl.ftl_restore -- common/autotest_common.sh@864 -- # return 0 00:19:07.376 13:53:20 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:19:07.376 13:53:20 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:19:07.376 13:53:20 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:19:07.376 13:53:20 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:19:07.376 13:53:20 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:19:07.376 13:53:20 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:19:07.635 13:53:21 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:19:07.635 13:53:21 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:19:07.635 13:53:21 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:19:07.635 13:53:21 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:19:07.635 13:53:21 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:19:07.635 13:53:21 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:19:07.635 13:53:21 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:19:07.635 13:53:21 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:19:07.893 13:53:21 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:19:07.893 { 00:19:07.893 "name": "nvme0n1", 00:19:07.893 "aliases": [ 00:19:07.893 "ee050e1c-54ce-47a0-a429-85fa292ef33f" 00:19:07.893 ], 00:19:07.893 "product_name": "NVMe disk", 00:19:07.893 "block_size": 4096, 00:19:07.893 "num_blocks": 1310720, 00:19:07.893 "uuid": 
"ee050e1c-54ce-47a0-a429-85fa292ef33f", 00:19:07.893 "numa_id": -1, 00:19:07.893 "assigned_rate_limits": { 00:19:07.893 "rw_ios_per_sec": 0, 00:19:07.893 "rw_mbytes_per_sec": 0, 00:19:07.893 "r_mbytes_per_sec": 0, 00:19:07.893 "w_mbytes_per_sec": 0 00:19:07.893 }, 00:19:07.893 "claimed": true, 00:19:07.893 "claim_type": "read_many_write_one", 00:19:07.893 "zoned": false, 00:19:07.893 "supported_io_types": { 00:19:07.893 "read": true, 00:19:07.893 "write": true, 00:19:07.893 "unmap": true, 00:19:07.893 "flush": true, 00:19:07.893 "reset": true, 00:19:07.893 "nvme_admin": true, 00:19:07.893 "nvme_io": true, 00:19:07.893 "nvme_io_md": false, 00:19:07.893 "write_zeroes": true, 00:19:07.893 "zcopy": false, 00:19:07.893 "get_zone_info": false, 00:19:07.893 "zone_management": false, 00:19:07.893 "zone_append": false, 00:19:07.893 "compare": true, 00:19:07.893 "compare_and_write": false, 00:19:07.893 "abort": true, 00:19:07.893 "seek_hole": false, 00:19:07.893 "seek_data": false, 00:19:07.893 "copy": true, 00:19:07.893 "nvme_iov_md": false 00:19:07.893 }, 00:19:07.893 "driver_specific": { 00:19:07.893 "nvme": [ 00:19:07.893 { 00:19:07.893 "pci_address": "0000:00:11.0", 00:19:07.894 "trid": { 00:19:07.894 "trtype": "PCIe", 00:19:07.894 "traddr": "0000:00:11.0" 00:19:07.894 }, 00:19:07.894 "ctrlr_data": { 00:19:07.894 "cntlid": 0, 00:19:07.894 "vendor_id": "0x1b36", 00:19:07.894 "model_number": "QEMU NVMe Ctrl", 00:19:07.894 "serial_number": "12341", 00:19:07.894 "firmware_revision": "8.0.0", 00:19:07.894 "subnqn": "nqn.2019-08.org.qemu:12341", 00:19:07.894 "oacs": { 00:19:07.894 "security": 0, 00:19:07.894 "format": 1, 00:19:07.894 "firmware": 0, 00:19:07.894 "ns_manage": 1 00:19:07.894 }, 00:19:07.894 "multi_ctrlr": false, 00:19:07.894 "ana_reporting": false 00:19:07.894 }, 00:19:07.894 "vs": { 00:19:07.894 "nvme_version": "1.4" 00:19:07.894 }, 00:19:07.894 "ns_data": { 00:19:07.894 "id": 1, 00:19:07.894 "can_share": false 00:19:07.894 } 00:19:07.894 } 00:19:07.894 ], 00:19:07.894 "mp_policy": "active_passive" 00:19:07.894 } 00:19:07.894 } 00:19:07.894 ]' 00:19:07.894 13:53:21 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:19:07.894 13:53:21 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:19:07.894 13:53:21 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:19:07.894 13:53:21 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=1310720 00:19:07.894 13:53:21 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:19:07.894 13:53:21 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 5120 00:19:07.894 13:53:21 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:19:07.894 13:53:21 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:19:07.894 13:53:21 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:19:07.894 13:53:21 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:19:07.894 13:53:21 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:19:08.152 13:53:21 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=7713ee78-7a7e-4418-b50d-a86f814204dd 00:19:08.152 13:53:21 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:19:08.152 13:53:21 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 7713ee78-7a7e-4418-b50d-a86f814204dd 00:19:08.152 13:53:21 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore nvme0n1 lvs 00:19:08.410 13:53:22 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=64cd6279-e6ee-48ed-a131-59553f91ad33 00:19:08.410 13:53:22 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 64cd6279-e6ee-48ed-a131-59553f91ad33 00:19:08.669 13:53:22 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=e0c4b0e1-7881-4150-b396-d7c43f27dc62 00:19:08.669 13:53:22 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:19:08.669 13:53:22 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 e0c4b0e1-7881-4150-b396-d7c43f27dc62 00:19:08.669 13:53:22 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:19:08.669 13:53:22 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:19:08.669 13:53:22 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=e0c4b0e1-7881-4150-b396-d7c43f27dc62 00:19:08.669 13:53:22 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:19:08.669 13:53:22 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size e0c4b0e1-7881-4150-b396-d7c43f27dc62 00:19:08.669 13:53:22 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=e0c4b0e1-7881-4150-b396-d7c43f27dc62 00:19:08.669 13:53:22 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:19:08.669 13:53:22 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:19:08.669 13:53:22 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:19:08.669 13:53:22 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b e0c4b0e1-7881-4150-b396-d7c43f27dc62 00:19:08.930 13:53:22 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:19:08.930 { 00:19:08.930 "name": "e0c4b0e1-7881-4150-b396-d7c43f27dc62", 00:19:08.930 "aliases": [ 00:19:08.930 "lvs/nvme0n1p0" 00:19:08.930 ], 00:19:08.930 "product_name": "Logical Volume", 00:19:08.930 "block_size": 4096, 00:19:08.930 "num_blocks": 26476544, 00:19:08.930 "uuid": "e0c4b0e1-7881-4150-b396-d7c43f27dc62", 00:19:08.930 "assigned_rate_limits": { 00:19:08.930 "rw_ios_per_sec": 0, 00:19:08.930 "rw_mbytes_per_sec": 0, 00:19:08.930 "r_mbytes_per_sec": 0, 00:19:08.930 "w_mbytes_per_sec": 0 00:19:08.930 }, 00:19:08.930 "claimed": false, 00:19:08.930 "zoned": false, 00:19:08.930 "supported_io_types": { 00:19:08.930 "read": true, 00:19:08.930 "write": true, 00:19:08.930 "unmap": true, 00:19:08.930 "flush": false, 00:19:08.930 "reset": true, 00:19:08.930 "nvme_admin": false, 00:19:08.930 "nvme_io": false, 00:19:08.930 "nvme_io_md": false, 00:19:08.930 "write_zeroes": true, 00:19:08.930 "zcopy": false, 00:19:08.930 "get_zone_info": false, 00:19:08.930 "zone_management": false, 00:19:08.930 "zone_append": false, 00:19:08.930 "compare": false, 00:19:08.930 "compare_and_write": false, 00:19:08.930 "abort": false, 00:19:08.930 "seek_hole": true, 00:19:08.930 "seek_data": true, 00:19:08.930 "copy": false, 00:19:08.930 "nvme_iov_md": false 00:19:08.930 }, 00:19:08.930 "driver_specific": { 00:19:08.930 "lvol": { 00:19:08.930 "lvol_store_uuid": "64cd6279-e6ee-48ed-a131-59553f91ad33", 00:19:08.930 "base_bdev": "nvme0n1", 00:19:08.930 "thin_provision": true, 00:19:08.930 "num_allocated_clusters": 0, 00:19:08.930 "snapshot": false, 00:19:08.930 "clone": false, 00:19:08.930 "esnap_clone": false 00:19:08.930 } 00:19:08.930 } 00:19:08.930 } 00:19:08.930 ]' 00:19:08.930 13:53:22 ftl.ftl_restore -- 
common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:19:08.930 13:53:22 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:19:08.930 13:53:22 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:19:08.930 13:53:22 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=26476544 00:19:08.930 13:53:22 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:19:08.930 13:53:22 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 103424 00:19:08.930 13:53:22 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:19:08.930 13:53:22 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:19:08.930 13:53:22 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:19:09.192 13:53:22 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:19:09.192 13:53:22 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:19:09.192 13:53:22 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size e0c4b0e1-7881-4150-b396-d7c43f27dc62 00:19:09.192 13:53:22 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=e0c4b0e1-7881-4150-b396-d7c43f27dc62 00:19:09.192 13:53:22 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:19:09.192 13:53:22 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:19:09.192 13:53:22 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:19:09.192 13:53:22 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b e0c4b0e1-7881-4150-b396-d7c43f27dc62 00:19:09.452 13:53:23 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:19:09.452 { 00:19:09.452 "name": "e0c4b0e1-7881-4150-b396-d7c43f27dc62", 00:19:09.452 "aliases": [ 00:19:09.452 "lvs/nvme0n1p0" 00:19:09.452 ], 00:19:09.452 "product_name": "Logical Volume", 00:19:09.452 "block_size": 4096, 00:19:09.452 "num_blocks": 26476544, 00:19:09.452 "uuid": "e0c4b0e1-7881-4150-b396-d7c43f27dc62", 00:19:09.452 "assigned_rate_limits": { 00:19:09.452 "rw_ios_per_sec": 0, 00:19:09.452 "rw_mbytes_per_sec": 0, 00:19:09.452 "r_mbytes_per_sec": 0, 00:19:09.452 "w_mbytes_per_sec": 0 00:19:09.452 }, 00:19:09.452 "claimed": false, 00:19:09.452 "zoned": false, 00:19:09.452 "supported_io_types": { 00:19:09.452 "read": true, 00:19:09.452 "write": true, 00:19:09.452 "unmap": true, 00:19:09.452 "flush": false, 00:19:09.452 "reset": true, 00:19:09.452 "nvme_admin": false, 00:19:09.452 "nvme_io": false, 00:19:09.452 "nvme_io_md": false, 00:19:09.452 "write_zeroes": true, 00:19:09.452 "zcopy": false, 00:19:09.452 "get_zone_info": false, 00:19:09.452 "zone_management": false, 00:19:09.452 "zone_append": false, 00:19:09.452 "compare": false, 00:19:09.452 "compare_and_write": false, 00:19:09.452 "abort": false, 00:19:09.452 "seek_hole": true, 00:19:09.452 "seek_data": true, 00:19:09.452 "copy": false, 00:19:09.452 "nvme_iov_md": false 00:19:09.452 }, 00:19:09.452 "driver_specific": { 00:19:09.452 "lvol": { 00:19:09.452 "lvol_store_uuid": "64cd6279-e6ee-48ed-a131-59553f91ad33", 00:19:09.452 "base_bdev": "nvme0n1", 00:19:09.452 "thin_provision": true, 00:19:09.452 "num_allocated_clusters": 0, 00:19:09.452 "snapshot": false, 00:19:09.452 "clone": false, 00:19:09.452 "esnap_clone": false 00:19:09.452 } 00:19:09.452 } 00:19:09.452 } 00:19:09.452 ]' 00:19:09.452 13:53:23 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 
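The trace above computes bdev sizes from block_size and num_blocks (4096 B x 1310720 blocks = 5120 MiB for the base device), clears any stale lvol stores, and builds a fresh store with a thin-provisioned lvol on nvme0n1 before attaching the cache controller. A condensed sketch of that preparation, assuming rpc.py (scripts/rpc.py in the SPDK tree) and jq are on PATH; the bdev names and sizes are the ones from this run:

  # Size a bdev in MiB: block_size * num_blocks / 1048576 (4096 * 1310720 -> 5120 here).
  bs=$(rpc.py bdev_get_bdevs -b nvme0n1 | jq '.[] .block_size')
  nb=$(rpc.py bdev_get_bdevs -b nvme0n1 | jq '.[] .num_blocks')
  echo $(( bs * nb / 1048576 ))

  # Drop lvol stores left over from earlier runs, then create a fresh one
  # and a thin-provisioned lvol on it (-t), 103424 MiB as in this run.
  for lvs in $(rpc.py bdev_lvol_get_lvstores | jq -r '.[] | .uuid'); do
      rpc.py bdev_lvol_delete_lvstore -u "$lvs"
  done
  lvs=$(rpc.py bdev_lvol_create_lvstore nvme0n1 lvs)
  rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u "$lvs"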
00:19:09.452 13:53:23 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:19:09.452 13:53:23 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:19:09.452 13:53:23 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=26476544 00:19:09.452 13:53:23 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:19:09.452 13:53:23 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 103424 00:19:09.452 13:53:23 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:19:09.452 13:53:23 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:19:09.713 13:53:23 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:19:09.713 13:53:23 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size e0c4b0e1-7881-4150-b396-d7c43f27dc62 00:19:09.713 13:53:23 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=e0c4b0e1-7881-4150-b396-d7c43f27dc62 00:19:09.713 13:53:23 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:19:09.713 13:53:23 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:19:09.713 13:53:23 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:19:09.713 13:53:23 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b e0c4b0e1-7881-4150-b396-d7c43f27dc62 00:19:09.974 13:53:23 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:19:09.974 { 00:19:09.974 "name": "e0c4b0e1-7881-4150-b396-d7c43f27dc62", 00:19:09.974 "aliases": [ 00:19:09.974 "lvs/nvme0n1p0" 00:19:09.974 ], 00:19:09.974 "product_name": "Logical Volume", 00:19:09.974 "block_size": 4096, 00:19:09.974 "num_blocks": 26476544, 00:19:09.974 "uuid": "e0c4b0e1-7881-4150-b396-d7c43f27dc62", 00:19:09.974 "assigned_rate_limits": { 00:19:09.974 "rw_ios_per_sec": 0, 00:19:09.974 "rw_mbytes_per_sec": 0, 00:19:09.974 "r_mbytes_per_sec": 0, 00:19:09.974 "w_mbytes_per_sec": 0 00:19:09.974 }, 00:19:09.974 "claimed": false, 00:19:09.974 "zoned": false, 00:19:09.974 "supported_io_types": { 00:19:09.974 "read": true, 00:19:09.974 "write": true, 00:19:09.974 "unmap": true, 00:19:09.974 "flush": false, 00:19:09.974 "reset": true, 00:19:09.974 "nvme_admin": false, 00:19:09.974 "nvme_io": false, 00:19:09.974 "nvme_io_md": false, 00:19:09.974 "write_zeroes": true, 00:19:09.974 "zcopy": false, 00:19:09.974 "get_zone_info": false, 00:19:09.974 "zone_management": false, 00:19:09.974 "zone_append": false, 00:19:09.974 "compare": false, 00:19:09.974 "compare_and_write": false, 00:19:09.974 "abort": false, 00:19:09.974 "seek_hole": true, 00:19:09.974 "seek_data": true, 00:19:09.974 "copy": false, 00:19:09.974 "nvme_iov_md": false 00:19:09.974 }, 00:19:09.974 "driver_specific": { 00:19:09.974 "lvol": { 00:19:09.974 "lvol_store_uuid": "64cd6279-e6ee-48ed-a131-59553f91ad33", 00:19:09.974 "base_bdev": "nvme0n1", 00:19:09.974 "thin_provision": true, 00:19:09.974 "num_allocated_clusters": 0, 00:19:09.974 "snapshot": false, 00:19:09.974 "clone": false, 00:19:09.974 "esnap_clone": false 00:19:09.974 } 00:19:09.974 } 00:19:09.974 } 00:19:09.974 ]' 00:19:09.974 13:53:23 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:19:09.974 13:53:23 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:19:09.974 13:53:23 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:19:09.974 13:53:23 ftl.ftl_restore -- 
common/autotest_common.sh@1384 -- # nb=26476544 00:19:09.975 13:53:23 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:19:09.975 13:53:23 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 103424 00:19:09.975 13:53:23 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:19:09.975 13:53:23 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d e0c4b0e1-7881-4150-b396-d7c43f27dc62 --l2p_dram_limit 10' 00:19:09.975 13:53:23 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:19:09.975 13:53:23 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:19:09.975 13:53:23 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:19:09.975 13:53:23 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:19:09.975 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:19:09.975 13:53:23 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d e0c4b0e1-7881-4150-b396-d7c43f27dc62 --l2p_dram_limit 10 -c nvc0n1p0 00:19:10.236 [2024-10-15 13:53:23.873033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.236 [2024-10-15 13:53:23.873535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:10.236 [2024-10-15 13:53:23.873575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:10.236 [2024-10-15 13:53:23.873586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.236 [2024-10-15 13:53:23.873681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.236 [2024-10-15 13:53:23.873695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:10.236 [2024-10-15 13:53:23.873707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:19:10.236 [2024-10-15 13:53:23.873715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.237 [2024-10-15 13:53:23.873747] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:10.237 [2024-10-15 13:53:23.874604] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:10.237 [2024-10-15 13:53:23.874638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.237 [2024-10-15 13:53:23.874647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:10.237 [2024-10-15 13:53:23.874658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.899 ms 00:19:10.237 [2024-10-15 13:53:23.874666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.237 [2024-10-15 13:53:23.874757] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID e8526d0e-2f90-448e-896b-b14df4f43d8b 00:19:10.237 [2024-10-15 13:53:23.876620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.237 [2024-10-15 13:53:23.876677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:19:10.237 [2024-10-15 13:53:23.876689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:19:10.237 [2024-10-15 13:53:23.876705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.237 [2024-10-15 13:53:23.885564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.237 [2024-10-15 
13:53:23.885618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:10.237 [2024-10-15 13:53:23.885628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.781 ms 00:19:10.237 [2024-10-15 13:53:23.885638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.237 [2024-10-15 13:53:23.885735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.237 [2024-10-15 13:53:23.885748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:10.237 [2024-10-15 13:53:23.885757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:19:10.237 [2024-10-15 13:53:23.885769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.237 [2024-10-15 13:53:23.885837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.237 [2024-10-15 13:53:23.885847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:10.237 [2024-10-15 13:53:23.885855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:19:10.237 [2024-10-15 13:53:23.885863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.237 [2024-10-15 13:53:23.885884] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:10.237 [2024-10-15 13:53:23.889767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.237 [2024-10-15 13:53:23.889804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:10.237 [2024-10-15 13:53:23.889815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.886 ms 00:19:10.237 [2024-10-15 13:53:23.889827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.237 [2024-10-15 13:53:23.889865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.237 [2024-10-15 13:53:23.889874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:10.237 [2024-10-15 13:53:23.889884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:19:10.237 [2024-10-15 13:53:23.889891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.237 [2024-10-15 13:53:23.889931] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:19:10.237 [2024-10-15 13:53:23.890050] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:10.237 [2024-10-15 13:53:23.890066] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:10.237 [2024-10-15 13:53:23.890075] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:10.237 [2024-10-15 13:53:23.890087] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:10.237 [2024-10-15 13:53:23.890095] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:10.237 [2024-10-15 13:53:23.890105] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:19:10.237 [2024-10-15 13:53:23.890111] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:10.237 [2024-10-15 13:53:23.890120] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:10.237 [2024-10-15 13:53:23.890126] 
ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:10.237 [2024-10-15 13:53:23.890135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.237 [2024-10-15 13:53:23.890143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:10.237 [2024-10-15 13:53:23.890152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.205 ms 00:19:10.237 [2024-10-15 13:53:23.890165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.237 [2024-10-15 13:53:23.890259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.237 [2024-10-15 13:53:23.890269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:10.237 [2024-10-15 13:53:23.890278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:19:10.237 [2024-10-15 13:53:23.890285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.237 [2024-10-15 13:53:23.890364] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:10.237 [2024-10-15 13:53:23.890373] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:10.237 [2024-10-15 13:53:23.890384] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:10.237 [2024-10-15 13:53:23.890390] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:10.237 [2024-10-15 13:53:23.890398] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:10.237 [2024-10-15 13:53:23.890407] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:10.237 [2024-10-15 13:53:23.890414] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:19:10.237 [2024-10-15 13:53:23.890419] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:10.237 [2024-10-15 13:53:23.890426] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:19:10.237 [2024-10-15 13:53:23.890431] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:10.237 [2024-10-15 13:53:23.890439] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:10.237 [2024-10-15 13:53:23.890446] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:19:10.237 [2024-10-15 13:53:23.890453] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:10.237 [2024-10-15 13:53:23.890459] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:10.237 [2024-10-15 13:53:23.890467] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:19:10.237 [2024-10-15 13:53:23.890474] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:10.237 [2024-10-15 13:53:23.890483] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:10.237 [2024-10-15 13:53:23.890489] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:19:10.237 [2024-10-15 13:53:23.890496] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:10.237 [2024-10-15 13:53:23.890502] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:10.237 [2024-10-15 13:53:23.890514] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:19:10.237 [2024-10-15 13:53:23.890521] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:10.237 [2024-10-15 13:53:23.890530] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:10.237 
[2024-10-15 13:53:23.890536] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:19:10.237 [2024-10-15 13:53:23.890542] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:10.237 [2024-10-15 13:53:23.890548] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:10.237 [2024-10-15 13:53:23.890556] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:19:10.237 [2024-10-15 13:53:23.890561] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:10.237 [2024-10-15 13:53:23.890569] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:10.237 [2024-10-15 13:53:23.890574] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:19:10.237 [2024-10-15 13:53:23.890581] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:10.237 [2024-10-15 13:53:23.890586] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:10.237 [2024-10-15 13:53:23.890595] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:19:10.237 [2024-10-15 13:53:23.890601] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:10.237 [2024-10-15 13:53:23.890608] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:10.237 [2024-10-15 13:53:23.890613] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:19:10.237 [2024-10-15 13:53:23.890620] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:10.237 [2024-10-15 13:53:23.890625] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:10.237 [2024-10-15 13:53:23.890631] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:19:10.237 [2024-10-15 13:53:23.890636] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:10.237 [2024-10-15 13:53:23.890643] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:10.237 [2024-10-15 13:53:23.890649] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:19:10.237 [2024-10-15 13:53:23.890656] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:10.237 [2024-10-15 13:53:23.890661] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:10.237 [2024-10-15 13:53:23.890668] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:10.237 [2024-10-15 13:53:23.890674] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:10.237 [2024-10-15 13:53:23.890683] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:10.237 [2024-10-15 13:53:23.890689] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:10.237 [2024-10-15 13:53:23.890699] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:10.237 [2024-10-15 13:53:23.890704] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:10.237 [2024-10-15 13:53:23.890711] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:10.237 [2024-10-15 13:53:23.890717] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:10.237 [2024-10-15 13:53:23.890724] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:10.237 [2024-10-15 13:53:23.890732] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:10.237 [2024-10-15 
13:53:23.890745] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:10.237 [2024-10-15 13:53:23.890754] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:19:10.237 [2024-10-15 13:53:23.890762] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:19:10.237 [2024-10-15 13:53:23.890767] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:19:10.237 [2024-10-15 13:53:23.890775] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:19:10.238 [2024-10-15 13:53:23.890780] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:19:10.238 [2024-10-15 13:53:23.890788] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:19:10.238 [2024-10-15 13:53:23.890794] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:19:10.238 [2024-10-15 13:53:23.890802] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:19:10.238 [2024-10-15 13:53:23.890808] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:19:10.238 [2024-10-15 13:53:23.890817] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:19:10.238 [2024-10-15 13:53:23.890823] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:19:10.238 [2024-10-15 13:53:23.890831] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:19:10.238 [2024-10-15 13:53:23.890836] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:19:10.238 [2024-10-15 13:53:23.890844] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:19:10.238 [2024-10-15 13:53:23.890849] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:10.238 [2024-10-15 13:53:23.890859] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:10.238 [2024-10-15 13:53:23.890868] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:10.238 [2024-10-15 13:53:23.890875] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:10.238 [2024-10-15 13:53:23.890881] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:10.238 [2024-10-15 13:53:23.890889] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:10.238 [2024-10-15 13:53:23.890895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.238 [2024-10-15 13:53:23.890904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:10.238 [2024-10-15 13:53:23.890909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.586 ms 00:19:10.238 [2024-10-15 13:53:23.890917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.238 [2024-10-15 13:53:23.890948] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:19:10.238 [2024-10-15 13:53:23.890960] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:19:14.434 [2024-10-15 13:53:27.386788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.434 [2024-10-15 13:53:27.386851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:19:14.434 [2024-10-15 13:53:27.386867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3495.824 ms 00:19:14.434 [2024-10-15 13:53:27.386878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.434 [2024-10-15 13:53:27.412885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.434 [2024-10-15 13:53:27.412933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:14.434 [2024-10-15 13:53:27.412946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.804 ms 00:19:14.434 [2024-10-15 13:53:27.412956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.434 [2024-10-15 13:53:27.413076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.434 [2024-10-15 13:53:27.413090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:14.434 [2024-10-15 13:53:27.413098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:19:14.434 [2024-10-15 13:53:27.413110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.434 [2024-10-15 13:53:27.443693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.434 [2024-10-15 13:53:27.443731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:14.434 [2024-10-15 13:53:27.443741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.535 ms 00:19:14.434 [2024-10-15 13:53:27.443750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.434 [2024-10-15 13:53:27.443778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.434 [2024-10-15 13:53:27.443789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:14.434 [2024-10-15 13:53:27.443798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:19:14.434 [2024-10-15 13:53:27.443809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.434 [2024-10-15 13:53:27.444163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.434 [2024-10-15 13:53:27.444181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:14.434 [2024-10-15 13:53:27.444190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.311 ms 00:19:14.434 [2024-10-15 13:53:27.444200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.434 
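The startup being traced here was kicked off by the bdev_ftl_create call logged earlier; the layout dump above shows how the 103424 MiB base device and the 5171 MiB NV cache partition are carved into regions (superblock, the 80 MiB L2P, which at 4 bytes per entry gives the 20971520 entries reported, band and chunk metadata, P2L checkpoints). Condensed, the cache-side setup and the create call look like the following sketch, with names and sizes taken from this run; the 240 s RPC timeout mirrors the log, where scrubbing the NV cache alone took ~3.5 s:

  # Attach the cache controller and split off a partition matching cache_size.
  rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
  rpc.py bdev_split_create nvc0n1 -s 5171 1      # yields nvc0n1p0

  # -d: base (data) bdev, -c: NV cache bdev, --l2p_dram_limit: cap the
  # resident L2P at 10 MiB (the log later reports 9 of 10 MiB resident).
  rpc.py -t 240 bdev_ftl_create -b ftl0 \
      -d e0c4b0e1-7881-4150-b396-d7c43f27dc62 \
      --l2p_dram_limit 10 -c nvc0n1p0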
[2024-10-15 13:53:27.444320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.434 [2024-10-15 13:53:27.444332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:14.434 [2024-10-15 13:53:27.444355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:19:14.434 [2024-10-15 13:53:27.444367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.434 [2024-10-15 13:53:27.458259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.434 [2024-10-15 13:53:27.458291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:14.434 [2024-10-15 13:53:27.458300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.873 ms 00:19:14.434 [2024-10-15 13:53:27.458311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.434 [2024-10-15 13:53:27.469517] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:19:14.434 [2024-10-15 13:53:27.472165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.434 [2024-10-15 13:53:27.472356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:14.434 [2024-10-15 13:53:27.472378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.789 ms 00:19:14.434 [2024-10-15 13:53:27.472387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.434 [2024-10-15 13:53:27.554522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.434 [2024-10-15 13:53:27.554569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:19:14.434 [2024-10-15 13:53:27.554586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 82.105 ms 00:19:14.434 [2024-10-15 13:53:27.554594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.434 [2024-10-15 13:53:27.554774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.434 [2024-10-15 13:53:27.554786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:14.434 [2024-10-15 13:53:27.554799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.140 ms 00:19:14.434 [2024-10-15 13:53:27.554810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.434 [2024-10-15 13:53:27.577912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.434 [2024-10-15 13:53:27.578094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:19:14.434 [2024-10-15 13:53:27.578117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.055 ms 00:19:14.434 [2024-10-15 13:53:27.578126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.434 [2024-10-15 13:53:27.601332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.434 [2024-10-15 13:53:27.601445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:19:14.434 [2024-10-15 13:53:27.601518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.137 ms 00:19:14.434 [2024-10-15 13:53:27.601538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.434 [2024-10-15 13:53:27.602111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.434 [2024-10-15 13:53:27.602195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:14.434 
[2024-10-15 13:53:27.602573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.530 ms 00:19:14.434 [2024-10-15 13:53:27.602664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.434 [2024-10-15 13:53:27.676064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.434 [2024-10-15 13:53:27.676202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:19:14.434 [2024-10-15 13:53:27.676280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 73.273 ms 00:19:14.434 [2024-10-15 13:53:27.676305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.434 [2024-10-15 13:53:27.700821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.434 [2024-10-15 13:53:27.700936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:19:14.434 [2024-10-15 13:53:27.700994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.386 ms 00:19:14.434 [2024-10-15 13:53:27.701016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.434 [2024-10-15 13:53:27.724476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.434 [2024-10-15 13:53:27.724584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:19:14.434 [2024-10-15 13:53:27.724637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.376 ms 00:19:14.434 [2024-10-15 13:53:27.724658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.434 [2024-10-15 13:53:27.748096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.435 [2024-10-15 13:53:27.748204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:14.435 [2024-10-15 13:53:27.748277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.346 ms 00:19:14.435 [2024-10-15 13:53:27.748299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.435 [2024-10-15 13:53:27.748346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.435 [2024-10-15 13:53:27.748369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:14.435 [2024-10-15 13:53:27.748394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:14.435 [2024-10-15 13:53:27.748413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.435 [2024-10-15 13:53:27.748500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.435 [2024-10-15 13:53:27.748615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:14.435 [2024-10-15 13:53:27.748637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:19:14.435 [2024-10-15 13:53:27.748655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.435 [2024-10-15 13:53:27.749485] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3876.036 ms, result 0 00:19:14.435 { 00:19:14.435 "name": "ftl0", 00:19:14.435 "uuid": "e8526d0e-2f90-448e-896b-b14df4f43d8b" 00:19:14.435 } 00:19:14.435 13:53:27 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:19:14.435 13:53:27 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:19:14.435 13:53:27 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:19:14.435 13:53:27 ftl.ftl_restore -- 
ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:19:14.435 [2024-10-15 13:53:28.168982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.435 [2024-10-15 13:53:28.169140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:14.435 [2024-10-15 13:53:28.169201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:14.435 [2024-10-15 13:53:28.169257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.435 [2024-10-15 13:53:28.169302] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:14.435 [2024-10-15 13:53:28.171933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.435 [2024-10-15 13:53:28.172041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:14.435 [2024-10-15 13:53:28.172111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.588 ms 00:19:14.435 [2024-10-15 13:53:28.172135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.435 [2024-10-15 13:53:28.172601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.435 [2024-10-15 13:53:28.172640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:14.435 [2024-10-15 13:53:28.172707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.263 ms 00:19:14.435 [2024-10-15 13:53:28.172730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.435 [2024-10-15 13:53:28.175995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.435 [2024-10-15 13:53:28.176069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:14.435 [2024-10-15 13:53:28.176085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.229 ms 00:19:14.435 [2024-10-15 13:53:28.176095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.435 [2024-10-15 13:53:28.182467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.435 [2024-10-15 13:53:28.182552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:14.435 [2024-10-15 13:53:28.182602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.335 ms 00:19:14.435 [2024-10-15 13:53:28.182623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.435 [2024-10-15 13:53:28.206856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.435 [2024-10-15 13:53:28.206976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:14.435 [2024-10-15 13:53:28.207033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.154 ms 00:19:14.435 [2024-10-15 13:53:28.207056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.694 [2024-10-15 13:53:28.222087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.694 [2024-10-15 13:53:28.222201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:14.694 [2024-10-15 13:53:28.222283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.951 ms 00:19:14.694 [2024-10-15 13:53:28.222307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.694 [2024-10-15 13:53:28.222476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.694 [2024-10-15 13:53:28.222607] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:14.694 [2024-10-15 13:53:28.222634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.122 ms 00:19:14.694 [2024-10-15 13:53:28.222655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.694 [2024-10-15 13:53:28.245957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.694 [2024-10-15 13:53:28.246065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:14.694 [2024-10-15 13:53:28.246116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.272 ms 00:19:14.694 [2024-10-15 13:53:28.246137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.694 [2024-10-15 13:53:28.269484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.694 [2024-10-15 13:53:28.269591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:14.694 [2024-10-15 13:53:28.269642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.248 ms 00:19:14.694 [2024-10-15 13:53:28.269663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.694 [2024-10-15 13:53:28.292071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.694 [2024-10-15 13:53:28.292175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:14.694 [2024-10-15 13:53:28.292233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.328 ms 00:19:14.694 [2024-10-15 13:53:28.292256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.694 [2024-10-15 13:53:28.314934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.694 [2024-10-15 13:53:28.315040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:14.694 [2024-10-15 13:53:28.315118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.583 ms 00:19:14.694 [2024-10-15 13:53:28.315139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.694 [2024-10-15 13:53:28.315388] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:14.695 [2024-10-15 13:53:28.315420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:14.695 [2024-10-15 13:53:28.315487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:14.695 [2024-10-15 13:53:28.315519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:14.695 [2024-10-15 13:53:28.315549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:14.695 [2024-10-15 13:53:28.315631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:14.695 [2024-10-15 13:53:28.315666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:14.695 [2024-10-15 13:53:28.315695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:14.695 [2024-10-15 13:53:28.315727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:14.695 [2024-10-15 13:53:28.315788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:14.695 [2024-10-15 13:53:28.315821] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:14.695 [2024-10-15 13:53:28.315849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:14.695 [2024-10-15 13:53:28.315879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:14.695 [2024-10-15 13:53:28.315907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:14.695 [2024-10-15 13:53:28.315937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:14.695 [2024-10-15 13:53:28.315996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:14.695 [2024-10-15 13:53:28.316031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:14.695 [2024-10-15 13:53:28.316060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:14.695 [2024-10-15 13:53:28.316090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:14.695 [2024-10-15 13:53:28.316119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:14.695 [2024-10-15 13:53:28.316152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:14.695 [2024-10-15 13:53:28.316180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:14.695 [2024-10-15 13:53:28.316306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:14.695 [2024-10-15 13:53:28.316337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:14.695 [2024-10-15 13:53:28.316372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:14.695 [2024-10-15 13:53:28.316401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:14.695 [2024-10-15 13:53:28.316430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:14.695 [2024-10-15 13:53:28.316459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:14.695 [2024-10-15 13:53:28.316519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:14.695 [2024-10-15 13:53:28.316550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:14.695 [2024-10-15 13:53:28.316944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:14.695 [2024-10-15 13:53:28.316978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:14.695 [2024-10-15 13:53:28.317031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:14.695 [2024-10-15 13:53:28.317060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:14.695 [2024-10-15 13:53:28.317090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:14.695 
[2024-10-15 13:53:28.317118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:14.695 [2024-10-15 13:53:28.317148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:14.695 [2024-10-15 13:53:28.317175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:14.695 [2024-10-15 13:53:28.317204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:14.695 [2024-10-15 13:53:28.317272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:14.695 [2024-10-15 13:53:28.317306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:14.695 [2024-10-15 13:53:28.317334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:14.695 [2024-10-15 13:53:28.317363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:14.695 [2024-10-15 13:53:28.317409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:14.695 [2024-10-15 13:53:28.317441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:14.695 [2024-10-15 13:53:28.317715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:14.695 [2024-10-15 13:53:28.317750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:14.695 [2024-10-15 13:53:28.317801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:14.695 [2024-10-15 13:53:28.317831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:14.695 [2024-10-15 13:53:28.317880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:14.695 [2024-10-15 13:53:28.317912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:14.695 [2024-10-15 13:53:28.317990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:14.695 [2024-10-15 13:53:28.318023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:14.695 [2024-10-15 13:53:28.318051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:14.695 [2024-10-15 13:53:28.318151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:14.695 [2024-10-15 13:53:28.318160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:14.695 [2024-10-15 13:53:28.318172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:14.695 [2024-10-15 13:53:28.318179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:14.695 [2024-10-15 13:53:28.318194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:14.695 [2024-10-15 13:53:28.318201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 
state: free 00:19:14.695 [2024-10-15 13:53:28.318211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:14.695 [2024-10-15 13:53:28.318218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:14.695 [2024-10-15 13:53:28.318238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:14.695 [2024-10-15 13:53:28.318246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:14.695 [2024-10-15 13:53:28.318255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:14.695 [2024-10-15 13:53:28.318263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:14.695 [2024-10-15 13:53:28.318271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:14.695 [2024-10-15 13:53:28.318279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:14.695 [2024-10-15 13:53:28.318288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:14.695 [2024-10-15 13:53:28.318296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:14.695 [2024-10-15 13:53:28.318305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:14.695 [2024-10-15 13:53:28.318312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:14.695 [2024-10-15 13:53:28.318325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:14.695 [2024-10-15 13:53:28.318332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:14.695 [2024-10-15 13:53:28.318341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:14.695 [2024-10-15 13:53:28.318348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:14.695 [2024-10-15 13:53:28.318358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:14.695 [2024-10-15 13:53:28.318366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:14.695 [2024-10-15 13:53:28.318374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:14.695 [2024-10-15 13:53:28.318382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:14.695 [2024-10-15 13:53:28.318390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:14.695 [2024-10-15 13:53:28.318398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:14.695 [2024-10-15 13:53:28.318407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:14.695 [2024-10-15 13:53:28.318414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:14.695 [2024-10-15 13:53:28.318423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 
0 / 261120 wr_cnt: 0 state: free 00:19:14.695 [2024-10-15 13:53:28.318430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:14.695 [2024-10-15 13:53:28.318440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:14.695 [2024-10-15 13:53:28.318447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:14.695 [2024-10-15 13:53:28.318457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:14.695 [2024-10-15 13:53:28.318464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:14.695 [2024-10-15 13:53:28.318474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:14.695 [2024-10-15 13:53:28.318481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:14.695 [2024-10-15 13:53:28.318490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:14.695 [2024-10-15 13:53:28.318497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:14.695 [2024-10-15 13:53:28.318506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:14.695 [2024-10-15 13:53:28.318514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:14.696 [2024-10-15 13:53:28.318524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:14.696 [2024-10-15 13:53:28.318532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:14.696 [2024-10-15 13:53:28.318541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:14.696 [2024-10-15 13:53:28.318549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:14.696 [2024-10-15 13:53:28.318557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:14.696 [2024-10-15 13:53:28.318573] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:14.696 [2024-10-15 13:53:28.318583] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: e8526d0e-2f90-448e-896b-b14df4f43d8b 00:19:14.696 [2024-10-15 13:53:28.318590] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:14.696 [2024-10-15 13:53:28.318600] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:14.696 [2024-10-15 13:53:28.318609] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:14.696 [2024-10-15 13:53:28.318618] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:14.696 [2024-10-15 13:53:28.318625] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:14.696 [2024-10-15 13:53:28.318637] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:14.696 [2024-10-15 13:53:28.318644] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:14.696 [2024-10-15 13:53:28.318652] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:14.696 [2024-10-15 13:53:28.318659] ftl_debug.c: 220:ftl_dev_dump_stats: 
*NOTICE*: [FTL][ftl0] start: 0 00:19:14.696 [2024-10-15 13:53:28.318668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.696 [2024-10-15 13:53:28.318676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:14.696 [2024-10-15 13:53:28.318686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.283 ms 00:19:14.696 [2024-10-15 13:53:28.318693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.696 [2024-10-15 13:53:28.331160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.696 [2024-10-15 13:53:28.331275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:14.696 [2024-10-15 13:53:28.331328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.431 ms 00:19:14.696 [2024-10-15 13:53:28.331351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.696 [2024-10-15 13:53:28.331719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.696 [2024-10-15 13:53:28.331754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:14.696 [2024-10-15 13:53:28.331821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.320 ms 00:19:14.696 [2024-10-15 13:53:28.331843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.696 [2024-10-15 13:53:28.373486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:14.696 [2024-10-15 13:53:28.373593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:14.696 [2024-10-15 13:53:28.373645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:14.696 [2024-10-15 13:53:28.373666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.696 [2024-10-15 13:53:28.373734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:14.696 [2024-10-15 13:53:28.373755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:14.696 [2024-10-15 13:53:28.373776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:14.696 [2024-10-15 13:53:28.373794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.696 [2024-10-15 13:53:28.373878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:14.696 [2024-10-15 13:53:28.374005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:14.696 [2024-10-15 13:53:28.374026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:14.696 [2024-10-15 13:53:28.374044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.696 [2024-10-15 13:53:28.374075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:14.696 [2024-10-15 13:53:28.374096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:14.696 [2024-10-15 13:53:28.374116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:14.696 [2024-10-15 13:53:28.374176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.696 [2024-10-15 13:53:28.449507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:14.696 [2024-10-15 13:53:28.449641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:14.696 [2024-10-15 13:53:28.449693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
00:19:14.696 [2024-10-15 13:53:28.449715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.954 [2024-10-15 13:53:28.512146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:14.954 [2024-10-15 13:53:28.512287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:14.954 [2024-10-15 13:53:28.512340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:14.954 [2024-10-15 13:53:28.512362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.954 [2024-10-15 13:53:28.512456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:14.954 [2024-10-15 13:53:28.512482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:14.954 [2024-10-15 13:53:28.512504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:14.954 [2024-10-15 13:53:28.512522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.954 [2024-10-15 13:53:28.512581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:14.954 [2024-10-15 13:53:28.512604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:14.954 [2024-10-15 13:53:28.512615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:14.954 [2024-10-15 13:53:28.512623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.954 [2024-10-15 13:53:28.512717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:14.954 [2024-10-15 13:53:28.512727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:14.954 [2024-10-15 13:53:28.512740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:14.954 [2024-10-15 13:53:28.512747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.954 [2024-10-15 13:53:28.512780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:14.954 [2024-10-15 13:53:28.512789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:14.954 [2024-10-15 13:53:28.512798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:14.954 [2024-10-15 13:53:28.512804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.954 [2024-10-15 13:53:28.512840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:14.954 [2024-10-15 13:53:28.512849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:14.954 [2024-10-15 13:53:28.512858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:14.954 [2024-10-15 13:53:28.512867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.954 [2024-10-15 13:53:28.512911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:14.954 [2024-10-15 13:53:28.512921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:14.954 [2024-10-15 13:53:28.512931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:14.954 [2024-10-15 13:53:28.512938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.954 [2024-10-15 13:53:28.513062] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 344.048 ms, result 0 00:19:14.954 true 00:19:14.954 13:53:28 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 74408 
00:19:14.954 13:53:28 ftl.ftl_restore -- common/autotest_common.sh@950 -- # '[' -z 74408 ']' 00:19:14.954 13:53:28 ftl.ftl_restore -- common/autotest_common.sh@954 -- # kill -0 74408 00:19:14.954 13:53:28 ftl.ftl_restore -- common/autotest_common.sh@955 -- # uname 00:19:14.954 13:53:28 ftl.ftl_restore -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:14.954 13:53:28 ftl.ftl_restore -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74408 00:19:14.954 killing process with pid 74408 00:19:14.954 13:53:28 ftl.ftl_restore -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:14.954 13:53:28 ftl.ftl_restore -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:14.954 13:53:28 ftl.ftl_restore -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74408' 00:19:14.954 13:53:28 ftl.ftl_restore -- common/autotest_common.sh@969 -- # kill 74408 00:19:14.954 13:53:28 ftl.ftl_restore -- common/autotest_common.sh@974 -- # wait 74408 00:19:21.524 13:53:34 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:19:25.718 262144+0 records in 00:19:25.718 262144+0 records out 00:19:25.718 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.34234 s, 247 MB/s 00:19:25.718 13:53:38 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:19:27.619 13:53:41 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:27.619 [2024-10-15 13:53:41.138277] Starting SPDK v25.01-pre git sha1 5a8c76d99 / DPDK 24.03.0 initialization... 00:19:27.619 [2024-10-15 13:53:41.138417] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74645 ] 00:19:27.619 [2024-10-15 13:53:41.290316] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:27.619 [2024-10-15 13:53:41.389649] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:27.877 [2024-10-15 13:53:41.640723] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:27.877 [2024-10-15 13:53:41.640902] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:28.137 [2024-10-15 13:53:41.794975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.137 [2024-10-15 13:53:41.795136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:28.137 [2024-10-15 13:53:41.795155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:28.137 [2024-10-15 13:53:41.795169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.137 [2024-10-15 13:53:41.795217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.137 [2024-10-15 13:53:41.795245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:28.137 [2024-10-15 13:53:41.795254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:19:28.137 [2024-10-15 13:53:41.795263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.137 [2024-10-15 13:53:41.795283] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] 
Using nvc0n1p0 as write buffer cache 00:19:28.137 [2024-10-15 13:53:41.795967] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:28.137 [2024-10-15 13:53:41.795997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.137 [2024-10-15 13:53:41.796008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:28.137 [2024-10-15 13:53:41.796017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.719 ms 00:19:28.137 [2024-10-15 13:53:41.796024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.137 [2024-10-15 13:53:41.797046] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:28.137 [2024-10-15 13:53:41.809197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.137 [2024-10-15 13:53:41.809236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:28.137 [2024-10-15 13:53:41.809248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.153 ms 00:19:28.137 [2024-10-15 13:53:41.809256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.137 [2024-10-15 13:53:41.809305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.137 [2024-10-15 13:53:41.809315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:28.137 [2024-10-15 13:53:41.809325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:19:28.137 [2024-10-15 13:53:41.809332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.137 [2024-10-15 13:53:41.814250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.137 [2024-10-15 13:53:41.814277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:28.137 [2024-10-15 13:53:41.814287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.862 ms 00:19:28.137 [2024-10-15 13:53:41.814294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.137 [2024-10-15 13:53:41.814368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.137 [2024-10-15 13:53:41.814378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:28.137 [2024-10-15 13:53:41.814386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:19:28.137 [2024-10-15 13:53:41.814394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.137 [2024-10-15 13:53:41.814430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.137 [2024-10-15 13:53:41.814439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:28.137 [2024-10-15 13:53:41.814449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:28.137 [2024-10-15 13:53:41.814457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.137 [2024-10-15 13:53:41.814476] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:28.137 [2024-10-15 13:53:41.817845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.137 [2024-10-15 13:53:41.817872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:28.137 [2024-10-15 13:53:41.817882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.373 ms 00:19:28.137 [2024-10-15 13:53:41.817889] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.137 [2024-10-15 13:53:41.817918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.137 [2024-10-15 13:53:41.817928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:28.137 [2024-10-15 13:53:41.817935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:19:28.137 [2024-10-15 13:53:41.817943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.137 [2024-10-15 13:53:41.817962] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:28.137 [2024-10-15 13:53:41.817981] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:28.137 [2024-10-15 13:53:41.818015] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:28.137 [2024-10-15 13:53:41.818032] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:19:28.137 [2024-10-15 13:53:41.818134] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:28.137 [2024-10-15 13:53:41.818144] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:28.137 [2024-10-15 13:53:41.818154] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:28.137 [2024-10-15 13:53:41.818165] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:28.137 [2024-10-15 13:53:41.818174] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:28.137 [2024-10-15 13:53:41.818182] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:19:28.137 [2024-10-15 13:53:41.818189] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:28.137 [2024-10-15 13:53:41.818197] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:28.137 [2024-10-15 13:53:41.818204] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:28.137 [2024-10-15 13:53:41.818212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.137 [2024-10-15 13:53:41.818242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:28.137 [2024-10-15 13:53:41.818251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.253 ms 00:19:28.137 [2024-10-15 13:53:41.818258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.137 [2024-10-15 13:53:41.818342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.137 [2024-10-15 13:53:41.818351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:28.137 [2024-10-15 13:53:41.818358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:19:28.137 [2024-10-15 13:53:41.818365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.137 [2024-10-15 13:53:41.818476] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:28.137 [2024-10-15 13:53:41.818488] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:28.137 [2024-10-15 13:53:41.818498] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 
MiB 00:19:28.137 [2024-10-15 13:53:41.818506] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:28.137 [2024-10-15 13:53:41.818514] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:28.137 [2024-10-15 13:53:41.818520] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:28.138 [2024-10-15 13:53:41.818528] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:19:28.138 [2024-10-15 13:53:41.818535] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:28.138 [2024-10-15 13:53:41.818542] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:19:28.138 [2024-10-15 13:53:41.818549] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:28.138 [2024-10-15 13:53:41.818555] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:28.138 [2024-10-15 13:53:41.818562] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:19:28.138 [2024-10-15 13:53:41.818568] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:28.138 [2024-10-15 13:53:41.818574] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:28.138 [2024-10-15 13:53:41.818581] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:19:28.138 [2024-10-15 13:53:41.818593] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:28.138 [2024-10-15 13:53:41.818599] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:28.138 [2024-10-15 13:53:41.818606] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:19:28.138 [2024-10-15 13:53:41.818615] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:28.138 [2024-10-15 13:53:41.818622] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:28.138 [2024-10-15 13:53:41.818628] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:19:28.138 [2024-10-15 13:53:41.818634] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:28.138 [2024-10-15 13:53:41.818641] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:28.138 [2024-10-15 13:53:41.818647] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:19:28.138 [2024-10-15 13:53:41.818653] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:28.138 [2024-10-15 13:53:41.818659] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:28.138 [2024-10-15 13:53:41.818665] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:19:28.138 [2024-10-15 13:53:41.818671] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:28.138 [2024-10-15 13:53:41.818678] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:28.138 [2024-10-15 13:53:41.818685] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:19:28.138 [2024-10-15 13:53:41.818691] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:28.138 [2024-10-15 13:53:41.818697] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:28.138 [2024-10-15 13:53:41.818703] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:19:28.138 [2024-10-15 13:53:41.818710] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:28.138 [2024-10-15 13:53:41.818716] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region trim_md_mirror 00:19:28.138 [2024-10-15 13:53:41.818722] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:19:28.138 [2024-10-15 13:53:41.818729] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:28.138 [2024-10-15 13:53:41.818735] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:28.138 [2024-10-15 13:53:41.818742] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:19:28.138 [2024-10-15 13:53:41.818748] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:28.138 [2024-10-15 13:53:41.818754] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:28.138 [2024-10-15 13:53:41.818761] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:19:28.138 [2024-10-15 13:53:41.818767] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:28.138 [2024-10-15 13:53:41.818774] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:28.138 [2024-10-15 13:53:41.818781] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:28.138 [2024-10-15 13:53:41.818789] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:28.138 [2024-10-15 13:53:41.818796] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:28.138 [2024-10-15 13:53:41.818803] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:28.138 [2024-10-15 13:53:41.818809] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:28.138 [2024-10-15 13:53:41.818816] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:28.138 [2024-10-15 13:53:41.818823] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:28.138 [2024-10-15 13:53:41.818830] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:28.138 [2024-10-15 13:53:41.818837] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:28.138 [2024-10-15 13:53:41.818846] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:28.138 [2024-10-15 13:53:41.818862] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:28.138 [2024-10-15 13:53:41.818870] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:19:28.138 [2024-10-15 13:53:41.818878] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:19:28.138 [2024-10-15 13:53:41.818885] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:19:28.138 [2024-10-15 13:53:41.818893] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:19:28.138 [2024-10-15 13:53:41.818901] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:19:28.138 [2024-10-15 13:53:41.818908] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:19:28.138 [2024-10-15 13:53:41.818915] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:19:28.138 [2024-10-15 13:53:41.818923] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:19:28.138 [2024-10-15 13:53:41.818931] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:19:28.138 [2024-10-15 13:53:41.818938] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:19:28.138 [2024-10-15 13:53:41.818946] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:19:28.138 [2024-10-15 13:53:41.818953] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:19:28.138 [2024-10-15 13:53:41.818960] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:19:28.138 [2024-10-15 13:53:41.818968] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:19:28.138 [2024-10-15 13:53:41.818975] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:28.138 [2024-10-15 13:53:41.818983] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:28.138 [2024-10-15 13:53:41.818993] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:28.138 [2024-10-15 13:53:41.819000] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:28.138 [2024-10-15 13:53:41.819006] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:28.138 [2024-10-15 13:53:41.819013] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:28.138 [2024-10-15 13:53:41.819021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.138 [2024-10-15 13:53:41.819028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:28.138 [2024-10-15 13:53:41.819035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.613 ms 00:19:28.138 [2024-10-15 13:53:41.819043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.138 [2024-10-15 13:53:41.844595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.138 [2024-10-15 13:53:41.844743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:28.138 [2024-10-15 13:53:41.844759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.512 ms 00:19:28.138 [2024-10-15 13:53:41.844767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.138 [2024-10-15 13:53:41.844848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.138 [2024-10-15 13:53:41.844861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:28.138 [2024-10-15 13:53:41.844868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.061 ms 00:19:28.138 [2024-10-15 13:53:41.844875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.138 [2024-10-15 13:53:41.883878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.138 [2024-10-15 13:53:41.883913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:28.138 [2024-10-15 13:53:41.883924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.958 ms 00:19:28.138 [2024-10-15 13:53:41.883932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.138 [2024-10-15 13:53:41.883969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.138 [2024-10-15 13:53:41.883979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:28.138 [2024-10-15 13:53:41.883995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:28.138 [2024-10-15 13:53:41.884003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.138 [2024-10-15 13:53:41.884369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.138 [2024-10-15 13:53:41.884385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:28.138 [2024-10-15 13:53:41.884394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.314 ms 00:19:28.138 [2024-10-15 13:53:41.884401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.138 [2024-10-15 13:53:41.884522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.138 [2024-10-15 13:53:41.884531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:28.138 [2024-10-15 13:53:41.884540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.104 ms 00:19:28.138 [2024-10-15 13:53:41.884547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.138 [2024-10-15 13:53:41.897512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.138 [2024-10-15 13:53:41.897540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:28.138 [2024-10-15 13:53:41.897550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.946 ms 00:19:28.138 [2024-10-15 13:53:41.897557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.138 [2024-10-15 13:53:41.909864] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:19:28.138 [2024-10-15 13:53:41.909897] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:28.139 [2024-10-15 13:53:41.909909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.139 [2024-10-15 13:53:41.909918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:28.139 [2024-10-15 13:53:41.909927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.250 ms 00:19:28.139 [2024-10-15 13:53:41.909935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.397 [2024-10-15 13:53:41.933988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.397 [2024-10-15 13:53:41.934130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:28.397 [2024-10-15 13:53:41.934147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.016 ms 00:19:28.397 [2024-10-15 13:53:41.934160] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.397 [2024-10-15 13:53:41.945497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.397 [2024-10-15 13:53:41.945532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:28.397 [2024-10-15 13:53:41.945542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.314 ms 00:19:28.397 [2024-10-15 13:53:41.945549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.397 [2024-10-15 13:53:41.956909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.397 [2024-10-15 13:53:41.957029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:28.397 [2024-10-15 13:53:41.957044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.328 ms 00:19:28.397 [2024-10-15 13:53:41.957052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.397 [2024-10-15 13:53:41.957670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.397 [2024-10-15 13:53:41.957689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:28.397 [2024-10-15 13:53:41.957698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.542 ms 00:19:28.398 [2024-10-15 13:53:41.957705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.398 [2024-10-15 13:53:42.012127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.398 [2024-10-15 13:53:42.012287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:28.398 [2024-10-15 13:53:42.012308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 54.403 ms 00:19:28.398 [2024-10-15 13:53:42.012317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.398 [2024-10-15 13:53:42.022505] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:19:28.398 [2024-10-15 13:53:42.024975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.398 [2024-10-15 13:53:42.025005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:28.398 [2024-10-15 13:53:42.025017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.613 ms 00:19:28.398 [2024-10-15 13:53:42.025026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.398 [2024-10-15 13:53:42.025122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.398 [2024-10-15 13:53:42.025134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:28.398 [2024-10-15 13:53:42.025143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:19:28.398 [2024-10-15 13:53:42.025151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.398 [2024-10-15 13:53:42.025213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.398 [2024-10-15 13:53:42.025246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:28.398 [2024-10-15 13:53:42.025255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:19:28.398 [2024-10-15 13:53:42.025262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.398 [2024-10-15 13:53:42.025281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.398 [2024-10-15 13:53:42.025289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Start core poller 00:19:28.398 [2024-10-15 13:53:42.025297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:28.398 [2024-10-15 13:53:42.025304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.398 [2024-10-15 13:53:42.025332] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:28.398 [2024-10-15 13:53:42.025343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.398 [2024-10-15 13:53:42.025350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:28.398 [2024-10-15 13:53:42.025360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:19:28.398 [2024-10-15 13:53:42.025367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.398 [2024-10-15 13:53:42.049519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.398 [2024-10-15 13:53:42.049550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:28.398 [2024-10-15 13:53:42.049562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.135 ms 00:19:28.398 [2024-10-15 13:53:42.049570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.398 [2024-10-15 13:53:42.049638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.398 [2024-10-15 13:53:42.049649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:28.398 [2024-10-15 13:53:42.049658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:19:28.398 [2024-10-15 13:53:42.049665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.398 [2024-10-15 13:53:42.050547] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 255.130 ms, result 0 00:19:29.333  [2024-10-15T13:53:44.494Z] Copying: 15/1024 [MB] (15 MBps) [2024-10-15T13:53:45.429Z] Copying: 34/1024 [MB] (18 MBps) [2024-10-15T13:53:46.364Z] Copying: 49/1024 [MB] (15 MBps) [2024-10-15T13:53:47.298Z] Copying: 61/1024 [MB] (11 MBps) [2024-10-15T13:53:48.231Z] Copying: 72/1024 [MB] (11 MBps) [2024-10-15T13:53:49.165Z] Copying: 83/1024 [MB] (11 MBps) [2024-10-15T13:53:50.114Z] Copying: 95/1024 [MB] (11 MBps) [2024-10-15T13:53:51.495Z] Copying: 106/1024 [MB] (11 MBps) [2024-10-15T13:53:52.428Z] Copying: 135/1024 [MB] (28 MBps) [2024-10-15T13:53:53.372Z] Copying: 157/1024 [MB] (22 MBps) [2024-10-15T13:53:54.305Z] Copying: 169/1024 [MB] (12 MBps) [2024-10-15T13:53:55.238Z] Copying: 185/1024 [MB] (15 MBps) [2024-10-15T13:53:56.172Z] Copying: 201/1024 [MB] (16 MBps) [2024-10-15T13:53:57.106Z] Copying: 222/1024 [MB] (20 MBps) [2024-10-15T13:53:58.478Z] Copying: 245/1024 [MB] (23 MBps) [2024-10-15T13:53:59.412Z] Copying: 261/1024 [MB] (16 MBps) [2024-10-15T13:54:00.347Z] Copying: 279/1024 [MB] (17 MBps) [2024-10-15T13:54:01.285Z] Copying: 298/1024 [MB] (18 MBps) [2024-10-15T13:54:02.224Z] Copying: 314/1024 [MB] (16 MBps) [2024-10-15T13:54:03.164Z] Copying: 333/1024 [MB] (19 MBps) [2024-10-15T13:54:04.106Z] Copying: 347/1024 [MB] (13 MBps) [2024-10-15T13:54:05.493Z] Copying: 361/1024 [MB] (14 MBps) [2024-10-15T13:54:06.080Z] Copying: 375/1024 [MB] (14 MBps) [2024-10-15T13:54:07.463Z] Copying: 393/1024 [MB] (18 MBps) [2024-10-15T13:54:08.398Z] Copying: 408/1024 [MB] (14 MBps) [2024-10-15T13:54:09.330Z] Copying: 421/1024 [MB] (12 MBps) [2024-10-15T13:54:10.264Z] Copying: 436/1024 [MB] (15 MBps) 
[2024-10-15T13:54:11.198Z] Copying: 452/1024 [MB] (15 MBps) [2024-10-15T13:54:12.131Z] Copying: 464/1024 [MB] (11 MBps) [2024-10-15T13:54:13.065Z] Copying: 476/1024 [MB] (11 MBps) [2024-10-15T13:54:14.441Z] Copying: 488/1024 [MB] (12 MBps) [2024-10-15T13:54:15.376Z] Copying: 501/1024 [MB] (12 MBps) [2024-10-15T13:54:16.313Z] Copying: 513/1024 [MB] (11 MBps) [2024-10-15T13:54:17.248Z] Copying: 527/1024 [MB] (14 MBps) [2024-10-15T13:54:18.182Z] Copying: 544/1024 [MB] (16 MBps) [2024-10-15T13:54:19.116Z] Copying: 564/1024 [MB] (19 MBps) [2024-10-15T13:54:20.491Z] Copying: 575/1024 [MB] (11 MBps) [2024-10-15T13:54:21.425Z] Copying: 586/1024 [MB] (10 MBps) [2024-10-15T13:54:22.385Z] Copying: 597/1024 [MB] (11 MBps) [2024-10-15T13:54:23.321Z] Copying: 613/1024 [MB] (15 MBps) [2024-10-15T13:54:24.257Z] Copying: 625/1024 [MB] (12 MBps) [2024-10-15T13:54:25.192Z] Copying: 637/1024 [MB] (12 MBps) [2024-10-15T13:54:26.126Z] Copying: 670/1024 [MB] (32 MBps) [2024-10-15T13:54:27.501Z] Copying: 683/1024 [MB] (12 MBps) [2024-10-15T13:54:28.067Z] Copying: 699/1024 [MB] (16 MBps) [2024-10-15T13:54:29.442Z] Copying: 713/1024 [MB] (13 MBps) [2024-10-15T13:54:30.380Z] Copying: 724/1024 [MB] (11 MBps) [2024-10-15T13:54:31.356Z] Copying: 742/1024 [MB] (17 MBps) [2024-10-15T13:54:32.290Z] Copying: 771/1024 [MB] (28 MBps) [2024-10-15T13:54:33.223Z] Copying: 815/1024 [MB] (44 MBps) [2024-10-15T13:54:34.158Z] Copying: 854/1024 [MB] (39 MBps) [2024-10-15T13:54:35.091Z] Copying: 882/1024 [MB] (27 MBps) [2024-10-15T13:54:36.469Z] Copying: 901/1024 [MB] (19 MBps) [2024-10-15T13:54:37.403Z] Copying: 924/1024 [MB] (22 MBps) [2024-10-15T13:54:38.337Z] Copying: 948/1024 [MB] (24 MBps) [2024-10-15T13:54:39.302Z] Copying: 968/1024 [MB] (20 MBps) [2024-10-15T13:54:40.236Z] Copying: 990/1024 [MB] (21 MBps) [2024-10-15T13:54:40.494Z] Copying: 1014/1024 [MB] (23 MBps) [2024-10-15T13:54:40.494Z] Copying: 1024/1024 [MB] (average 17 MBps)[2024-10-15 13:54:40.431829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.706 [2024-10-15 13:54:40.431872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:26.706 [2024-10-15 13:54:40.431885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:26.706 [2024-10-15 13:54:40.431893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.706 [2024-10-15 13:54:40.431913] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:26.706 [2024-10-15 13:54:40.434554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.706 [2024-10-15 13:54:40.434581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:26.706 [2024-10-15 13:54:40.434592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.627 ms 00:20:26.706 [2024-10-15 13:54:40.434601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.706 [2024-10-15 13:54:40.436364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.706 [2024-10-15 13:54:40.436399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:26.706 [2024-10-15 13:54:40.436409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.742 ms 00:20:26.706 [2024-10-15 13:54:40.436416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.706 [2024-10-15 13:54:40.452775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.706 [2024-10-15 
13:54:40.452807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:26.706 [2024-10-15 13:54:40.452817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.344 ms 00:20:26.706 [2024-10-15 13:54:40.452824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.706 [2024-10-15 13:54:40.458986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.706 [2024-10-15 13:54:40.459013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:26.706 [2024-10-15 13:54:40.459028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.133 ms 00:20:26.706 [2024-10-15 13:54:40.459035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.706 [2024-10-15 13:54:40.483023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.706 [2024-10-15 13:54:40.483057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:26.706 [2024-10-15 13:54:40.483067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.945 ms 00:20:26.706 [2024-10-15 13:54:40.483074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.965 [2024-10-15 13:54:40.496295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.965 [2024-10-15 13:54:40.496442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:26.965 [2024-10-15 13:54:40.496458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.191 ms 00:20:26.965 [2024-10-15 13:54:40.496466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.965 [2024-10-15 13:54:40.496571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.965 [2024-10-15 13:54:40.496581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:26.965 [2024-10-15 13:54:40.496589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:20:26.965 [2024-10-15 13:54:40.496601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.965 [2024-10-15 13:54:40.519964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.965 [2024-10-15 13:54:40.520088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:26.965 [2024-10-15 13:54:40.520103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.350 ms 00:20:26.965 [2024-10-15 13:54:40.520110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.965 [2024-10-15 13:54:40.542819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.965 [2024-10-15 13:54:40.542930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:26.965 [2024-10-15 13:54:40.542952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.682 ms 00:20:26.965 [2024-10-15 13:54:40.542958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.965 [2024-10-15 13:54:40.565334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.965 [2024-10-15 13:54:40.565446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:26.965 [2024-10-15 13:54:40.565460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.349 ms 00:20:26.965 [2024-10-15 13:54:40.565467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.965 [2024-10-15 13:54:40.588049] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.965 [2024-10-15 13:54:40.588078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:26.965 [2024-10-15 13:54:40.588088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.534 ms 00:20:26.965 [2024-10-15 13:54:40.588095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.965 [2024-10-15 13:54:40.588123] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:26.965 [2024-10-15 13:54:40.588137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:26.965 [2024-10-15 13:54:40.588147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:26.965 [2024-10-15 13:54:40.588155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:26.965 [2024-10-15 13:54:40.588163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:26.965 [2024-10-15 13:54:40.588170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:26.965 [2024-10-15 13:54:40.588177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:26.965 [2024-10-15 13:54:40.588185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:26.965 [2024-10-15 13:54:40.588192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:26.965 [2024-10-15 13:54:40.588200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:26.966 [2024-10-15 13:54:40.588207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:26.966 [2024-10-15 13:54:40.588214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:26.966 [2024-10-15 13:54:40.588236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:26.966 [2024-10-15 13:54:40.588244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:26.966 [2024-10-15 13:54:40.588251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:26.966 [2024-10-15 13:54:40.588259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:26.966 [2024-10-15 13:54:40.588280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:26.966 [2024-10-15 13:54:40.588287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:26.966 [2024-10-15 13:54:40.588295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:26.966 [2024-10-15 13:54:40.588303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:26.966 [2024-10-15 13:54:40.588310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:26.966 [2024-10-15 13:54:40.588317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:26.966 [2024-10-15 13:54:40.588324] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:26.966 [2024-10-15 13:54:40.588331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:26.966 [2024-10-15 13:54:40.588338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:26.966 [2024-10-15 13:54:40.588345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:26.966 [2024-10-15 13:54:40.588353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:26.966 [2024-10-15 13:54:40.588360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:26.966 [2024-10-15 13:54:40.588367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:26.966 [2024-10-15 13:54:40.588375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:26.966 [2024-10-15 13:54:40.588383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:26.966 [2024-10-15 13:54:40.588391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:26.966 [2024-10-15 13:54:40.588398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:26.966 [2024-10-15 13:54:40.588409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:26.966 [2024-10-15 13:54:40.588417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:26.966 [2024-10-15 13:54:40.588424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:26.966 [2024-10-15 13:54:40.588431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:26.966 [2024-10-15 13:54:40.588438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:26.966 [2024-10-15 13:54:40.588445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:26.966 [2024-10-15 13:54:40.588452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:26.966 [2024-10-15 13:54:40.588459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:26.966 [2024-10-15 13:54:40.588466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:26.966 [2024-10-15 13:54:40.588473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:26.966 [2024-10-15 13:54:40.588479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:26.966 [2024-10-15 13:54:40.588486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:26.966 [2024-10-15 13:54:40.588493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:26.966 [2024-10-15 13:54:40.588500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:26.966 
[2024-10-15 13:54:40.588507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:26.966 [2024-10-15 13:54:40.588514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:26.966 [2024-10-15 13:54:40.588521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:26.966 [2024-10-15 13:54:40.588528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:26.966 [2024-10-15 13:54:40.588535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:26.966 [2024-10-15 13:54:40.588542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:26.966 [2024-10-15 13:54:40.588549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:26.966 [2024-10-15 13:54:40.588556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:26.966 [2024-10-15 13:54:40.588565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:26.966 [2024-10-15 13:54:40.588571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:26.966 [2024-10-15 13:54:40.588578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:26.966 [2024-10-15 13:54:40.588585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:26.966 [2024-10-15 13:54:40.588593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:26.966 [2024-10-15 13:54:40.588599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:26.966 [2024-10-15 13:54:40.588607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:26.966 [2024-10-15 13:54:40.588615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:26.966 [2024-10-15 13:54:40.588623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:26.966 [2024-10-15 13:54:40.588630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:26.966 [2024-10-15 13:54:40.588637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:26.966 [2024-10-15 13:54:40.588644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:26.966 [2024-10-15 13:54:40.588651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:26.966 [2024-10-15 13:54:40.588658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:26.966 [2024-10-15 13:54:40.588664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:26.966 [2024-10-15 13:54:40.588672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:26.966 [2024-10-15 13:54:40.588678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 
state: free 00:20:26.966 [2024-10-15 13:54:40.588685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:26.966 [2024-10-15 13:54:40.588692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:26.966 [2024-10-15 13:54:40.588700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:26.966 [2024-10-15 13:54:40.588707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:26.966 [2024-10-15 13:54:40.588713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:26.966 [2024-10-15 13:54:40.588720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:26.966 [2024-10-15 13:54:40.588727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:26.966 [2024-10-15 13:54:40.588734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:26.966 [2024-10-15 13:54:40.588741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:26.966 [2024-10-15 13:54:40.588748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:26.966 [2024-10-15 13:54:40.588756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:26.966 [2024-10-15 13:54:40.588762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:26.966 [2024-10-15 13:54:40.588770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:26.966 [2024-10-15 13:54:40.588777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:26.966 [2024-10-15 13:54:40.588783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:26.966 [2024-10-15 13:54:40.588790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:26.966 [2024-10-15 13:54:40.588797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:26.966 [2024-10-15 13:54:40.588804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:26.966 [2024-10-15 13:54:40.588811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:26.966 [2024-10-15 13:54:40.588818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:26.966 [2024-10-15 13:54:40.588825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:26.966 [2024-10-15 13:54:40.588832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:26.966 [2024-10-15 13:54:40.588840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:26.966 [2024-10-15 13:54:40.588847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:26.966 [2024-10-15 13:54:40.588854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 
0 / 261120 wr_cnt: 0 state: free 00:20:26.966 [2024-10-15 13:54:40.588861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:26.966 [2024-10-15 13:54:40.588868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:26.966 [2024-10-15 13:54:40.588875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:26.966 [2024-10-15 13:54:40.588882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:26.966 [2024-10-15 13:54:40.588897] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:26.966 [2024-10-15 13:54:40.588904] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: e8526d0e-2f90-448e-896b-b14df4f43d8b 00:20:26.966 [2024-10-15 13:54:40.588916] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:26.966 [2024-10-15 13:54:40.588923] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:26.966 [2024-10-15 13:54:40.588932] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:26.967 [2024-10-15 13:54:40.588940] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:26.967 [2024-10-15 13:54:40.588946] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:26.967 [2024-10-15 13:54:40.588953] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:26.967 [2024-10-15 13:54:40.588960] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:26.967 [2024-10-15 13:54:40.588971] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:26.967 [2024-10-15 13:54:40.588978] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:26.967 [2024-10-15 13:54:40.588984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.967 [2024-10-15 13:54:40.588991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:26.967 [2024-10-15 13:54:40.588999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.861 ms 00:20:26.967 [2024-10-15 13:54:40.589006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.967 [2024-10-15 13:54:40.601177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.967 [2024-10-15 13:54:40.601204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:26.967 [2024-10-15 13:54:40.601214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.154 ms 00:20:26.967 [2024-10-15 13:54:40.601245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.967 [2024-10-15 13:54:40.601585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.967 [2024-10-15 13:54:40.601601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:26.967 [2024-10-15 13:54:40.601609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.314 ms 00:20:26.967 [2024-10-15 13:54:40.601616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.967 [2024-10-15 13:54:40.634513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:26.967 [2024-10-15 13:54:40.634624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:26.967 [2024-10-15 13:54:40.634638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.000 ms 00:20:26.967 [2024-10-15 13:54:40.634645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.967 [2024-10-15 13:54:40.634691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:26.967 [2024-10-15 13:54:40.634699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:26.967 [2024-10-15 13:54:40.634706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:26.967 [2024-10-15 13:54:40.634713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.967 [2024-10-15 13:54:40.634772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:26.967 [2024-10-15 13:54:40.634786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:26.967 [2024-10-15 13:54:40.634794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:26.967 [2024-10-15 13:54:40.634801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.967 [2024-10-15 13:54:40.634815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:26.967 [2024-10-15 13:54:40.634823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:26.967 [2024-10-15 13:54:40.634830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:26.967 [2024-10-15 13:54:40.634837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.967 [2024-10-15 13:54:40.711680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:26.967 [2024-10-15 13:54:40.711714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:26.967 [2024-10-15 13:54:40.711724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:26.967 [2024-10-15 13:54:40.711731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.225 [2024-10-15 13:54:40.774753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:27.225 [2024-10-15 13:54:40.774792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:27.225 [2024-10-15 13:54:40.774802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:27.225 [2024-10-15 13:54:40.774810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.225 [2024-10-15 13:54:40.774854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:27.225 [2024-10-15 13:54:40.774863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:27.225 [2024-10-15 13:54:40.774875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:27.225 [2024-10-15 13:54:40.774882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.225 [2024-10-15 13:54:40.774928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:27.225 [2024-10-15 13:54:40.774937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:27.225 [2024-10-15 13:54:40.774945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:27.225 [2024-10-15 13:54:40.774952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.225 [2024-10-15 13:54:40.775032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:27.225 [2024-10-15 13:54:40.775043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:27.225 
[2024-10-15 13:54:40.775057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:27.225 [2024-10-15 13:54:40.775064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.225 [2024-10-15 13:54:40.775091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:27.225 [2024-10-15 13:54:40.775099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:27.225 [2024-10-15 13:54:40.775107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:27.225 [2024-10-15 13:54:40.775114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.225 [2024-10-15 13:54:40.775146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:27.225 [2024-10-15 13:54:40.775154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:27.225 [2024-10-15 13:54:40.775162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:27.225 [2024-10-15 13:54:40.775172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.225 [2024-10-15 13:54:40.775211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:27.225 [2024-10-15 13:54:40.775244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:27.225 [2024-10-15 13:54:40.775253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:27.225 [2024-10-15 13:54:40.775260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.225 [2024-10-15 13:54:40.775369] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 343.512 ms, result 0 00:20:28.160 00:20:28.160 00:20:28.160 13:54:41 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:20:28.160 [2024-10-15 13:54:41.802731] Starting SPDK v25.01-pre git sha1 5a8c76d99 / DPDK 24.03.0 initialization... 
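The `ftl.ftl_restore` step above is the read-back half of the restore test: after the clean FTL shutdown recorded above, `spdk_dd` is pointed at the same FTL bdev and copies its contents back out to a regular file. A minimal sketch of that invocation, using the flags and paths visible in this log (the `cmp` verification against a pre-shutdown reference copy is an assumed illustration, not the literal `restore.sh` contents):

```bash
SPDK_DIR=/home/vagrant/spdk_repo/spdk
FTL_JSON="$SPDK_DIR/test/ftl/config/ftl.json"   # bdev config saved earlier in the test
TESTFILE="$SPDK_DIR/test/ftl/testfile"

# --ib names the input bdev (the restored ftl0), --of a plain output file,
# --json the subsystem config that recreates the bdev stack inside spdk_dd,
# --count the number of input blocks to copy (262144 x 4 KiB = 1 GiB here).
"$SPDK_DIR/build/bin/spdk_dd" --ib=ftl0 --of="$TESTFILE" \
    --json="$FTL_JSON" --count=262144

# Hypothetical integrity check: data read after the restore should match
# what was written before the shutdown.
cmp "$TESTFILE" "$TESTFILE.orig" && echo "restore OK"
```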
00:20:28.160 [2024-10-15 13:54:41.803036] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75274 ] 00:20:28.420 [2024-10-15 13:54:41.952749] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:28.420 [2024-10-15 13:54:42.046845] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:28.681 [2024-10-15 13:54:42.328511] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:28.681 [2024-10-15 13:54:42.328587] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:28.944 [2024-10-15 13:54:42.491120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.944 [2024-10-15 13:54:42.491190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:28.944 [2024-10-15 13:54:42.491206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:28.944 [2024-10-15 13:54:42.491242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.944 [2024-10-15 13:54:42.491300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.944 [2024-10-15 13:54:42.491312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:28.944 [2024-10-15 13:54:42.491321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:20:28.944 [2024-10-15 13:54:42.491332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.944 [2024-10-15 13:54:42.491354] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:28.944 [2024-10-15 13:54:42.492066] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:28.944 [2024-10-15 13:54:42.492106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.944 [2024-10-15 13:54:42.492118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:28.944 [2024-10-15 13:54:42.492128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.758 ms 00:20:28.944 [2024-10-15 13:54:42.492136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.944 [2024-10-15 13:54:42.493965] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:28.944 [2024-10-15 13:54:42.508347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.944 [2024-10-15 13:54:42.508400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:28.944 [2024-10-15 13:54:42.508414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.384 ms 00:20:28.944 [2024-10-15 13:54:42.508423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.944 [2024-10-15 13:54:42.508503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.944 [2024-10-15 13:54:42.508514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:28.944 [2024-10-15 13:54:42.508526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:20:28.944 [2024-10-15 13:54:42.508535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.944 [2024-10-15 13:54:42.516711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
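The startup trace beginning here shows the two-device layout an FTL instance runs on: a large base bdev for main storage plus a faster NV-cache bdev (here the partition `nvc0n1p0`) as the write-buffer cache, followed by the superblock load that decides whether this is a fresh start or a restore. A rough sketch of how such a stack is typically assembled with SPDK's `rpc.py` (RPC and flag names can vary between SPDK versions, and `base_bdev` is a placeholder; treat this as an assumed outline, not the exact test script):

```bash
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Split the cache NVMe namespace so one partition (nvc0n1p0) can serve
# as the FTL write-buffer cache, matching the name seen in this log.
$RPC bdev_split_create nvc0n1 2

# Create the FTL bdev over base + cache. (Re-attaching an existing instance
# may instead go through a UUID argument or a separate load RPC, depending
# on the SPDK version.)
$RPC bdev_ftl_create -b ftl0 -d base_bdev -c nvc0n1p0
```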
00:20:28.944 [2024-10-15 13:54:42.516758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:28.944 [2024-10-15 13:54:42.516768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.098 ms 00:20:28.944 [2024-10-15 13:54:42.516776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.944 [2024-10-15 13:54:42.516860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.944 [2024-10-15 13:54:42.516870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:28.944 [2024-10-15 13:54:42.516879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:20:28.944 [2024-10-15 13:54:42.516888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.944 [2024-10-15 13:54:42.516933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.944 [2024-10-15 13:54:42.516943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:28.944 [2024-10-15 13:54:42.516952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:28.944 [2024-10-15 13:54:42.516961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.944 [2024-10-15 13:54:42.516984] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:28.944 [2024-10-15 13:54:42.521122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.944 [2024-10-15 13:54:42.521164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:28.944 [2024-10-15 13:54:42.521175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.144 ms 00:20:28.944 [2024-10-15 13:54:42.521184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.944 [2024-10-15 13:54:42.521240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.944 [2024-10-15 13:54:42.521250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:28.944 [2024-10-15 13:54:42.521259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:20:28.944 [2024-10-15 13:54:42.521267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.944 [2024-10-15 13:54:42.521321] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:28.944 [2024-10-15 13:54:42.521346] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:28.944 [2024-10-15 13:54:42.521383] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:28.944 [2024-10-15 13:54:42.521403] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:28.944 [2024-10-15 13:54:42.521510] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:28.944 [2024-10-15 13:54:42.521522] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:28.944 [2024-10-15 13:54:42.521534] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:28.944 [2024-10-15 13:54:42.521545] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:28.944 [2024-10-15 13:54:42.521554] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:28.944 [2024-10-15 13:54:42.521563] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:20:28.944 [2024-10-15 13:54:42.521570] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:28.944 [2024-10-15 13:54:42.521578] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:28.944 [2024-10-15 13:54:42.521586] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:28.944 [2024-10-15 13:54:42.521595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.944 [2024-10-15 13:54:42.521606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:28.945 [2024-10-15 13:54:42.521614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.276 ms 00:20:28.945 [2024-10-15 13:54:42.521622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.945 [2024-10-15 13:54:42.521708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.945 [2024-10-15 13:54:42.521717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:28.945 [2024-10-15 13:54:42.521725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:20:28.945 [2024-10-15 13:54:42.521733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.945 [2024-10-15 13:54:42.521837] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:28.945 [2024-10-15 13:54:42.521848] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:28.945 [2024-10-15 13:54:42.521861] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:28.945 [2024-10-15 13:54:42.521869] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:28.945 [2024-10-15 13:54:42.521877] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:28.945 [2024-10-15 13:54:42.521884] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:28.945 [2024-10-15 13:54:42.521891] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:20:28.945 [2024-10-15 13:54:42.521898] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:28.945 [2024-10-15 13:54:42.521907] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:20:28.945 [2024-10-15 13:54:42.521913] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:28.945 [2024-10-15 13:54:42.521920] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:28.945 [2024-10-15 13:54:42.521927] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:20:28.945 [2024-10-15 13:54:42.521933] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:28.945 [2024-10-15 13:54:42.521940] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:28.945 [2024-10-15 13:54:42.521947] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:20:28.945 [2024-10-15 13:54:42.521960] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:28.945 [2024-10-15 13:54:42.521971] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:28.945 [2024-10-15 13:54:42.521977] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:20:28.945 [2024-10-15 13:54:42.521984] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:28.945 [2024-10-15 13:54:42.521991] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:28.945 [2024-10-15 13:54:42.521998] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:20:28.945 [2024-10-15 13:54:42.522005] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:28.945 [2024-10-15 13:54:42.522011] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:28.945 [2024-10-15 13:54:42.522017] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:20:28.945 [2024-10-15 13:54:42.522023] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:28.945 [2024-10-15 13:54:42.522030] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:28.945 [2024-10-15 13:54:42.522036] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:20:28.945 [2024-10-15 13:54:42.522042] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:28.945 [2024-10-15 13:54:42.522050] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:28.945 [2024-10-15 13:54:42.522057] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:20:28.945 [2024-10-15 13:54:42.522064] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:28.945 [2024-10-15 13:54:42.522070] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:28.945 [2024-10-15 13:54:42.522077] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:20:28.945 [2024-10-15 13:54:42.522083] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:28.945 [2024-10-15 13:54:42.522090] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:28.945 [2024-10-15 13:54:42.522098] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:20:28.945 [2024-10-15 13:54:42.522105] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:28.945 [2024-10-15 13:54:42.522112] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:28.945 [2024-10-15 13:54:42.522118] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:20:28.945 [2024-10-15 13:54:42.522124] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:28.945 [2024-10-15 13:54:42.522131] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:28.945 [2024-10-15 13:54:42.522137] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:20:28.945 [2024-10-15 13:54:42.522145] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:28.945 [2024-10-15 13:54:42.522152] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:28.945 [2024-10-15 13:54:42.522160] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:28.945 [2024-10-15 13:54:42.522168] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:28.945 [2024-10-15 13:54:42.522175] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:28.945 [2024-10-15 13:54:42.522183] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:28.945 [2024-10-15 13:54:42.522192] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:28.945 [2024-10-15 13:54:42.522200] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:28.945 
[2024-10-15 13:54:42.522207] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:28.945 [2024-10-15 13:54:42.522214] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:28.945 [2024-10-15 13:54:42.522235] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:28.945 [2024-10-15 13:54:42.522244] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:28.945 [2024-10-15 13:54:42.522254] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:28.945 [2024-10-15 13:54:42.522263] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:20:28.945 [2024-10-15 13:54:42.522271] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:20:28.945 [2024-10-15 13:54:42.522279] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:20:28.945 [2024-10-15 13:54:42.522286] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:20:28.945 [2024-10-15 13:54:42.522294] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:20:28.945 [2024-10-15 13:54:42.522302] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:20:28.945 [2024-10-15 13:54:42.522311] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:20:28.945 [2024-10-15 13:54:42.522319] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:20:28.945 [2024-10-15 13:54:42.522327] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:20:28.945 [2024-10-15 13:54:42.522335] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:20:28.945 [2024-10-15 13:54:42.522344] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:20:28.945 [2024-10-15 13:54:42.522350] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:20:28.945 [2024-10-15 13:54:42.522358] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:20:28.945 [2024-10-15 13:54:42.522365] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:20:28.945 [2024-10-15 13:54:42.522373] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:28.945 [2024-10-15 13:54:42.522381] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:28.945 [2024-10-15 13:54:42.522392] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:20:28.945 [2024-10-15 13:54:42.522401] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:28.945 [2024-10-15 13:54:42.522409] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:28.945 [2024-10-15 13:54:42.522416] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:28.945 [2024-10-15 13:54:42.522424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.945 [2024-10-15 13:54:42.522437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:28.945 [2024-10-15 13:54:42.522445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.656 ms 00:20:28.945 [2024-10-15 13:54:42.522453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.945 [2024-10-15 13:54:42.554429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.945 [2024-10-15 13:54:42.554477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:28.945 [2024-10-15 13:54:42.554488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.931 ms 00:20:28.945 [2024-10-15 13:54:42.554497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.945 [2024-10-15 13:54:42.554587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.945 [2024-10-15 13:54:42.554603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:28.945 [2024-10-15 13:54:42.554612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:20:28.945 [2024-10-15 13:54:42.554620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.945 [2024-10-15 13:54:42.598979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.945 [2024-10-15 13:54:42.599033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:28.945 [2024-10-15 13:54:42.599048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.298 ms 00:20:28.945 [2024-10-15 13:54:42.599057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.945 [2024-10-15 13:54:42.599107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.945 [2024-10-15 13:54:42.599118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:28.945 [2024-10-15 13:54:42.599127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:28.945 [2024-10-15 13:54:42.599136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.945 [2024-10-15 13:54:42.599761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.945 [2024-10-15 13:54:42.599786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:28.945 [2024-10-15 13:54:42.599797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.544 ms 00:20:28.945 [2024-10-15 13:54:42.599805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.945 [2024-10-15 13:54:42.599970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.946 [2024-10-15 13:54:42.599991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:28.946 [2024-10-15 13:54:42.600001] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.132 ms 00:20:28.946 [2024-10-15 13:54:42.600026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.946 [2024-10-15 13:54:42.615724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.946 [2024-10-15 13:54:42.615939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:28.946 [2024-10-15 13:54:42.615960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.673 ms 00:20:28.946 [2024-10-15 13:54:42.615975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.946 [2024-10-15 13:54:42.630650] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:20:28.946 [2024-10-15 13:54:42.630837] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:28.946 [2024-10-15 13:54:42.630857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.946 [2024-10-15 13:54:42.630866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:28.946 [2024-10-15 13:54:42.630877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.738 ms 00:20:28.946 [2024-10-15 13:54:42.630885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.946 [2024-10-15 13:54:42.656688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.946 [2024-10-15 13:54:42.656755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:28.946 [2024-10-15 13:54:42.656776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.670 ms 00:20:28.946 [2024-10-15 13:54:42.656785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.946 [2024-10-15 13:54:42.669923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.946 [2024-10-15 13:54:42.669968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:28.946 [2024-10-15 13:54:42.669981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.083 ms 00:20:28.946 [2024-10-15 13:54:42.669989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.946 [2024-10-15 13:54:42.682506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.946 [2024-10-15 13:54:42.682553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:28.946 [2024-10-15 13:54:42.682565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.466 ms 00:20:28.946 [2024-10-15 13:54:42.682573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.946 [2024-10-15 13:54:42.683250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.946 [2024-10-15 13:54:42.683282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:28.946 [2024-10-15 13:54:42.683294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.565 ms 00:20:28.946 [2024-10-15 13:54:42.683303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.208 [2024-10-15 13:54:42.748580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.208 [2024-10-15 13:54:42.748833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:29.208 [2024-10-15 13:54:42.748860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 65.251 ms 00:20:29.208 [2024-10-15 13:54:42.748878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.208 [2024-10-15 13:54:42.760258] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:20:29.208 [2024-10-15 13:54:42.763388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.208 [2024-10-15 13:54:42.763564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:29.208 [2024-10-15 13:54:42.763584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.368 ms 00:20:29.208 [2024-10-15 13:54:42.763595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.208 [2024-10-15 13:54:42.763696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.208 [2024-10-15 13:54:42.763709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:29.208 [2024-10-15 13:54:42.763719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:20:29.208 [2024-10-15 13:54:42.763727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.208 [2024-10-15 13:54:42.763803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.208 [2024-10-15 13:54:42.763815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:29.208 [2024-10-15 13:54:42.763824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:20:29.208 [2024-10-15 13:54:42.763833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.208 [2024-10-15 13:54:42.763856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.208 [2024-10-15 13:54:42.763865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:29.208 [2024-10-15 13:54:42.763875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:29.208 [2024-10-15 13:54:42.763883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.208 [2024-10-15 13:54:42.763919] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:29.208 [2024-10-15 13:54:42.763931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.208 [2024-10-15 13:54:42.763942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:29.208 [2024-10-15 13:54:42.763951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:20:29.208 [2024-10-15 13:54:42.763959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.208 [2024-10-15 13:54:42.789951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.208 [2024-10-15 13:54:42.790005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:29.208 [2024-10-15 13:54:42.790020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.971 ms 00:20:29.208 [2024-10-15 13:54:42.790029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.208 [2024-10-15 13:54:42.790127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.208 [2024-10-15 13:54:42.790138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:29.208 [2024-10-15 13:54:42.790148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:20:29.208 [2024-10-15 13:54:42.790157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
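Each management step above is traced as a fixed quadruple (`Action`/`Rollback`, `name:`, `duration:`, `status:`), so a time budget for the whole startup can be pulled mechanically out of a captured console log. A small sketch, assuming the output has been saved to a file called `build.log` with one entry per line:

```bash
# Pair every trace_step "name:" with the "duration:" that follows it,
# then list the most expensive steps first.
awk '
  /trace_step.*name:/     { sub(/.*name: /, ""); name = $0 }
  /trace_step.*duration:/ { sub(/.*duration: /, ""); sub(/ ms.*/, "");
                            printf "%10.3f ms  %s\n", $0, name }
' build.log | sort -rn | head
```

On this run that ranking is led by "Restore P2L checkpoints" (65.251 ms) and "Initialize NV cache" (44.298 ms), consistent with the 299.877 ms total reported for 'FTL startup' just below.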
00:20:29.208 [2024-10-15 13:54:42.791523] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 299.877 ms, result 0 00:20:30.595  [2024-10-15T13:54:45.327Z] Copying: 14/1024 [MB] (14 MBps) [2024-10-15T13:54:46.262Z] Copying: 30/1024 [MB] (16 MBps) [2024-10-15T13:54:47.203Z] Copying: 43/1024 [MB] (12 MBps) [2024-10-15T13:54:48.141Z] Copying: 63/1024 [MB] (19 MBps) [2024-10-15T13:54:49.075Z] Copying: 80/1024 [MB] (16 MBps) [2024-10-15T13:54:50.009Z] Copying: 92/1024 [MB] (12 MBps) [2024-10-15T13:54:51.386Z] Copying: 112/1024 [MB] (20 MBps) [2024-10-15T13:54:52.321Z] Copying: 126/1024 [MB] (13 MBps) [2024-10-15T13:54:53.264Z] Copying: 146/1024 [MB] (20 MBps) [2024-10-15T13:54:54.208Z] Copying: 165/1024 [MB] (18 MBps) [2024-10-15T13:54:55.227Z] Copying: 185/1024 [MB] (19 MBps) [2024-10-15T13:54:56.171Z] Copying: 200/1024 [MB] (14 MBps) [2024-10-15T13:54:57.114Z] Copying: 214764/1048576 [kB] (9964 kBps) [2024-10-15T13:54:58.058Z] Copying: 220/1024 [MB] (10 MBps) [2024-10-15T13:54:59.002Z] Copying: 231/1024 [MB] (11 MBps) [2024-10-15T13:55:00.392Z] Copying: 244/1024 [MB] (12 MBps) [2024-10-15T13:55:01.348Z] Copying: 260/1024 [MB] (16 MBps) [2024-10-15T13:55:02.293Z] Copying: 279/1024 [MB] (18 MBps) [2024-10-15T13:55:03.358Z] Copying: 296/1024 [MB] (17 MBps) [2024-10-15T13:55:04.311Z] Copying: 310/1024 [MB] (13 MBps) [2024-10-15T13:55:05.255Z] Copying: 324/1024 [MB] (13 MBps) [2024-10-15T13:55:06.199Z] Copying: 336/1024 [MB] (11 MBps) [2024-10-15T13:55:07.142Z] Copying: 356/1024 [MB] (19 MBps) [2024-10-15T13:55:08.084Z] Copying: 366/1024 [MB] (10 MBps) [2024-10-15T13:55:09.026Z] Copying: 387/1024 [MB] (21 MBps) [2024-10-15T13:55:10.410Z] Copying: 403/1024 [MB] (15 MBps) [2024-10-15T13:55:10.981Z] Copying: 418/1024 [MB] (15 MBps) [2024-10-15T13:55:12.364Z] Copying: 430/1024 [MB] (11 MBps) [2024-10-15T13:55:13.309Z] Copying: 443/1024 [MB] (13 MBps) [2024-10-15T13:55:14.251Z] Copying: 454/1024 [MB] (10 MBps) [2024-10-15T13:55:15.196Z] Copying: 474920/1048576 [kB] (9980 kBps) [2024-10-15T13:55:16.214Z] Copying: 479/1024 [MB] (15 MBps) [2024-10-15T13:55:17.159Z] Copying: 491/1024 [MB] (12 MBps) [2024-10-15T13:55:18.108Z] Copying: 506/1024 [MB] (15 MBps) [2024-10-15T13:55:19.053Z] Copying: 516/1024 [MB] (10 MBps) [2024-10-15T13:55:19.999Z] Copying: 527/1024 [MB] (11 MBps) [2024-10-15T13:55:21.388Z] Copying: 550700/1048576 [kB] (10104 kBps) [2024-10-15T13:55:22.332Z] Copying: 560608/1048576 [kB] (9908 kBps) [2024-10-15T13:55:23.276Z] Copying: 570360/1048576 [kB] (9752 kBps) [2024-10-15T13:55:24.219Z] Copying: 570/1024 [MB] (13 MBps) [2024-10-15T13:55:25.163Z] Copying: 583/1024 [MB] (12 MBps) [2024-10-15T13:55:26.106Z] Copying: 593/1024 [MB] (10 MBps) [2024-10-15T13:55:27.051Z] Copying: 604/1024 [MB] (10 MBps) [2024-10-15T13:55:27.997Z] Copying: 614/1024 [MB] (10 MBps) [2024-10-15T13:55:29.382Z] Copying: 625/1024 [MB] (10 MBps) [2024-10-15T13:55:30.326Z] Copying: 650304/1048576 [kB] (10232 kBps) [2024-10-15T13:55:31.268Z] Copying: 645/1024 [MB] (10 MBps) [2024-10-15T13:55:32.212Z] Copying: 655/1024 [MB] (10 MBps) [2024-10-15T13:55:33.156Z] Copying: 665/1024 [MB] (10 MBps) [2024-10-15T13:55:34.134Z] Copying: 676/1024 [MB] (10 MBps) [2024-10-15T13:55:35.076Z] Copying: 686/1024 [MB] (10 MBps) [2024-10-15T13:55:36.019Z] Copying: 696/1024 [MB] (10 MBps) [2024-10-15T13:55:37.406Z] Copying: 723520/1048576 [kB] (10224 kBps) [2024-10-15T13:55:37.978Z] Copying: 717/1024 [MB] (10 MBps) [2024-10-15T13:55:39.359Z] Copying: 727/1024 [MB] (10 MBps) 
[2024-10-15T13:55:40.307Z] Copying: 737/1024 [MB] (10 MBps) [2024-10-15T13:55:41.245Z] Copying: 748/1024 [MB] (10 MBps) [2024-10-15T13:55:42.178Z] Copying: 758/1024 [MB] (10 MBps) [2024-10-15T13:55:43.112Z] Copying: 771/1024 [MB] (13 MBps) [2024-10-15T13:55:44.047Z] Copying: 783/1024 [MB] (12 MBps) [2024-10-15T13:55:44.981Z] Copying: 796/1024 [MB] (13 MBps) [2024-10-15T13:55:46.383Z] Copying: 810/1024 [MB] (13 MBps) [2024-10-15T13:55:47.317Z] Copying: 822/1024 [MB] (12 MBps) [2024-10-15T13:55:48.251Z] Copying: 835/1024 [MB] (12 MBps) [2024-10-15T13:55:49.185Z] Copying: 846/1024 [MB] (11 MBps) [2024-10-15T13:55:50.119Z] Copying: 858/1024 [MB] (11 MBps) [2024-10-15T13:55:51.055Z] Copying: 871/1024 [MB] (13 MBps) [2024-10-15T13:55:51.988Z] Copying: 885/1024 [MB] (13 MBps) [2024-10-15T13:55:53.363Z] Copying: 896/1024 [MB] (11 MBps) [2024-10-15T13:55:54.323Z] Copying: 907/1024 [MB] (11 MBps) [2024-10-15T13:55:55.256Z] Copying: 954/1024 [MB] (47 MBps) [2024-10-15T13:55:55.513Z] Copying: 1004/1024 [MB] (49 MBps) [2024-10-15T13:55:55.513Z] Copying: 1024/1024 [MB] (average 14 MBps)[2024-10-15 13:55:55.433994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.725 [2024-10-15 13:55:55.434058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:41.725 [2024-10-15 13:55:55.434073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:41.725 [2024-10-15 13:55:55.434081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.725 [2024-10-15 13:55:55.434105] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:41.725 [2024-10-15 13:55:55.436816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.725 [2024-10-15 13:55:55.436855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:41.725 [2024-10-15 13:55:55.436865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.696 ms 00:21:41.725 [2024-10-15 13:55:55.436873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.725 [2024-10-15 13:55:55.437094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.725 [2024-10-15 13:55:55.437103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:41.725 [2024-10-15 13:55:55.437112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.192 ms 00:21:41.725 [2024-10-15 13:55:55.437120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.725 [2024-10-15 13:55:55.441798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.725 [2024-10-15 13:55:55.441824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:41.725 [2024-10-15 13:55:55.441834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.663 ms 00:21:41.725 [2024-10-15 13:55:55.441844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.725 [2024-10-15 13:55:55.447979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.725 [2024-10-15 13:55:55.448011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:41.725 [2024-10-15 13:55:55.448021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.119 ms 00:21:41.725 [2024-10-15 13:55:55.448030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.725 [2024-10-15 13:55:55.474490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:21:41.725 [2024-10-15 13:55:55.474531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:41.725 [2024-10-15 13:55:55.474544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.395 ms 00:21:41.725 [2024-10-15 13:55:55.474551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.725 [2024-10-15 13:55:55.489865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.725 [2024-10-15 13:55:55.489908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:41.725 [2024-10-15 13:55:55.489921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.288 ms 00:21:41.725 [2024-10-15 13:55:55.489929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.725 [2024-10-15 13:55:55.490089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.725 [2024-10-15 13:55:55.490100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:41.725 [2024-10-15 13:55:55.490112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.118 ms 00:21:41.725 [2024-10-15 13:55:55.490119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.984 [2024-10-15 13:55:55.513670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.984 [2024-10-15 13:55:55.513710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:21:41.984 [2024-10-15 13:55:55.513722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.536 ms 00:21:41.984 [2024-10-15 13:55:55.513729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.984 [2024-10-15 13:55:55.536157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.984 [2024-10-15 13:55:55.536204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:21:41.984 [2024-10-15 13:55:55.536217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.406 ms 00:21:41.984 [2024-10-15 13:55:55.536237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.984 [2024-10-15 13:55:55.558633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.984 [2024-10-15 13:55:55.558681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:41.984 [2024-10-15 13:55:55.558691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.374 ms 00:21:41.984 [2024-10-15 13:55:55.558699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.984 [2024-10-15 13:55:55.580766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.984 [2024-10-15 13:55:55.580797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:41.984 [2024-10-15 13:55:55.580809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.023 ms 00:21:41.984 [2024-10-15 13:55:55.580816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.984 [2024-10-15 13:55:55.580836] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:41.984 [2024-10-15 13:55:55.580850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:21:41.984 [2024-10-15 13:55:55.580860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:41.984 [2024-10-15 13:55:55.580868] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:41.984 [2024-10-15 13:55:55.580876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:41.984 [2024-10-15 13:55:55.580885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:41.984 [2024-10-15 13:55:55.580893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:41.984 [2024-10-15 13:55:55.580900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:41.984 [2024-10-15 13:55:55.580908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:41.984 [2024-10-15 13:55:55.580916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:41.984 [2024-10-15 13:55:55.580923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:41.984 [2024-10-15 13:55:55.580932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:41.984 [2024-10-15 13:55:55.580939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:41.984 [2024-10-15 13:55:55.580947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:41.984 [2024-10-15 13:55:55.580955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:41.984 [2024-10-15 13:55:55.580962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:41.984 [2024-10-15 13:55:55.580969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:41.984 [2024-10-15 13:55:55.580977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:41.984 [2024-10-15 13:55:55.580984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:41.984 [2024-10-15 13:55:55.580991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:41.984 [2024-10-15 13:55:55.580999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:41.984 [2024-10-15 13:55:55.581006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:41.984 [2024-10-15 13:55:55.581014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:41.984 [2024-10-15 13:55:55.581022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:41.985 [2024-10-15 13:55:55.581029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:41.985 [2024-10-15 13:55:55.581036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:41.985 [2024-10-15 13:55:55.581043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:41.985 [2024-10-15 13:55:55.581053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:41.985 [2024-10-15 13:55:55.581060] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:41.985 [2024-10-15 13:55:55.581068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:41.985 [2024-10-15 13:55:55.581075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:41.985 [2024-10-15 13:55:55.581083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:41.985 [2024-10-15 13:55:55.581090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:41.985 [2024-10-15 13:55:55.581099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:41.985 [2024-10-15 13:55:55.581106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:41.985 [2024-10-15 13:55:55.581113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:41.985 [2024-10-15 13:55:55.581120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:41.985 [2024-10-15 13:55:55.581128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:41.985 [2024-10-15 13:55:55.581135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:41.985 [2024-10-15 13:55:55.581143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:41.985 [2024-10-15 13:55:55.581150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:41.985 [2024-10-15 13:55:55.581157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:41.985 [2024-10-15 13:55:55.581165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:41.985 [2024-10-15 13:55:55.581172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:41.985 [2024-10-15 13:55:55.581179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:41.985 [2024-10-15 13:55:55.581187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:41.985 [2024-10-15 13:55:55.581194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:41.985 [2024-10-15 13:55:55.581201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:41.985 [2024-10-15 13:55:55.581208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:41.985 [2024-10-15 13:55:55.581216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:41.985 [2024-10-15 13:55:55.581237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:41.985 [2024-10-15 13:55:55.581245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:41.985 [2024-10-15 13:55:55.581252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:41.985 
[2024-10-15 13:55:55.581260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:41.985 [2024-10-15 13:55:55.581268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:41.985 [2024-10-15 13:55:55.581275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:41.985 [2024-10-15 13:55:55.581284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:41.985 [2024-10-15 13:55:55.581292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:41.985 [2024-10-15 13:55:55.581299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:41.985 [2024-10-15 13:55:55.581307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:41.985 [2024-10-15 13:55:55.581315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:41.985 [2024-10-15 13:55:55.581322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:41.985 [2024-10-15 13:55:55.581330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:41.985 [2024-10-15 13:55:55.581337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:41.985 [2024-10-15 13:55:55.581344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:41.985 [2024-10-15 13:55:55.581352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:41.985 [2024-10-15 13:55:55.581359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:41.985 [2024-10-15 13:55:55.581366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:41.985 [2024-10-15 13:55:55.581373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:41.985 [2024-10-15 13:55:55.581381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:41.985 [2024-10-15 13:55:55.581388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:41.985 [2024-10-15 13:55:55.581395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:41.985 [2024-10-15 13:55:55.581402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:41.985 [2024-10-15 13:55:55.581409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:41.985 [2024-10-15 13:55:55.581416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:41.985 [2024-10-15 13:55:55.581423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:41.985 [2024-10-15 13:55:55.581431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:41.985 [2024-10-15 13:55:55.581438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 
state: free 00:21:41.985 [2024-10-15 13:55:55.581445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:41.985 [2024-10-15 13:55:55.581452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:41.985 [2024-10-15 13:55:55.581459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:41.985 [2024-10-15 13:55:55.581467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:41.985 [2024-10-15 13:55:55.581474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:41.985 [2024-10-15 13:55:55.581481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:41.985 [2024-10-15 13:55:55.581488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:41.985 [2024-10-15 13:55:55.581496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:41.985 [2024-10-15 13:55:55.581504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:41.985 [2024-10-15 13:55:55.581512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:41.985 [2024-10-15 13:55:55.581519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:41.985 [2024-10-15 13:55:55.581526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:41.985 [2024-10-15 13:55:55.581533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:41.985 [2024-10-15 13:55:55.581540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:41.985 [2024-10-15 13:55:55.581548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:41.985 [2024-10-15 13:55:55.581555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:41.985 [2024-10-15 13:55:55.581563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:41.985 [2024-10-15 13:55:55.581570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:41.985 [2024-10-15 13:55:55.581577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:41.985 [2024-10-15 13:55:55.581585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:41.985 [2024-10-15 13:55:55.581592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:41.985 [2024-10-15 13:55:55.581600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:41.985 [2024-10-15 13:55:55.581607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:41.985 [2024-10-15 13:55:55.581622] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:41.985 [2024-10-15 13:55:55.581634] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: e8526d0e-2f90-448e-896b-b14df4f43d8b 
00:21:41.985 [2024-10-15 13:55:55.581642] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:21:41.985 [2024-10-15 13:55:55.581651] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:21:41.985 [2024-10-15 13:55:55.581659] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:21:41.985 [2024-10-15 13:55:55.581666] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:21:41.985 [2024-10-15 13:55:55.581673] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:41.985 [2024-10-15 13:55:55.581680] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:41.985 [2024-10-15 13:55:55.581694] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:41.985 [2024-10-15 13:55:55.581700] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:41.985 [2024-10-15 13:55:55.581707] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:41.985 [2024-10-15 13:55:55.581714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.985 [2024-10-15 13:55:55.581721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:41.985 [2024-10-15 13:55:55.581730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.879 ms 00:21:41.985 [2024-10-15 13:55:55.581737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.985 [2024-10-15 13:55:55.594166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.985 [2024-10-15 13:55:55.594197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:41.985 [2024-10-15 13:55:55.594208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.413 ms 00:21:41.985 [2024-10-15 13:55:55.594216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.985 [2024-10-15 13:55:55.594577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.985 [2024-10-15 13:55:55.594591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:41.985 [2024-10-15 13:55:55.594599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.319 ms 00:21:41.986 [2024-10-15 13:55:55.594606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.986 [2024-10-15 13:55:55.626925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:41.986 [2024-10-15 13:55:55.626968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:41.986 [2024-10-15 13:55:55.626979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:41.986 [2024-10-15 13:55:55.626987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.986 [2024-10-15 13:55:55.627047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:41.986 [2024-10-15 13:55:55.627056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:41.986 [2024-10-15 13:55:55.627063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:41.986 [2024-10-15 13:55:55.627070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.986 [2024-10-15 13:55:55.627135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:41.986 [2024-10-15 13:55:55.627145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:41.986 [2024-10-15 13:55:55.627152] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:41.986 [2024-10-15 13:55:55.627159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.986 [2024-10-15 13:55:55.627174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:41.986 [2024-10-15 13:55:55.627182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:41.986 [2024-10-15 13:55:55.627189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:41.986 [2024-10-15 13:55:55.627197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.986 [2024-10-15 13:55:55.703326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:41.986 [2024-10-15 13:55:55.703370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:41.986 [2024-10-15 13:55:55.703381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:41.986 [2024-10-15 13:55:55.703388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.986 [2024-10-15 13:55:55.766023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:41.986 [2024-10-15 13:55:55.766066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:41.986 [2024-10-15 13:55:55.766078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:41.986 [2024-10-15 13:55:55.766085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.986 [2024-10-15 13:55:55.766154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:41.986 [2024-10-15 13:55:55.766166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:41.986 [2024-10-15 13:55:55.766174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:41.986 [2024-10-15 13:55:55.766181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.986 [2024-10-15 13:55:55.766213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:41.986 [2024-10-15 13:55:55.766236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:41.986 [2024-10-15 13:55:55.766244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:41.986 [2024-10-15 13:55:55.766251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.986 [2024-10-15 13:55:55.766333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:41.986 [2024-10-15 13:55:55.766344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:41.986 [2024-10-15 13:55:55.766353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:41.986 [2024-10-15 13:55:55.766359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.986 [2024-10-15 13:55:55.766388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:41.986 [2024-10-15 13:55:55.766397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:41.986 [2024-10-15 13:55:55.766404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:41.986 [2024-10-15 13:55:55.766411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.986 [2024-10-15 13:55:55.766442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:41.986 [2024-10-15 13:55:55.766449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Open cache bdev 00:21:41.986 [2024-10-15 13:55:55.766459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:41.986 [2024-10-15 13:55:55.766466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.986 [2024-10-15 13:55:55.766503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:41.986 [2024-10-15 13:55:55.766512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:41.986 [2024-10-15 13:55:55.766519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:41.986 [2024-10-15 13:55:55.766527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.986 [2024-10-15 13:55:55.766632] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 332.612 ms, result 0 00:21:42.920 00:21:42.920 00:21:42.920 13:55:56 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:21:44.821 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:21:44.821 13:55:58 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:21:44.821 [2024-10-15 13:55:58.583159] Starting SPDK v25.01-pre git sha1 5a8c76d99 / DPDK 24.03.0 initialization... 00:21:44.821 [2024-10-15 13:55:58.583281] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76067 ] 00:21:45.080 [2024-10-15 13:55:58.729106] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:45.080 [2024-10-15 13:55:58.828643] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:45.340 [2024-10-15 13:55:59.080471] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:45.340 [2024-10-15 13:55:59.080532] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:45.600 [2024-10-15 13:55:59.233428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:45.600 [2024-10-15 13:55:59.233486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:45.600 [2024-10-15 13:55:59.233499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:45.600 [2024-10-15 13:55:59.233512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.600 [2024-10-15 13:55:59.233557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:45.600 [2024-10-15 13:55:59.233568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:45.600 [2024-10-15 13:55:59.233576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:21:45.600 [2024-10-15 13:55:59.233585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.600 [2024-10-15 13:55:59.233605] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:45.600 [2024-10-15 13:55:59.234306] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:45.600 [2024-10-15 13:55:59.234333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:45.600 
[2024-10-15 13:55:59.234343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:45.600 [2024-10-15 13:55:59.234351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.733 ms 00:21:45.600 [2024-10-15 13:55:59.234359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.600 [2024-10-15 13:55:59.235425] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:45.600 [2024-10-15 13:55:59.247682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:45.600 [2024-10-15 13:55:59.247717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:45.600 [2024-10-15 13:55:59.247729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.258 ms 00:21:45.600 [2024-10-15 13:55:59.247738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.600 [2024-10-15 13:55:59.247799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:45.600 [2024-10-15 13:55:59.247808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:45.600 [2024-10-15 13:55:59.247819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:21:45.600 [2024-10-15 13:55:59.247826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.600 [2024-10-15 13:55:59.252866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:45.600 [2024-10-15 13:55:59.252906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:45.600 [2024-10-15 13:55:59.252917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.978 ms 00:21:45.600 [2024-10-15 13:55:59.252925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.600 [2024-10-15 13:55:59.253030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:45.600 [2024-10-15 13:55:59.253039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:45.600 [2024-10-15 13:55:59.253051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:21:45.600 [2024-10-15 13:55:59.253062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.600 [2024-10-15 13:55:59.253111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:45.600 [2024-10-15 13:55:59.253120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:45.600 [2024-10-15 13:55:59.253127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:21:45.600 [2024-10-15 13:55:59.253135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.600 [2024-10-15 13:55:59.253157] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:45.600 [2024-10-15 13:55:59.256475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:45.600 [2024-10-15 13:55:59.256504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:45.600 [2024-10-15 13:55:59.256513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.323 ms 00:21:45.600 [2024-10-15 13:55:59.256520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.600 [2024-10-15 13:55:59.256551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:45.600 [2024-10-15 13:55:59.256559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:45.600 
[2024-10-15 13:55:59.256567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:21:45.600 [2024-10-15 13:55:59.256574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.600 [2024-10-15 13:55:59.256593] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:45.600 [2024-10-15 13:55:59.256611] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:21:45.600 [2024-10-15 13:55:59.256644] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:45.600 [2024-10-15 13:55:59.256661] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:21:45.600 [2024-10-15 13:55:59.256763] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:45.600 [2024-10-15 13:55:59.256780] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:45.600 [2024-10-15 13:55:59.256790] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:45.600 [2024-10-15 13:55:59.256800] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:45.600 [2024-10-15 13:55:59.256809] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:45.600 [2024-10-15 13:55:59.256817] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:45.600 [2024-10-15 13:55:59.256824] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:45.600 [2024-10-15 13:55:59.256831] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:45.600 [2024-10-15 13:55:59.256839] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:45.601 [2024-10-15 13:55:59.256846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:45.601 [2024-10-15 13:55:59.256855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:45.601 [2024-10-15 13:55:59.256863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.254 ms 00:21:45.601 [2024-10-15 13:55:59.256870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.601 [2024-10-15 13:55:59.256951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:45.601 [2024-10-15 13:55:59.256959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:45.601 [2024-10-15 13:55:59.256967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:21:45.601 [2024-10-15 13:55:59.256974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.601 [2024-10-15 13:55:59.257091] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:45.601 [2024-10-15 13:55:59.257108] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:45.601 [2024-10-15 13:55:59.257119] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:45.601 [2024-10-15 13:55:59.257127] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:45.601 [2024-10-15 13:55:59.257134] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:45.601 [2024-10-15 13:55:59.257141] ftl_layout.c: 131:dump_region: 
*NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:45.601 [2024-10-15 13:55:59.257148] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:45.601 [2024-10-15 13:55:59.257154] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:45.601 [2024-10-15 13:55:59.257161] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:45.601 [2024-10-15 13:55:59.257167] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:45.601 [2024-10-15 13:55:59.257174] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:45.601 [2024-10-15 13:55:59.257181] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:45.601 [2024-10-15 13:55:59.257188] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:45.601 [2024-10-15 13:55:59.257194] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:45.601 [2024-10-15 13:55:59.257201] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:21:45.601 [2024-10-15 13:55:59.257213] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:45.601 [2024-10-15 13:55:59.257230] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:45.601 [2024-10-15 13:55:59.257237] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:21:45.601 [2024-10-15 13:55:59.257244] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:45.601 [2024-10-15 13:55:59.257252] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:45.601 [2024-10-15 13:55:59.257258] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:45.601 [2024-10-15 13:55:59.257265] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:45.601 [2024-10-15 13:55:59.257271] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:45.601 [2024-10-15 13:55:59.257278] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:45.601 [2024-10-15 13:55:59.257284] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:45.601 [2024-10-15 13:55:59.257290] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:45.601 [2024-10-15 13:55:59.257297] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:45.601 [2024-10-15 13:55:59.257303] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:45.601 [2024-10-15 13:55:59.257310] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:45.601 [2024-10-15 13:55:59.257316] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:21:45.601 [2024-10-15 13:55:59.257322] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:45.601 [2024-10-15 13:55:59.257329] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:45.601 [2024-10-15 13:55:59.257336] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:21:45.601 [2024-10-15 13:55:59.257342] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:45.601 [2024-10-15 13:55:59.257348] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:45.601 [2024-10-15 13:55:59.257355] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:21:45.601 [2024-10-15 13:55:59.257361] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:45.601 [2024-10-15 
13:55:59.257368] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:45.601 [2024-10-15 13:55:59.257374] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:21:45.601 [2024-10-15 13:55:59.257380] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:45.601 [2024-10-15 13:55:59.257387] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:45.601 [2024-10-15 13:55:59.257393] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:21:45.601 [2024-10-15 13:55:59.257399] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:45.601 [2024-10-15 13:55:59.257405] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:45.601 [2024-10-15 13:55:59.257413] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:45.601 [2024-10-15 13:55:59.257420] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:45.601 [2024-10-15 13:55:59.257427] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:45.601 [2024-10-15 13:55:59.257434] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:45.601 [2024-10-15 13:55:59.257440] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:45.601 [2024-10-15 13:55:59.257447] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:45.601 [2024-10-15 13:55:59.257455] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:45.601 [2024-10-15 13:55:59.257461] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:45.601 [2024-10-15 13:55:59.257467] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:45.601 [2024-10-15 13:55:59.257476] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:45.601 [2024-10-15 13:55:59.257485] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:45.601 [2024-10-15 13:55:59.257493] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:45.601 [2024-10-15 13:55:59.257500] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:21:45.601 [2024-10-15 13:55:59.257506] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:21:45.601 [2024-10-15 13:55:59.257513] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:21:45.601 [2024-10-15 13:55:59.257520] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:21:45.601 [2024-10-15 13:55:59.257527] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:21:45.601 [2024-10-15 13:55:59.257534] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:21:45.601 [2024-10-15 13:55:59.257540] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:21:45.601 [2024-10-15 13:55:59.257547] 
upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:21:45.601 [2024-10-15 13:55:59.257554] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:21:45.601 [2024-10-15 13:55:59.257561] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:21:45.601 [2024-10-15 13:55:59.257568] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:21:45.601 [2024-10-15 13:55:59.257575] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:21:45.601 [2024-10-15 13:55:59.257582] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:21:45.601 [2024-10-15 13:55:59.257588] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:45.601 [2024-10-15 13:55:59.257596] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:45.601 [2024-10-15 13:55:59.257606] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:45.601 [2024-10-15 13:55:59.257614] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:45.601 [2024-10-15 13:55:59.257621] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:45.601 [2024-10-15 13:55:59.257628] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:45.601 [2024-10-15 13:55:59.257635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:45.601 [2024-10-15 13:55:59.257642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:45.601 [2024-10-15 13:55:59.257649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.612 ms 00:21:45.601 [2024-10-15 13:55:59.257656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.601 [2024-10-15 13:55:59.283108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:45.601 [2024-10-15 13:55:59.283152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:45.601 [2024-10-15 13:55:59.283163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.410 ms 00:21:45.601 [2024-10-15 13:55:59.283170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.601 [2024-10-15 13:55:59.283267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:45.601 [2024-10-15 13:55:59.283280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:45.601 [2024-10-15 13:55:59.283288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:21:45.601 [2024-10-15 13:55:59.283295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.601 [2024-10-15 13:55:59.327199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:45.601 [2024-10-15 13:55:59.327254] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:45.601 [2024-10-15 13:55:59.327267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.849 ms 00:21:45.601 [2024-10-15 13:55:59.327275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.601 [2024-10-15 13:55:59.327329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:45.601 [2024-10-15 13:55:59.327339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:45.601 [2024-10-15 13:55:59.327348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:45.601 [2024-10-15 13:55:59.327355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.601 [2024-10-15 13:55:59.327733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:45.601 [2024-10-15 13:55:59.327750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:45.601 [2024-10-15 13:55:59.327759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.302 ms 00:21:45.601 [2024-10-15 13:55:59.327766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.602 [2024-10-15 13:55:59.327894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:45.602 [2024-10-15 13:55:59.327913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:45.602 [2024-10-15 13:55:59.327922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.106 ms 00:21:45.602 [2024-10-15 13:55:59.327929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.602 [2024-10-15 13:55:59.340777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:45.602 [2024-10-15 13:55:59.340809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:45.602 [2024-10-15 13:55:59.340819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.828 ms 00:21:45.602 [2024-10-15 13:55:59.340830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.602 [2024-10-15 13:55:59.358582] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:21:45.602 [2024-10-15 13:55:59.358646] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:45.602 [2024-10-15 13:55:59.358665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:45.602 [2024-10-15 13:55:59.358674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:45.602 [2024-10-15 13:55:59.358687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.735 ms 00:21:45.602 [2024-10-15 13:55:59.358695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.602 [2024-10-15 13:55:59.380292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:45.602 [2024-10-15 13:55:59.380336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:45.602 [2024-10-15 13:55:59.380354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.514 ms 00:21:45.602 [2024-10-15 13:55:59.380361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.861 [2024-10-15 13:55:59.389604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:45.861 [2024-10-15 13:55:59.389638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band 
info metadata 00:21:45.861 [2024-10-15 13:55:59.389646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.200 ms 00:21:45.861 [2024-10-15 13:55:59.389653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.861 [2024-10-15 13:55:59.398203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:45.861 [2024-10-15 13:55:59.398239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:45.861 [2024-10-15 13:55:59.398247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.517 ms 00:21:45.861 [2024-10-15 13:55:59.398253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.861 [2024-10-15 13:55:59.398747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:45.861 [2024-10-15 13:55:59.398768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:45.861 [2024-10-15 13:55:59.398777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.429 ms 00:21:45.861 [2024-10-15 13:55:59.398783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.861 [2024-10-15 13:55:59.447838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:45.861 [2024-10-15 13:55:59.447887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:45.861 [2024-10-15 13:55:59.447900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.036 ms 00:21:45.861 [2024-10-15 13:55:59.447912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.861 [2024-10-15 13:55:59.456320] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:21:45.861 [2024-10-15 13:55:59.458739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:45.861 [2024-10-15 13:55:59.458767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:45.861 [2024-10-15 13:55:59.458779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.783 ms 00:21:45.861 [2024-10-15 13:55:59.458787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.861 [2024-10-15 13:55:59.458881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:45.861 [2024-10-15 13:55:59.458891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:45.861 [2024-10-15 13:55:59.458898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:21:45.861 [2024-10-15 13:55:59.458905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.861 [2024-10-15 13:55:59.458971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:45.861 [2024-10-15 13:55:59.458980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:45.861 [2024-10-15 13:55:59.458988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:21:45.861 [2024-10-15 13:55:59.458994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.861 [2024-10-15 13:55:59.459010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:45.861 [2024-10-15 13:55:59.459018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:45.861 [2024-10-15 13:55:59.459025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:45.861 [2024-10-15 13:55:59.459032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.861 
[2024-10-15 13:55:59.459060] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
00:21:45.861 [2024-10-15 13:55:59.459070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:45.861 [2024-10-15 13:55:59.459079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup
00:21:45.861 [2024-10-15 13:55:59.459086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms
00:21:45.861 [2024-10-15 13:55:59.459092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:45.861 [2024-10-15 13:55:59.477591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:45.861 [2024-10-15 13:55:59.477622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state
00:21:45.861 [2024-10-15 13:55:59.477633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.483 ms
00:21:45.861 [2024-10-15 13:55:59.477640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:45.861 [2024-10-15 13:55:59.477708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:45.861 [2024-10-15 13:55:59.477717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:21:45.861 [2024-10-15 13:55:59.477724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms
00:21:45.861 [2024-10-15 13:55:59.477731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:45.861 [2024-10-15 13:55:59.478940] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 245.119 ms, result 0
00:21:46.796 [2024-10-15T13:56:01.518Z] Copying: 44/1024 [MB] (44 MBps) ... [2024-10-15T13:56:24.500Z] Copying: 1024/1024 [MB] (average 41 MBps) (25 progress updates at roughly one-second intervals; per-interval rates ranged from 24 to 52 MBps)
[2024-10-15 13:56:24.220123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.713 [2024-10-15 13:56:24.220345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:10.713 [2024-10-15 13:56:24.220367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:22:10.713 [2024-10-15 13:56:24.220375]
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.713 [2024-10-15 13:56:24.221720] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:10.713 [2024-10-15 13:56:24.224695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.713 [2024-10-15 13:56:24.224730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:10.713 [2024-10-15 13:56:24.224741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.937 ms 00:22:10.713 [2024-10-15 13:56:24.224749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.713 [2024-10-15 13:56:24.235446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.713 [2024-10-15 13:56:24.235480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:10.713 [2024-10-15 13:56:24.235491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.392 ms 00:22:10.713 [2024-10-15 13:56:24.235498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.713 [2024-10-15 13:56:24.253269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.713 [2024-10-15 13:56:24.253306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:10.713 [2024-10-15 13:56:24.253316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.756 ms 00:22:10.713 [2024-10-15 13:56:24.253324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.713 [2024-10-15 13:56:24.259391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.713 [2024-10-15 13:56:24.259419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:10.713 [2024-10-15 13:56:24.259430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.041 ms 00:22:10.713 [2024-10-15 13:56:24.259439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.713 [2024-10-15 13:56:24.282605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.713 [2024-10-15 13:56:24.282642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:10.713 [2024-10-15 13:56:24.282654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.126 ms 00:22:10.713 [2024-10-15 13:56:24.282661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.713 [2024-10-15 13:56:24.296537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.713 [2024-10-15 13:56:24.296571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:10.713 [2024-10-15 13:56:24.296586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.844 ms 00:22:10.713 [2024-10-15 13:56:24.296594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.713 [2024-10-15 13:56:24.346190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.713 [2024-10-15 13:56:24.346240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:10.713 [2024-10-15 13:56:24.346251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.560 ms 00:22:10.713 [2024-10-15 13:56:24.346259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.713 [2024-10-15 13:56:24.369545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.713 [2024-10-15 13:56:24.369582] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:22:10.713 [2024-10-15 13:56:24.369592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.273 ms 00:22:10.713 [2024-10-15 13:56:24.369599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:10.713 [2024-10-15 13:56:24.392067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:10.713 [2024-10-15 13:56:24.392107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata
00:22:10.713 [2024-10-15 13:56:24.392117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.437 ms
00:22:10.713 [2024-10-15 13:56:24.392124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:10.713 [2024-10-15 13:56:24.414199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:10.713 [2024-10-15 13:56:24.414239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock
00:22:10.713 [2024-10-15 13:56:24.414249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.045 ms
00:22:10.713 [2024-10-15 13:56:24.414256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:10.713 [2024-10-15 13:56:24.436481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:10.713 [2024-10-15 13:56:24.436513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state
00:22:10.713 [2024-10-15 13:56:24.436523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.172 ms
00:22:10.713 [2024-10-15 13:56:24.436530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:10.713 [2024-10-15 13:56:24.436559] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:22:10.713 [2024-10-15 13:56:24.436572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 119552 / 261120 wr_cnt: 1 state: open
00:22:10.713 [2024-10-15 13:56:24.436581 - 13:56:24.437318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 2-100: 0 / 261120 wr_cnt: 0 state: free (all 99 per-band entries identical)
00:22:10.714 [2024-10-15 13:56:24.437333] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:22:10.714 [2024-10-15 13:56:24.437341] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: e8526d0e-2f90-448e-896b-b14df4f43d8b
00:22:10.714 [2024-10-15 13:56:24.437349] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 119552
00:22:10.714 [2024-10-15 13:56:24.437356] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 120512
00:22:10.714 [2024-10-15 13:56:24.437363] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 119552
00:22:10.714 [2024-10-15 13:56:24.437370] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0080
00:22:10.714 [2024-10-15 13:56:24.437377] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:22:10.714 [2024-10-15 13:56:24.437384] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:22:10.714 [2024-10-15 13:56:24.437398] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:22:10.714 [2024-10-15 13:56:24.437404] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:22:10.714 [2024-10-15 13:56:24.437410] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:22:10.714 [2024-10-15 13:56:24.437418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:10.714 [2024-10-15 13:56:24.437428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:22:10.714 [2024-10-15 13:56:24.437436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.860 ms
00:22:10.714 [2024-10-15 13:56:24.437443] mngt/ftl_mngt.c: 431:trace_step:
*NOTICE*: [FTL][ftl0] status: 0 00:22:10.714 [2024-10-15 13:56:24.449801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.714 [2024-10-15 13:56:24.449832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:10.714 [2024-10-15 13:56:24.449842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.343 ms 00:22:10.714 [2024-10-15 13:56:24.449849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.714 [2024-10-15 13:56:24.450187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.714 [2024-10-15 13:56:24.450201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:10.714 [2024-10-15 13:56:24.450210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.318 ms 00:22:10.714 [2024-10-15 13:56:24.450217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.714 [2024-10-15 13:56:24.482664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:10.714 [2024-10-15 13:56:24.482701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:10.714 [2024-10-15 13:56:24.482710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:10.714 [2024-10-15 13:56:24.482722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.714 [2024-10-15 13:56:24.482777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:10.714 [2024-10-15 13:56:24.482785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:10.714 [2024-10-15 13:56:24.482792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:10.714 [2024-10-15 13:56:24.482799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.714 [2024-10-15 13:56:24.482853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:10.714 [2024-10-15 13:56:24.482862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:10.714 [2024-10-15 13:56:24.482869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:10.714 [2024-10-15 13:56:24.482877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.714 [2024-10-15 13:56:24.482893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:10.714 [2024-10-15 13:56:24.482901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:10.714 [2024-10-15 13:56:24.482908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:10.714 [2024-10-15 13:56:24.482916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.973 [2024-10-15 13:56:24.559386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:10.973 [2024-10-15 13:56:24.559429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:10.973 [2024-10-15 13:56:24.559440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:10.973 [2024-10-15 13:56:24.559451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.973 [2024-10-15 13:56:24.622445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:10.973 [2024-10-15 13:56:24.622488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:10.973 [2024-10-15 13:56:24.622499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
00:22:10.973 [2024-10-15 13:56:24.622507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.973 [2024-10-15 13:56:24.622580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:10.973 [2024-10-15 13:56:24.622589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:10.973 [2024-10-15 13:56:24.622598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:10.973 [2024-10-15 13:56:24.622605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.973 [2024-10-15 13:56:24.622636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:10.973 [2024-10-15 13:56:24.622648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:10.973 [2024-10-15 13:56:24.622656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:10.973 [2024-10-15 13:56:24.622664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.973 [2024-10-15 13:56:24.622745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:10.973 [2024-10-15 13:56:24.622754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:10.973 [2024-10-15 13:56:24.622762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:10.973 [2024-10-15 13:56:24.622769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.973 [2024-10-15 13:56:24.622797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:10.973 [2024-10-15 13:56:24.622809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:10.973 [2024-10-15 13:56:24.622816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:10.973 [2024-10-15 13:56:24.622824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.973 [2024-10-15 13:56:24.622856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:10.973 [2024-10-15 13:56:24.622864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:10.973 [2024-10-15 13:56:24.622872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:10.973 [2024-10-15 13:56:24.622879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.973 [2024-10-15 13:56:24.622917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:10.973 [2024-10-15 13:56:24.622930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:10.973 [2024-10-15 13:56:24.622938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:10.973 [2024-10-15 13:56:24.622945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.973 [2024-10-15 13:56:24.623049] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 403.632 ms, result 0 00:22:12.874 00:22:12.874 00:22:12.874 13:56:26 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:22:12.874 [2024-10-15 13:56:26.456189] Starting SPDK v25.01-pre git sha1 5a8c76d99 / DPDK 24.03.0 initialization... 
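
The spdk_dd command line above is the read-back half of the restore test: restore.sh@80 brings the bdevs described in ftl.json back up and copies data out of ftl0 into a regular file, which the md5sum -c step at the end of this log then compares against the pre-recorded testfile.md5. A minimal sketch of the same pattern, using the exact paths and flags visible in this log (treating --skip and --count as dd-style block counts is our assumption, not something the log states):

    #!/usr/bin/env bash
    # Read-back step of the FTL restore test, reconstructed from the log above.
    SPDK=/home/vagrant/spdk_repo/spdk
    args=(
      --ib=ftl0                                # input bdev: the restored FTL device
      --of="$SPDK/test/ftl/testfile"           # regular file receiving the data
      --json="$SPDK/test/ftl/config/ftl.json"  # bdev config so spdk_dd can attach ftl0
      --skip=131072                            # offset into the input (assumed: in blocks)
      --count=262144                           # amount to copy (assumed: in blocks)
    )
    "$SPDK/build/bin/spdk_dd" "${args[@]}"
    # The later 'md5sum -c testfile.md5' is what turns this copy into a pass/fail result.
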
00:22:12.874 [2024-10-15 13:56:26.456336] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76344 ] 00:22:12.874 [2024-10-15 13:56:26.606082] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:13.132 [2024-10-15 13:56:26.702200] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:13.391 [2024-10-15 13:56:26.956654] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:13.391 [2024-10-15 13:56:26.956719] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:13.391 [2024-10-15 13:56:27.110460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.391 [2024-10-15 13:56:27.110517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:13.391 [2024-10-15 13:56:27.110531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:13.391 [2024-10-15 13:56:27.110544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.391 [2024-10-15 13:56:27.110588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.391 [2024-10-15 13:56:27.110598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:13.391 [2024-10-15 13:56:27.110607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:22:13.391 [2024-10-15 13:56:27.110616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.391 [2024-10-15 13:56:27.110635] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:13.391 [2024-10-15 13:56:27.111353] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:13.391 [2024-10-15 13:56:27.111381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.391 [2024-10-15 13:56:27.111392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:13.391 [2024-10-15 13:56:27.111400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.750 ms 00:22:13.391 [2024-10-15 13:56:27.111408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.391 [2024-10-15 13:56:27.112530] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:22:13.391 [2024-10-15 13:56:27.124779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.391 [2024-10-15 13:56:27.124820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:22:13.391 [2024-10-15 13:56:27.124834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.250 ms 00:22:13.391 [2024-10-15 13:56:27.124842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.391 [2024-10-15 13:56:27.124905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.391 [2024-10-15 13:56:27.124914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:22:13.391 [2024-10-15 13:56:27.124926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:22:13.391 [2024-10-15 13:56:27.124934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.391 [2024-10-15 13:56:27.130251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:22:13.391 [2024-10-15 13:56:27.130280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:13.391 [2024-10-15 13:56:27.130290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.252 ms 00:22:13.391 [2024-10-15 13:56:27.130298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.391 [2024-10-15 13:56:27.130370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.391 [2024-10-15 13:56:27.130379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:13.391 [2024-10-15 13:56:27.130386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:22:13.391 [2024-10-15 13:56:27.130394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.391 [2024-10-15 13:56:27.130435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.391 [2024-10-15 13:56:27.130445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:13.391 [2024-10-15 13:56:27.130453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:22:13.391 [2024-10-15 13:56:27.130460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.391 [2024-10-15 13:56:27.130482] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:13.391 [2024-10-15 13:56:27.133895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.391 [2024-10-15 13:56:27.133926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:13.391 [2024-10-15 13:56:27.133935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.418 ms 00:22:13.391 [2024-10-15 13:56:27.133942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.391 [2024-10-15 13:56:27.133974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.391 [2024-10-15 13:56:27.133982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:13.391 [2024-10-15 13:56:27.133990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:22:13.391 [2024-10-15 13:56:27.133998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.391 [2024-10-15 13:56:27.134017] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:22:13.391 [2024-10-15 13:56:27.134034] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:22:13.391 [2024-10-15 13:56:27.134068] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:22:13.391 [2024-10-15 13:56:27.134084] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:22:13.391 [2024-10-15 13:56:27.134188] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:13.391 [2024-10-15 13:56:27.134203] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:13.391 [2024-10-15 13:56:27.134214] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:22:13.391 [2024-10-15 13:56:27.134246] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:13.391 [2024-10-15 13:56:27.134255] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:13.391 [2024-10-15 13:56:27.134264] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:22:13.391 [2024-10-15 13:56:27.134272] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:13.391 [2024-10-15 13:56:27.134279] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:13.391 [2024-10-15 13:56:27.134286] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:13.391 [2024-10-15 13:56:27.134294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.391 [2024-10-15 13:56:27.134304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:13.391 [2024-10-15 13:56:27.134311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.279 ms 00:22:13.391 [2024-10-15 13:56:27.134318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.391 [2024-10-15 13:56:27.134400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.391 [2024-10-15 13:56:27.134408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:13.391 [2024-10-15 13:56:27.134415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:22:13.391 [2024-10-15 13:56:27.134423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.391 [2024-10-15 13:56:27.134539] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:13.391 [2024-10-15 13:56:27.134557] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:13.391 [2024-10-15 13:56:27.134568] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:13.391 [2024-10-15 13:56:27.134576] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:13.391 [2024-10-15 13:56:27.134584] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:13.391 [2024-10-15 13:56:27.134591] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:13.391 [2024-10-15 13:56:27.134598] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:22:13.391 [2024-10-15 13:56:27.134605] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:13.391 [2024-10-15 13:56:27.134611] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:22:13.391 [2024-10-15 13:56:27.134618] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:13.391 [2024-10-15 13:56:27.134626] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:13.391 [2024-10-15 13:56:27.134633] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:22:13.391 [2024-10-15 13:56:27.134639] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:13.391 [2024-10-15 13:56:27.134646] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:13.391 [2024-10-15 13:56:27.134652] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:22:13.391 [2024-10-15 13:56:27.134666] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:13.391 [2024-10-15 13:56:27.134673] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:13.391 [2024-10-15 13:56:27.134680] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:22:13.391 [2024-10-15 13:56:27.134686] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:13.391 [2024-10-15 13:56:27.134693] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:13.391 [2024-10-15 13:56:27.134700] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:22:13.391 [2024-10-15 13:56:27.134707] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:13.391 [2024-10-15 13:56:27.134713] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:13.391 [2024-10-15 13:56:27.134719] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:22:13.391 [2024-10-15 13:56:27.134726] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:13.391 [2024-10-15 13:56:27.134732] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:13.391 [2024-10-15 13:56:27.134739] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:22:13.391 [2024-10-15 13:56:27.134745] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:13.391 [2024-10-15 13:56:27.134751] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:13.391 [2024-10-15 13:56:27.134757] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:22:13.391 [2024-10-15 13:56:27.134764] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:13.391 [2024-10-15 13:56:27.134770] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:13.391 [2024-10-15 13:56:27.134777] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:22:13.391 [2024-10-15 13:56:27.134783] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:13.391 [2024-10-15 13:56:27.134790] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:13.391 [2024-10-15 13:56:27.134796] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:22:13.391 [2024-10-15 13:56:27.134802] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:13.391 [2024-10-15 13:56:27.134808] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:13.391 [2024-10-15 13:56:27.134815] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:22:13.391 [2024-10-15 13:56:27.134821] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:13.391 [2024-10-15 13:56:27.134827] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:13.391 [2024-10-15 13:56:27.134834] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:22:13.392 [2024-10-15 13:56:27.134840] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:13.392 [2024-10-15 13:56:27.134847] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:13.392 [2024-10-15 13:56:27.134854] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:13.392 [2024-10-15 13:56:27.134861] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:13.392 [2024-10-15 13:56:27.134868] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:13.392 [2024-10-15 13:56:27.134877] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:13.392 [2024-10-15 13:56:27.134885] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:13.392 [2024-10-15 13:56:27.134893] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:13.392 
[2024-10-15 13:56:27.134899] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:13.392 [2024-10-15 13:56:27.134906] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:13.392 [2024-10-15 13:56:27.134913] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:13.392 [2024-10-15 13:56:27.134921] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:13.392 [2024-10-15 13:56:27.134931] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:13.392 [2024-10-15 13:56:27.134939] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:22:13.392 [2024-10-15 13:56:27.134947] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:22:13.392 [2024-10-15 13:56:27.134954] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:22:13.392 [2024-10-15 13:56:27.134961] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:22:13.392 [2024-10-15 13:56:27.134968] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:22:13.392 [2024-10-15 13:56:27.134975] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:22:13.392 [2024-10-15 13:56:27.134982] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:22:13.392 [2024-10-15 13:56:27.134988] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:22:13.392 [2024-10-15 13:56:27.134995] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:22:13.392 [2024-10-15 13:56:27.135002] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:22:13.392 [2024-10-15 13:56:27.135009] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:22:13.392 [2024-10-15 13:56:27.135016] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:22:13.392 [2024-10-15 13:56:27.135023] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:22:13.392 [2024-10-15 13:56:27.135030] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:22:13.392 [2024-10-15 13:56:27.135037] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:13.392 [2024-10-15 13:56:27.135045] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:13.392 [2024-10-15 13:56:27.135055] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:22:13.392 [2024-10-15 13:56:27.135062] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:13.392 [2024-10-15 13:56:27.135069] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:13.392 [2024-10-15 13:56:27.135076] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:13.392 [2024-10-15 13:56:27.135083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.392 [2024-10-15 13:56:27.135090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:13.392 [2024-10-15 13:56:27.135098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.612 ms 00:22:13.392 [2024-10-15 13:56:27.135105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.392 [2024-10-15 13:56:27.161169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.392 [2024-10-15 13:56:27.161212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:13.392 [2024-10-15 13:56:27.161237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.015 ms 00:22:13.392 [2024-10-15 13:56:27.161245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.392 [2024-10-15 13:56:27.161330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.392 [2024-10-15 13:56:27.161342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:13.392 [2024-10-15 13:56:27.161350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:22:13.392 [2024-10-15 13:56:27.161357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.651 [2024-10-15 13:56:27.201995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.651 [2024-10-15 13:56:27.202046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:13.651 [2024-10-15 13:56:27.202059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.583 ms 00:22:13.651 [2024-10-15 13:56:27.202067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.651 [2024-10-15 13:56:27.202117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.651 [2024-10-15 13:56:27.202127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:13.651 [2024-10-15 13:56:27.202136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:22:13.651 [2024-10-15 13:56:27.202143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.651 [2024-10-15 13:56:27.202523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.651 [2024-10-15 13:56:27.202547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:13.651 [2024-10-15 13:56:27.202557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.319 ms 00:22:13.651 [2024-10-15 13:56:27.202565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.651 [2024-10-15 13:56:27.202690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.651 [2024-10-15 13:56:27.202705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:13.651 [2024-10-15 13:56:27.202713] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.107 ms 00:22:13.651 [2024-10-15 13:56:27.202721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.651 [2024-10-15 13:56:27.215817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.651 [2024-10-15 13:56:27.215854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:13.651 [2024-10-15 13:56:27.215864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.056 ms 00:22:13.651 [2024-10-15 13:56:27.215874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.651 [2024-10-15 13:56:27.228147] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:22:13.651 [2024-10-15 13:56:27.228185] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:22:13.651 [2024-10-15 13:56:27.228198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.651 [2024-10-15 13:56:27.228206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:22:13.651 [2024-10-15 13:56:27.228215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.226 ms 00:22:13.651 [2024-10-15 13:56:27.228231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.651 [2024-10-15 13:56:27.252017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.651 [2024-10-15 13:56:27.252062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:22:13.651 [2024-10-15 13:56:27.252079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.748 ms 00:22:13.651 [2024-10-15 13:56:27.252086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.651 [2024-10-15 13:56:27.263739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.651 [2024-10-15 13:56:27.263781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:22:13.651 [2024-10-15 13:56:27.263791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.619 ms 00:22:13.651 [2024-10-15 13:56:27.263798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.651 [2024-10-15 13:56:27.274693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.651 [2024-10-15 13:56:27.274727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:22:13.651 [2024-10-15 13:56:27.274737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.862 ms 00:22:13.651 [2024-10-15 13:56:27.274744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.651 [2024-10-15 13:56:27.275354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.651 [2024-10-15 13:56:27.275381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:13.651 [2024-10-15 13:56:27.275390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.529 ms 00:22:13.651 [2024-10-15 13:56:27.275398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.651 [2024-10-15 13:56:27.329995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.651 [2024-10-15 13:56:27.330050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:22:13.651 [2024-10-15 13:56:27.330063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 54.576 ms 00:22:13.651 [2024-10-15 13:56:27.330075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.651 [2024-10-15 13:56:27.340413] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:22:13.651 [2024-10-15 13:56:27.343040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.651 [2024-10-15 13:56:27.343071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:13.651 [2024-10-15 13:56:27.343083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.907 ms 00:22:13.651 [2024-10-15 13:56:27.343091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.651 [2024-10-15 13:56:27.343190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.652 [2024-10-15 13:56:27.343207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:22:13.652 [2024-10-15 13:56:27.343216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:22:13.652 [2024-10-15 13:56:27.343235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.652 [2024-10-15 13:56:27.344636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.652 [2024-10-15 13:56:27.344670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:13.652 [2024-10-15 13:56:27.344679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.360 ms 00:22:13.652 [2024-10-15 13:56:27.344687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.652 [2024-10-15 13:56:27.344712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.652 [2024-10-15 13:56:27.344721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:13.652 [2024-10-15 13:56:27.344729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:13.652 [2024-10-15 13:56:27.344736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.652 [2024-10-15 13:56:27.344768] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:22:13.652 [2024-10-15 13:56:27.344778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.652 [2024-10-15 13:56:27.344789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:22:13.652 [2024-10-15 13:56:27.344797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:22:13.652 [2024-10-15 13:56:27.344804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.652 [2024-10-15 13:56:27.368020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.652 [2024-10-15 13:56:27.368069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:13.652 [2024-10-15 13:56:27.368081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.199 ms 00:22:13.652 [2024-10-15 13:56:27.368089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.652 [2024-10-15 13:56:27.368164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.652 [2024-10-15 13:56:27.368174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:13.652 [2024-10-15 13:56:27.368183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:22:13.652 [2024-10-15 13:56:27.368190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
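
Each management step in the startup sequence above is traced by mngt/ftl_mngt.c:trace_step as an Action / name / duration / status quad, and ftl_superblock_v5_md_layout_dump prints every metadata region as type / version / blk_offs / blk_sz. A property worth noticing in those dumps is that the regions tile their area without gaps: each blk_offs equals the previous blk_offs plus blk_sz, with the base-device dump legitimately restarting at 0x0. A small sketch that re-checks this from a saved copy of the console output (the ftl_startup.log file name and the parsing are assumptions; no output means the regions tile cleanly):

    #!/usr/bin/env bash
    # Verify that each dumped region starts where the previous one ended.
    next=0
    grep -o 'blk_offs:0x[0-9a-f]* blk_sz:0x[0-9a-f]*' ftl_startup.log |
    while read -r offs sz; do
        off=$(( ${offs#blk_offs:} ))   # bash evaluates 0x... literals as hex
        len=$(( ${sz#blk_sz:} ))
        # off == 0 is allowed: the base-dev dump restarts its offsets at 0x0.
        if (( off != next && off != 0 )); then
            printf 'gap/overlap at 0x%x (expected 0x%x)\n' "$off" "$next"
        fi
        next=$(( off + len ))
    done
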
00:22:13.652 [2024-10-15 13:56:27.369143] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 258.273 ms, result 0 00:22:15.030  [2024-10-15T13:56:29.796Z] Copying: 43/1024 [MB] (43 MBps) [2024-10-15T13:56:30.731Z] Copying: 92/1024 [MB] (48 MBps) [2024-10-15T13:56:31.665Z] Copying: 141/1024 [MB] (48 MBps) [2024-10-15T13:56:32.600Z] Copying: 190/1024 [MB] (49 MBps) [2024-10-15T13:56:33.974Z] Copying: 237/1024 [MB] (46 MBps) [2024-10-15T13:56:34.916Z] Copying: 285/1024 [MB] (48 MBps) [2024-10-15T13:56:35.850Z] Copying: 335/1024 [MB] (49 MBps) [2024-10-15T13:56:36.783Z] Copying: 383/1024 [MB] (47 MBps) [2024-10-15T13:56:37.717Z] Copying: 429/1024 [MB] (46 MBps) [2024-10-15T13:56:38.650Z] Copying: 477/1024 [MB] (47 MBps) [2024-10-15T13:56:39.584Z] Copying: 523/1024 [MB] (46 MBps) [2024-10-15T13:56:40.957Z] Copying: 572/1024 [MB] (48 MBps) [2024-10-15T13:56:41.890Z] Copying: 623/1024 [MB] (51 MBps) [2024-10-15T13:56:42.823Z] Copying: 675/1024 [MB] (51 MBps) [2024-10-15T13:56:43.758Z] Copying: 723/1024 [MB] (48 MBps) [2024-10-15T13:56:44.692Z] Copying: 772/1024 [MB] (48 MBps) [2024-10-15T13:56:45.627Z] Copying: 821/1024 [MB] (49 MBps) [2024-10-15T13:56:46.629Z] Copying: 868/1024 [MB] (46 MBps) [2024-10-15T13:56:47.563Z] Copying: 912/1024 [MB] (44 MBps) [2024-10-15T13:56:48.937Z] Copying: 950/1024 [MB] (38 MBps) [2024-10-15T13:56:49.871Z] Copying: 986/1024 [MB] (35 MBps) [2024-10-15T13:56:50.129Z] Copying: 1011/1024 [MB] (24 MBps) [2024-10-15T13:56:51.061Z] Copying: 1024/1024 [MB] (average 45 MBps)[2024-10-15 13:56:50.869100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:37.273 [2024-10-15 13:56:50.869166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:37.273 [2024-10-15 13:56:50.869181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:22:37.273 [2024-10-15 13:56:50.869191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:37.273 [2024-10-15 13:56:50.869212] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:37.273 [2024-10-15 13:56:50.871848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:37.273 [2024-10-15 13:56:50.871878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:37.273 [2024-10-15 13:56:50.871889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.591 ms 00:22:37.273 [2024-10-15 13:56:50.871898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:37.273 [2024-10-15 13:56:50.872128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:37.273 [2024-10-15 13:56:50.872144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:37.273 [2024-10-15 13:56:50.872154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.210 ms 00:22:37.273 [2024-10-15 13:56:50.872161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:37.273 [2024-10-15 13:56:50.876538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:37.273 [2024-10-15 13:56:50.876572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:37.273 [2024-10-15 13:56:50.876582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.362 ms 00:22:37.273 [2024-10-15 13:56:50.876590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:37.273 [2024-10-15 13:56:50.882872] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:37.273 [2024-10-15 13:56:50.882913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:37.273 [2024-10-15 13:56:50.882924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.197 ms 00:22:37.273 [2024-10-15 13:56:50.882932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:37.273 [2024-10-15 13:56:50.909635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:37.273 [2024-10-15 13:56:50.909682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:37.273 [2024-10-15 13:56:50.909695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.634 ms 00:22:37.273 [2024-10-15 13:56:50.909702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:37.273 [2024-10-15 13:56:50.924267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:37.273 [2024-10-15 13:56:50.924318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:37.273 [2024-10-15 13:56:50.924336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.522 ms 00:22:37.273 [2024-10-15 13:56:50.924346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:37.273 [2024-10-15 13:56:51.050897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:37.273 [2024-10-15 13:56:51.050974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:37.273 [2024-10-15 13:56:51.050986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 126.496 ms 00:22:37.273 [2024-10-15 13:56:51.050995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:37.532 [2024-10-15 13:56:51.075090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:37.532 [2024-10-15 13:56:51.075136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:22:37.532 [2024-10-15 13:56:51.075150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.079 ms 00:22:37.532 [2024-10-15 13:56:51.075158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:37.532 [2024-10-15 13:56:51.097976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:37.532 [2024-10-15 13:56:51.098019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:22:37.532 [2024-10-15 13:56:51.098041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.778 ms 00:22:37.532 [2024-10-15 13:56:51.098050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:37.532 [2024-10-15 13:56:51.120102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:37.532 [2024-10-15 13:56:51.120145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:37.532 [2024-10-15 13:56:51.120157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.014 ms 00:22:37.532 [2024-10-15 13:56:51.120164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:37.532 [2024-10-15 13:56:51.143161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:37.532 [2024-10-15 13:56:51.143210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:37.532 [2024-10-15 13:56:51.143229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.927 ms 00:22:37.532 [2024-10-15 13:56:51.143237] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:22:37.532 [2024-10-15 13:56:51.143275] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:37.532 [2024-10-15 13:56:51.143290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:22:37.532 [2024-10-15 13:56:51.143301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:37.532 [2024-10-15 13:56:51.143309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:37.532 [2024-10-15 13:56:51.143317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:37.532 [2024-10-15 13:56:51.143325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:37.532 [2024-10-15 13:56:51.143333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:37.532 [2024-10-15 13:56:51.143340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:37.532 [2024-10-15 13:56:51.143348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:37.532 [2024-10-15 13:56:51.143355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:37.532 [2024-10-15 13:56:51.143363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:37.532 [2024-10-15 13:56:51.143371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:37.532 [2024-10-15 13:56:51.143378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:37.532 [2024-10-15 13:56:51.143386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:37.532 [2024-10-15 13:56:51.143396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:37.532 [2024-10-15 13:56:51.143404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:37.532 [2024-10-15 13:56:51.143412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:37.532 [2024-10-15 13:56:51.143419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:37.532 [2024-10-15 13:56:51.143427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:37.532 [2024-10-15 13:56:51.143434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:37.532 [2024-10-15 13:56:51.143442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:37.532 [2024-10-15 13:56:51.143449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:37.532 [2024-10-15 13:56:51.143456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:37.532 [2024-10-15 13:56:51.143464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:37.532 [2024-10-15 13:56:51.143471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 
wr_cnt: 0 state: free 00:22:37.532 [2024-10-15 13:56:51.143479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:37.532 [2024-10-15 13:56:51.143487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:37.532 [2024-10-15 13:56:51.143494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:37.532 [2024-10-15 13:56:51.143502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:37.532 [2024-10-15 13:56:51.143509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:37.532 [2024-10-15 13:56:51.143518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:37.532 [2024-10-15 13:56:51.143526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:37.532 [2024-10-15 13:56:51.143533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:37.532 [2024-10-15 13:56:51.143541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:37.532 [2024-10-15 13:56:51.143549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:37.532 [2024-10-15 13:56:51.143557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:37.532 [2024-10-15 13:56:51.143564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:37.532 [2024-10-15 13:56:51.143572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:37.532 [2024-10-15 13:56:51.143579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:37.532 [2024-10-15 13:56:51.143586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:37.532 [2024-10-15 13:56:51.143594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:37.532 [2024-10-15 13:56:51.143602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:37.532 [2024-10-15 13:56:51.143609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:37.532 [2024-10-15 13:56:51.143617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:37.532 [2024-10-15 13:56:51.143624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:37.532 [2024-10-15 13:56:51.143632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:37.532 [2024-10-15 13:56:51.143639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:37.532 [2024-10-15 13:56:51.143647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:37.532 [2024-10-15 13:56:51.143654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:37.532 [2024-10-15 13:56:51.143662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:37.532 [2024-10-15 13:56:51.143669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:37.533 [2024-10-15 13:56:51.143676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:37.533 [2024-10-15 13:56:51.143684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:37.533 [2024-10-15 13:56:51.143691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:37.533 [2024-10-15 13:56:51.143700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:37.533 [2024-10-15 13:56:51.143708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:37.533 [2024-10-15 13:56:51.143715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:37.533 [2024-10-15 13:56:51.143725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:37.533 [2024-10-15 13:56:51.143733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:37.533 [2024-10-15 13:56:51.143740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:37.533 [2024-10-15 13:56:51.143747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:37.533 [2024-10-15 13:56:51.143754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:37.533 [2024-10-15 13:56:51.143763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:37.533 [2024-10-15 13:56:51.143771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:37.533 [2024-10-15 13:56:51.143778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:37.533 [2024-10-15 13:56:51.143786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:37.533 [2024-10-15 13:56:51.143793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:37.533 [2024-10-15 13:56:51.143801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:37.533 [2024-10-15 13:56:51.143808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:37.533 [2024-10-15 13:56:51.143815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:37.533 [2024-10-15 13:56:51.143822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:37.533 [2024-10-15 13:56:51.143829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:37.533 [2024-10-15 13:56:51.143837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:37.533 [2024-10-15 13:56:51.143845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:37.533 [2024-10-15 13:56:51.143852] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:37.533 [2024-10-15 13:56:51.143859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:37.533 [2024-10-15 13:56:51.143867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:37.533 [2024-10-15 13:56:51.143875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:37.533 [2024-10-15 13:56:51.143882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:37.533 [2024-10-15 13:56:51.143890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:37.533 [2024-10-15 13:56:51.143897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:37.533 [2024-10-15 13:56:51.143905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:37.533 [2024-10-15 13:56:51.143913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:37.533 [2024-10-15 13:56:51.143920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:37.533 [2024-10-15 13:56:51.143928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:37.533 [2024-10-15 13:56:51.143935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:37.533 [2024-10-15 13:56:51.143942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:37.533 [2024-10-15 13:56:51.143950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:37.533 [2024-10-15 13:56:51.143957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:37.533 [2024-10-15 13:56:51.143965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:37.533 [2024-10-15 13:56:51.143974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:37.533 [2024-10-15 13:56:51.143981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:37.533 [2024-10-15 13:56:51.143989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:37.533 [2024-10-15 13:56:51.143997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:37.533 [2024-10-15 13:56:51.144005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:37.533 [2024-10-15 13:56:51.144012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:37.533 [2024-10-15 13:56:51.144020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:37.533 [2024-10-15 13:56:51.144029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:37.533 [2024-10-15 13:56:51.144036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:37.533 [2024-10-15 13:56:51.144044] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:37.533 [2024-10-15 13:56:51.144052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:37.533 [2024-10-15 13:56:51.144075] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:37.533 [2024-10-15 13:56:51.144083] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: e8526d0e-2f90-448e-896b-b14df4f43d8b 00:22:37.533 [2024-10-15 13:56:51.144091] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:22:37.533 [2024-10-15 13:56:51.144099] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 12480 00:22:37.533 [2024-10-15 13:56:51.144111] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 11520 00:22:37.533 [2024-10-15 13:56:51.144119] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0833 00:22:37.533 [2024-10-15 13:56:51.144126] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:37.533 [2024-10-15 13:56:51.144134] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:37.533 [2024-10-15 13:56:51.144142] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:37.533 [2024-10-15 13:56:51.144155] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:37.533 [2024-10-15 13:56:51.144162] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:37.533 [2024-10-15 13:56:51.144169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:37.533 [2024-10-15 13:56:51.144180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:37.533 [2024-10-15 13:56:51.144189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.895 ms 00:22:37.533 [2024-10-15 13:56:51.144196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:37.533 [2024-10-15 13:56:51.157679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:37.533 [2024-10-15 13:56:51.157728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:37.533 [2024-10-15 13:56:51.157740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.464 ms 00:22:37.533 [2024-10-15 13:56:51.157749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:37.533 [2024-10-15 13:56:51.158146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:37.533 [2024-10-15 13:56:51.158165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:37.533 [2024-10-15 13:56:51.158175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.342 ms 00:22:37.533 [2024-10-15 13:56:51.158183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:37.533 [2024-10-15 13:56:51.190796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:37.533 [2024-10-15 13:56:51.190846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:37.533 [2024-10-15 13:56:51.190859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:37.533 [2024-10-15 13:56:51.190871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:37.533 [2024-10-15 13:56:51.190936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:37.533 [2024-10-15 13:56:51.190945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 
00:22:37.533 [2024-10-15 13:56:51.190953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:37.533 [2024-10-15 13:56:51.190960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:37.533 [2024-10-15 13:56:51.191038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:37.533 [2024-10-15 13:56:51.191050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:37.533 [2024-10-15 13:56:51.191059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:37.533 [2024-10-15 13:56:51.191066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:37.533 [2024-10-15 13:56:51.191084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:37.533 [2024-10-15 13:56:51.191093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:37.533 [2024-10-15 13:56:51.191101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:37.533 [2024-10-15 13:56:51.191109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:37.533 [2024-10-15 13:56:51.268886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:37.533 [2024-10-15 13:56:51.268937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:37.533 [2024-10-15 13:56:51.268949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:37.533 [2024-10-15 13:56:51.268963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:37.791 [2024-10-15 13:56:51.332094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:37.792 [2024-10-15 13:56:51.332144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:37.792 [2024-10-15 13:56:51.332155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:37.792 [2024-10-15 13:56:51.332163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:37.792 [2024-10-15 13:56:51.332211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:37.792 [2024-10-15 13:56:51.332354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:37.792 [2024-10-15 13:56:51.332369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:37.792 [2024-10-15 13:56:51.332377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:37.792 [2024-10-15 13:56:51.332427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:37.792 [2024-10-15 13:56:51.332442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:37.792 [2024-10-15 13:56:51.332451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:37.792 [2024-10-15 13:56:51.332458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:37.792 [2024-10-15 13:56:51.332546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:37.792 [2024-10-15 13:56:51.332556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:37.792 [2024-10-15 13:56:51.332564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:37.792 [2024-10-15 13:56:51.332572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:37.792 [2024-10-15 13:56:51.332602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:37.792 [2024-10-15 13:56:51.332614] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:37.792 [2024-10-15 13:56:51.332622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:37.792 [2024-10-15 13:56:51.332629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:37.792 [2024-10-15 13:56:51.332663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:37.792 [2024-10-15 13:56:51.332672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:37.792 [2024-10-15 13:56:51.332680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:37.792 [2024-10-15 13:56:51.332687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:37.792 [2024-10-15 13:56:51.332725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:37.792 [2024-10-15 13:56:51.332737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:37.792 [2024-10-15 13:56:51.332745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:37.792 [2024-10-15 13:56:51.332754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:37.792 [2024-10-15 13:56:51.332861] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 463.736 ms, result 0 00:22:38.359 00:22:38.359 00:22:38.359 13:56:52 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:22:40.891 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:22:40.891 13:56:54 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:22:40.891 13:56:54 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill 00:22:40.891 13:56:54 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:22:40.891 13:56:54 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:22:40.891 13:56:54 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:22:40.891 13:56:54 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 74408 00:22:40.891 13:56:54 ftl.ftl_restore -- common/autotest_common.sh@950 -- # '[' -z 74408 ']' 00:22:40.891 13:56:54 ftl.ftl_restore -- common/autotest_common.sh@954 -- # kill -0 74408 00:22:40.891 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (74408) - No such process 00:22:40.891 Process with pid 74408 is not found 00:22:40.891 Remove shared memory files 00:22:40.891 13:56:54 ftl.ftl_restore -- common/autotest_common.sh@977 -- # echo 'Process with pid 74408 is not found' 00:22:40.891 13:56:54 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm 00:22:40.891 13:56:54 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files 00:22:40.891 13:56:54 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f 00:22:40.891 13:56:54 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f 00:22:40.891 13:56:54 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 00:22:40.891 13:56:54 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:22:40.891 13:56:54 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f 00:22:40.891 00:22:40.891 real 3m34.436s 00:22:40.891 user 3m23.341s 00:22:40.891 sys 0m11.561s 00:22:40.891 13:56:54 ftl.ftl_restore -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:40.891 13:56:54 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 
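(The line "/home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK" above is the actual pass condition of the restore test: a checksum recorded before the FTL device was shut down must still match the data read back after restore. A minimal sketch of that record-then-verify pattern — an illustration of what restore.sh@82 exercises, not the script's actual source:

    # before shutdown: record a checksum of the written test data
    md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile > /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
    # ... tear the FTL device down and bring it back up ...
    # after restore: re-read and compare; a mismatch makes md5sum -c exit non-zero and fail the test
    md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
)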
00:22:40.891 ************************************ 00:22:40.891 END TEST ftl_restore 00:22:40.891 ************************************ 00:22:40.891 13:56:54 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:22:40.891 13:56:54 ftl -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:22:40.891 13:56:54 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:40.891 13:56:54 ftl -- common/autotest_common.sh@10 -- # set +x 00:22:40.891 ************************************ 00:22:40.891 START TEST ftl_dirty_shutdown 00:22:40.891 ************************************ 00:22:40.891 13:56:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:22:40.891 * Looking for test storage... 00:22:40.891 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:22:40.891 13:56:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:22:40.891 13:56:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1691 -- # lcov --version 00:22:40.891 13:56:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:22:40.891 13:56:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:22:40.891 13:56:54 ftl.ftl_dirty_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:40.891 13:56:54 ftl.ftl_dirty_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:40.891 13:56:54 ftl.ftl_dirty_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:40.891 13:56:54 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:22:40.891 13:56:54 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:22:40.891 13:56:54 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:22:40.891 13:56:54 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:22:40.891 13:56:54 ftl.ftl_dirty_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:22:40.891 13:56:54 ftl.ftl_dirty_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:22:40.891 13:56:54 ftl.ftl_dirty_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:22:40.891 13:56:54 ftl.ftl_dirty_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:40.891 13:56:54 ftl.ftl_dirty_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:22:40.891 13:56:54 ftl.ftl_dirty_shutdown -- scripts/common.sh@345 -- # : 1 00:22:40.891 13:56:54 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:40.891 13:56:54 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:40.891 13:56:54 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # decimal 1 00:22:40.891 13:56:54 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=1 00:22:40.891 13:56:54 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:40.891 13:56:54 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 1 00:22:40.891 13:56:54 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:22:40.891 13:56:54 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # decimal 2 00:22:40.891 13:56:54 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=2 00:22:40.891 13:56:54 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:40.891 13:56:54 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 2 00:22:40.891 13:56:54 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:22:40.891 13:56:54 ftl.ftl_dirty_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:40.891 13:56:54 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:40.891 13:56:54 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # return 0 00:22:40.891 13:56:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:40.891 13:56:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:22:40.891 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:40.891 --rc genhtml_branch_coverage=1 00:22:40.891 --rc genhtml_function_coverage=1 00:22:40.891 --rc genhtml_legend=1 00:22:40.891 --rc geninfo_all_blocks=1 00:22:40.891 --rc geninfo_unexecuted_blocks=1 00:22:40.891 00:22:40.891 ' 00:22:40.891 13:56:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:22:40.891 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:40.891 --rc genhtml_branch_coverage=1 00:22:40.891 --rc genhtml_function_coverage=1 00:22:40.891 --rc genhtml_legend=1 00:22:40.891 --rc geninfo_all_blocks=1 00:22:40.891 --rc geninfo_unexecuted_blocks=1 00:22:40.891 00:22:40.891 ' 00:22:40.891 13:56:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:22:40.891 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:40.891 --rc genhtml_branch_coverage=1 00:22:40.891 --rc genhtml_function_coverage=1 00:22:40.891 --rc genhtml_legend=1 00:22:40.891 --rc geninfo_all_blocks=1 00:22:40.891 --rc geninfo_unexecuted_blocks=1 00:22:40.891 00:22:40.891 ' 00:22:40.891 13:56:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:22:40.891 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:40.891 --rc genhtml_branch_coverage=1 00:22:40.891 --rc genhtml_function_coverage=1 00:22:40.891 --rc genhtml_legend=1 00:22:40.891 --rc geninfo_all_blocks=1 00:22:40.891 --rc geninfo_unexecuted_blocks=1 00:22:40.891 00:22:40.891 ' 00:22:40.891 13:56:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:22:40.891 13:56:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:22:40.891 13:56:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:22:40.891 13:56:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:22:40.891 13:56:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f 
/home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:22:40.891 13:56:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:22:40.891 13:56:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:40.891 13:56:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:22:40.891 13:56:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:22:40.891 13:56:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:40.891 13:56:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:40.891 13:56:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:22:40.891 13:56:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:22:40.891 13:56:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:22:40.891 13:56:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:22:40.891 13:56:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:22:40.891 13:56:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:22:40.891 13:56:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:40.891 13:56:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:40.891 13:56:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:22:40.891 13:56:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:22:40.891 13:56:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:22:40.891 13:56:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:22:40.891 13:56:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:22:40.891 13:56:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:22:40.892 13:56:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:22:40.892 13:56:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:22:40.892 13:56:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:40.892 13:56:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:40.892 13:56:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:40.892 13:56:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:40.892 13:56:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:22:40.892 13:56:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:22:40.892 13:56:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:22:40.892 13:56:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:22:40.892 13:56:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:22:40.892 13:56:54 
ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:22:40.892 13:56:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:22:40.892 13:56:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:22:40.892 13:56:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:22:40.892 13:56:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:22:40.892 13:56:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:22:40.892 13:56:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=76706 00:22:40.892 13:56:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 76706 00:22:40.892 13:56:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@831 -- # '[' -z 76706 ']' 00:22:40.892 13:56:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:40.892 13:56:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:40.892 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:40.892 13:56:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:22:40.892 13:56:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:40.892 13:56:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:40.892 13:56:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:40.892 [2024-10-15 13:56:54.566738] Starting SPDK v25.01-pre git sha1 5a8c76d99 / DPDK 24.03.0 initialization... 
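(The xtrace above also shows how dirty_shutdown.sh splits its arguments: getopts consumes the optional '-c <bdf>' into nv_cache, 'shift 2' then drops those two words, and the first remaining positional becomes the base device. A minimal paraphrase of that pattern under the traced invocation — illustrative only, not the script's actual source:

    # invoked as: dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0
    while getopts ":u:c:" opt; do
      case $opt in
        c) nv_cache=$OPTARG ;;   # NV-cache BDF, 0000:00:10.0 in this run
      esac
    done
    shift 2                      # drop '-c 0000:00:10.0' (the trace shows a literal 'shift 2')
    device=$1                    # base-device BDF, 0000:00:11.0 in this run
)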
00:22:40.892 [2024-10-15 13:56:54.566835] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76706 ] 00:22:41.150 [2024-10-15 13:56:54.711570] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:41.150 [2024-10-15 13:56:54.809802] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:41.717 13:56:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:41.717 13:56:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@864 -- # return 0 00:22:41.717 13:56:55 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:22:41.717 13:56:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:22:41.717 13:56:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:22:41.717 13:56:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:22:41.717 13:56:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:22:41.717 13:56:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:22:41.975 13:56:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:22:41.975 13:56:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:22:41.975 13:56:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:22:41.975 13:56:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:22:41.975 13:56:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:22:41.975 13:56:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:22:41.975 13:56:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:22:41.975 13:56:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:22:42.233 13:56:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:22:42.233 { 00:22:42.233 "name": "nvme0n1", 00:22:42.233 "aliases": [ 00:22:42.233 "237b255e-04d7-453f-9f8c-0a9c927e0c43" 00:22:42.233 ], 00:22:42.233 "product_name": "NVMe disk", 00:22:42.233 "block_size": 4096, 00:22:42.233 "num_blocks": 1310720, 00:22:42.233 "uuid": "237b255e-04d7-453f-9f8c-0a9c927e0c43", 00:22:42.233 "numa_id": -1, 00:22:42.233 "assigned_rate_limits": { 00:22:42.233 "rw_ios_per_sec": 0, 00:22:42.233 "rw_mbytes_per_sec": 0, 00:22:42.233 "r_mbytes_per_sec": 0, 00:22:42.233 "w_mbytes_per_sec": 0 00:22:42.233 }, 00:22:42.233 "claimed": true, 00:22:42.233 "claim_type": "read_many_write_one", 00:22:42.233 "zoned": false, 00:22:42.233 "supported_io_types": { 00:22:42.233 "read": true, 00:22:42.233 "write": true, 00:22:42.233 "unmap": true, 00:22:42.233 "flush": true, 00:22:42.233 "reset": true, 00:22:42.233 "nvme_admin": true, 00:22:42.233 "nvme_io": true, 00:22:42.233 "nvme_io_md": false, 00:22:42.233 "write_zeroes": true, 00:22:42.233 "zcopy": false, 00:22:42.233 "get_zone_info": false, 00:22:42.233 "zone_management": false, 00:22:42.233 "zone_append": false, 00:22:42.233 "compare": true, 00:22:42.233 "compare_and_write": false, 00:22:42.233 "abort": true, 00:22:42.233 "seek_hole": false, 00:22:42.233 "seek_data": false, 00:22:42.233 
"copy": true, 00:22:42.233 "nvme_iov_md": false 00:22:42.233 }, 00:22:42.233 "driver_specific": { 00:22:42.233 "nvme": [ 00:22:42.233 { 00:22:42.233 "pci_address": "0000:00:11.0", 00:22:42.233 "trid": { 00:22:42.233 "trtype": "PCIe", 00:22:42.233 "traddr": "0000:00:11.0" 00:22:42.233 }, 00:22:42.233 "ctrlr_data": { 00:22:42.233 "cntlid": 0, 00:22:42.233 "vendor_id": "0x1b36", 00:22:42.233 "model_number": "QEMU NVMe Ctrl", 00:22:42.233 "serial_number": "12341", 00:22:42.233 "firmware_revision": "8.0.0", 00:22:42.233 "subnqn": "nqn.2019-08.org.qemu:12341", 00:22:42.233 "oacs": { 00:22:42.233 "security": 0, 00:22:42.233 "format": 1, 00:22:42.233 "firmware": 0, 00:22:42.233 "ns_manage": 1 00:22:42.233 }, 00:22:42.233 "multi_ctrlr": false, 00:22:42.233 "ana_reporting": false 00:22:42.233 }, 00:22:42.233 "vs": { 00:22:42.233 "nvme_version": "1.4" 00:22:42.233 }, 00:22:42.233 "ns_data": { 00:22:42.233 "id": 1, 00:22:42.233 "can_share": false 00:22:42.233 } 00:22:42.233 } 00:22:42.233 ], 00:22:42.233 "mp_policy": "active_passive" 00:22:42.233 } 00:22:42.233 } 00:22:42.233 ]' 00:22:42.233 13:56:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:22:42.233 13:56:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:22:42.233 13:56:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:22:42.233 13:56:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=1310720 00:22:42.233 13:56:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:22:42.233 13:56:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 5120 00:22:42.233 13:56:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:22:42.233 13:56:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:22:42.233 13:56:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:22:42.233 13:56:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:22:42.233 13:56:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:22:42.492 13:56:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=64cd6279-e6ee-48ed-a131-59553f91ad33 00:22:42.492 13:56:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:22:42.492 13:56:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 64cd6279-e6ee-48ed-a131-59553f91ad33 00:22:42.750 13:56:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:22:43.009 13:56:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=5c4a237e-3880-49ee-b038-5526a1d2f272 00:22:43.009 13:56:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 5c4a237e-3880-49ee-b038-5526a1d2f272 00:22:43.009 13:56:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=dce9b145-0270-4c88-9122-4e3c847ddc7a 00:22:43.009 13:56:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:22:43.009 13:56:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 dce9b145-0270-4c88-9122-4e3c847ddc7a 00:22:43.009 13:56:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:22:43.009 13:56:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local 
cache_bdf=0000:00:10.0 00:22:43.009 13:56:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=dce9b145-0270-4c88-9122-4e3c847ddc7a 00:22:43.009 13:56:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:22:43.009 13:56:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size dce9b145-0270-4c88-9122-4e3c847ddc7a 00:22:43.009 13:56:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=dce9b145-0270-4c88-9122-4e3c847ddc7a 00:22:43.009 13:56:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:22:43.009 13:56:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:22:43.009 13:56:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:22:43.009 13:56:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b dce9b145-0270-4c88-9122-4e3c847ddc7a 00:22:43.268 13:56:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:22:43.268 { 00:22:43.268 "name": "dce9b145-0270-4c88-9122-4e3c847ddc7a", 00:22:43.268 "aliases": [ 00:22:43.268 "lvs/nvme0n1p0" 00:22:43.268 ], 00:22:43.268 "product_name": "Logical Volume", 00:22:43.268 "block_size": 4096, 00:22:43.268 "num_blocks": 26476544, 00:22:43.268 "uuid": "dce9b145-0270-4c88-9122-4e3c847ddc7a", 00:22:43.268 "assigned_rate_limits": { 00:22:43.268 "rw_ios_per_sec": 0, 00:22:43.268 "rw_mbytes_per_sec": 0, 00:22:43.268 "r_mbytes_per_sec": 0, 00:22:43.268 "w_mbytes_per_sec": 0 00:22:43.268 }, 00:22:43.268 "claimed": false, 00:22:43.268 "zoned": false, 00:22:43.268 "supported_io_types": { 00:22:43.268 "read": true, 00:22:43.268 "write": true, 00:22:43.268 "unmap": true, 00:22:43.268 "flush": false, 00:22:43.268 "reset": true, 00:22:43.268 "nvme_admin": false, 00:22:43.268 "nvme_io": false, 00:22:43.268 "nvme_io_md": false, 00:22:43.268 "write_zeroes": true, 00:22:43.268 "zcopy": false, 00:22:43.268 "get_zone_info": false, 00:22:43.268 "zone_management": false, 00:22:43.268 "zone_append": false, 00:22:43.268 "compare": false, 00:22:43.268 "compare_and_write": false, 00:22:43.268 "abort": false, 00:22:43.268 "seek_hole": true, 00:22:43.268 "seek_data": true, 00:22:43.268 "copy": false, 00:22:43.268 "nvme_iov_md": false 00:22:43.268 }, 00:22:43.268 "driver_specific": { 00:22:43.268 "lvol": { 00:22:43.268 "lvol_store_uuid": "5c4a237e-3880-49ee-b038-5526a1d2f272", 00:22:43.268 "base_bdev": "nvme0n1", 00:22:43.268 "thin_provision": true, 00:22:43.268 "num_allocated_clusters": 0, 00:22:43.268 "snapshot": false, 00:22:43.268 "clone": false, 00:22:43.268 "esnap_clone": false 00:22:43.268 } 00:22:43.268 } 00:22:43.268 } 00:22:43.268 ]' 00:22:43.268 13:56:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:22:43.268 13:56:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:22:43.268 13:56:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:22:43.268 13:56:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=26476544 00:22:43.268 13:56:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:22:43.268 13:56:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 103424 00:22:43.268 13:56:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:22:43.268 13:56:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:22:43.268 13:56:57 ftl.ftl_dirty_shutdown -- 
ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:22:43.835 13:56:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:22:43.835 13:56:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:22:43.835 13:56:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size dce9b145-0270-4c88-9122-4e3c847ddc7a 00:22:43.835 13:56:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=dce9b145-0270-4c88-9122-4e3c847ddc7a 00:22:43.835 13:56:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:22:43.835 13:56:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:22:43.835 13:56:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:22:43.835 13:56:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b dce9b145-0270-4c88-9122-4e3c847ddc7a 00:22:43.835 13:56:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:22:43.835 { 00:22:43.835 "name": "dce9b145-0270-4c88-9122-4e3c847ddc7a", 00:22:43.835 "aliases": [ 00:22:43.835 "lvs/nvme0n1p0" 00:22:43.835 ], 00:22:43.835 "product_name": "Logical Volume", 00:22:43.835 "block_size": 4096, 00:22:43.835 "num_blocks": 26476544, 00:22:43.835 "uuid": "dce9b145-0270-4c88-9122-4e3c847ddc7a", 00:22:43.835 "assigned_rate_limits": { 00:22:43.835 "rw_ios_per_sec": 0, 00:22:43.835 "rw_mbytes_per_sec": 0, 00:22:43.835 "r_mbytes_per_sec": 0, 00:22:43.835 "w_mbytes_per_sec": 0 00:22:43.835 }, 00:22:43.835 "claimed": false, 00:22:43.835 "zoned": false, 00:22:43.835 "supported_io_types": { 00:22:43.835 "read": true, 00:22:43.835 "write": true, 00:22:43.835 "unmap": true, 00:22:43.835 "flush": false, 00:22:43.835 "reset": true, 00:22:43.835 "nvme_admin": false, 00:22:43.835 "nvme_io": false, 00:22:43.835 "nvme_io_md": false, 00:22:43.835 "write_zeroes": true, 00:22:43.835 "zcopy": false, 00:22:43.835 "get_zone_info": false, 00:22:43.835 "zone_management": false, 00:22:43.835 "zone_append": false, 00:22:43.835 "compare": false, 00:22:43.835 "compare_and_write": false, 00:22:43.835 "abort": false, 00:22:43.835 "seek_hole": true, 00:22:43.835 "seek_data": true, 00:22:43.835 "copy": false, 00:22:43.835 "nvme_iov_md": false 00:22:43.835 }, 00:22:43.835 "driver_specific": { 00:22:43.835 "lvol": { 00:22:43.835 "lvol_store_uuid": "5c4a237e-3880-49ee-b038-5526a1d2f272", 00:22:43.835 "base_bdev": "nvme0n1", 00:22:43.835 "thin_provision": true, 00:22:43.835 "num_allocated_clusters": 0, 00:22:43.835 "snapshot": false, 00:22:43.835 "clone": false, 00:22:43.835 "esnap_clone": false 00:22:43.835 } 00:22:43.835 } 00:22:43.835 } 00:22:43.835 ]' 00:22:43.835 13:56:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:22:43.835 13:56:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:22:43.835 13:56:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:22:43.835 13:56:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=26476544 00:22:43.835 13:56:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:22:43.835 13:56:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 103424 00:22:43.835 13:56:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:22:43.835 13:56:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:22:44.094 13:56:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:22:44.094 13:56:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size dce9b145-0270-4c88-9122-4e3c847ddc7a 00:22:44.094 13:56:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=dce9b145-0270-4c88-9122-4e3c847ddc7a 00:22:44.094 13:56:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:22:44.094 13:56:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:22:44.094 13:56:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:22:44.094 13:56:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b dce9b145-0270-4c88-9122-4e3c847ddc7a 00:22:44.371 13:56:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:22:44.371 { 00:22:44.371 "name": "dce9b145-0270-4c88-9122-4e3c847ddc7a", 00:22:44.371 "aliases": [ 00:22:44.371 "lvs/nvme0n1p0" 00:22:44.371 ], 00:22:44.371 "product_name": "Logical Volume", 00:22:44.371 "block_size": 4096, 00:22:44.371 "num_blocks": 26476544, 00:22:44.371 "uuid": "dce9b145-0270-4c88-9122-4e3c847ddc7a", 00:22:44.371 "assigned_rate_limits": { 00:22:44.371 "rw_ios_per_sec": 0, 00:22:44.371 "rw_mbytes_per_sec": 0, 00:22:44.371 "r_mbytes_per_sec": 0, 00:22:44.371 "w_mbytes_per_sec": 0 00:22:44.371 }, 00:22:44.371 "claimed": false, 00:22:44.371 "zoned": false, 00:22:44.371 "supported_io_types": { 00:22:44.371 "read": true, 00:22:44.371 "write": true, 00:22:44.371 "unmap": true, 00:22:44.371 "flush": false, 00:22:44.371 "reset": true, 00:22:44.371 "nvme_admin": false, 00:22:44.371 "nvme_io": false, 00:22:44.371 "nvme_io_md": false, 00:22:44.371 "write_zeroes": true, 00:22:44.371 "zcopy": false, 00:22:44.371 "get_zone_info": false, 00:22:44.371 "zone_management": false, 00:22:44.371 "zone_append": false, 00:22:44.371 "compare": false, 00:22:44.371 "compare_and_write": false, 00:22:44.371 "abort": false, 00:22:44.371 "seek_hole": true, 00:22:44.371 "seek_data": true, 00:22:44.371 "copy": false, 00:22:44.371 "nvme_iov_md": false 00:22:44.371 }, 00:22:44.371 "driver_specific": { 00:22:44.372 "lvol": { 00:22:44.372 "lvol_store_uuid": "5c4a237e-3880-49ee-b038-5526a1d2f272", 00:22:44.372 "base_bdev": "nvme0n1", 00:22:44.372 "thin_provision": true, 00:22:44.372 "num_allocated_clusters": 0, 00:22:44.372 "snapshot": false, 00:22:44.372 "clone": false, 00:22:44.372 "esnap_clone": false 00:22:44.372 } 00:22:44.372 } 00:22:44.372 } 00:22:44.372 ]' 00:22:44.372 13:56:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:22:44.372 13:56:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:22:44.372 13:56:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:22:44.372 13:56:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=26476544 00:22:44.372 13:56:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:22:44.372 13:56:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 103424 00:22:44.372 13:56:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:22:44.372 13:56:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d dce9b145-0270-4c88-9122-4e3c847ddc7a 
--l2p_dram_limit 10' 00:22:44.372 13:56:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:22:44.372 13:56:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:22:44.372 13:56:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:22:44.372 13:56:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d dce9b145-0270-4c88-9122-4e3c847ddc7a --l2p_dram_limit 10 -c nvc0n1p0 00:22:44.640 [2024-10-15 13:56:58.219204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:44.640 [2024-10-15 13:56:58.219260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:44.640 [2024-10-15 13:56:58.219274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:22:44.640 [2024-10-15 13:56:58.219280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.640 [2024-10-15 13:56:58.219332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:44.640 [2024-10-15 13:56:58.219341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:44.640 [2024-10-15 13:56:58.219349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:22:44.640 [2024-10-15 13:56:58.219355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.640 [2024-10-15 13:56:58.219375] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:44.640 [2024-10-15 13:56:58.219998] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:44.640 [2024-10-15 13:56:58.220015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:44.640 [2024-10-15 13:56:58.220021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:44.640 [2024-10-15 13:56:58.220029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.644 ms 00:22:44.640 [2024-10-15 13:56:58.220035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.640 [2024-10-15 13:56:58.220062] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 53c34892-29d3-4e41-9c0d-fabe059a629d 00:22:44.640 [2024-10-15 13:56:58.221298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:44.640 [2024-10-15 13:56:58.221389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:22:44.640 [2024-10-15 13:56:58.221430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:22:44.640 [2024-10-15 13:56:58.221451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.640 [2024-10-15 13:56:58.226770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:44.640 [2024-10-15 13:56:58.226877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:44.640 [2024-10-15 13:56:58.226920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.248 ms 00:22:44.640 [2024-10-15 13:56:58.226939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.640 [2024-10-15 13:56:58.227020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:44.640 [2024-10-15 13:56:58.227342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:44.640 [2024-10-15 13:56:58.227379] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:22:44.640 [2024-10-15 13:56:58.227401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.640 [2024-10-15 13:56:58.227476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:44.640 [2024-10-15 13:56:58.227501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:44.640 [2024-10-15 13:56:58.227517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:22:44.640 [2024-10-15 13:56:58.227533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.640 [2024-10-15 13:56:58.227599] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:44.640 [2024-10-15 13:56:58.230568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:44.640 [2024-10-15 13:56:58.230656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:44.640 [2024-10-15 13:56:58.230700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.975 ms 00:22:44.640 [2024-10-15 13:56:58.230719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.640 [2024-10-15 13:56:58.230756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:44.640 [2024-10-15 13:56:58.230801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:44.640 [2024-10-15 13:56:58.230820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:22:44.640 [2024-10-15 13:56:58.230835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.640 [2024-10-15 13:56:58.230877] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:22:44.640 [2024-10-15 13:56:58.231004] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:44.640 [2024-10-15 13:56:58.231064] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:44.640 [2024-10-15 13:56:58.231074] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:22:44.640 [2024-10-15 13:56:58.231083] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:44.640 [2024-10-15 13:56:58.231090] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:44.640 [2024-10-15 13:56:58.231098] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:22:44.640 [2024-10-15 13:56:58.231103] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:44.640 [2024-10-15 13:56:58.231111] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:44.640 [2024-10-15 13:56:58.231116] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:44.640 [2024-10-15 13:56:58.231124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:44.640 [2024-10-15 13:56:58.231132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:44.640 [2024-10-15 13:56:58.231139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.248 ms 00:22:44.640 [2024-10-15 13:56:58.231150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.640 [2024-10-15 13:56:58.231236] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:44.640 [2024-10-15 13:56:58.231243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:44.640 [2024-10-15 13:56:58.231251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:22:44.640 [2024-10-15 13:56:58.231256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.640 [2024-10-15 13:56:58.231346] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:44.640 [2024-10-15 13:56:58.231355] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:44.640 [2024-10-15 13:56:58.231364] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:44.640 [2024-10-15 13:56:58.231369] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:44.640 [2024-10-15 13:56:58.231377] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:44.640 [2024-10-15 13:56:58.231382] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:44.640 [2024-10-15 13:56:58.231389] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:22:44.640 [2024-10-15 13:56:58.231394] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:44.640 [2024-10-15 13:56:58.231400] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:22:44.640 [2024-10-15 13:56:58.231405] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:44.640 [2024-10-15 13:56:58.231412] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:44.640 [2024-10-15 13:56:58.231417] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:22:44.640 [2024-10-15 13:56:58.231423] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:44.640 [2024-10-15 13:56:58.231428] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:44.640 [2024-10-15 13:56:58.231434] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:22:44.640 [2024-10-15 13:56:58.231439] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:44.640 [2024-10-15 13:56:58.231447] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:44.640 [2024-10-15 13:56:58.231452] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:22:44.640 [2024-10-15 13:56:58.231459] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:44.640 [2024-10-15 13:56:58.231464] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:44.640 [2024-10-15 13:56:58.231471] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:22:44.640 [2024-10-15 13:56:58.231477] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:44.640 [2024-10-15 13:56:58.231483] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:44.640 [2024-10-15 13:56:58.231488] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:22:44.640 [2024-10-15 13:56:58.231495] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:44.640 [2024-10-15 13:56:58.231500] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:44.640 [2024-10-15 13:56:58.231506] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:22:44.640 [2024-10-15 13:56:58.231511] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:44.640 [2024-10-15 13:56:58.231517] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:44.640 [2024-10-15 13:56:58.231523] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:22:44.640 [2024-10-15 13:56:58.231529] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:44.640 [2024-10-15 13:56:58.231534] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:44.640 [2024-10-15 13:56:58.231541] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:22:44.640 [2024-10-15 13:56:58.231546] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:44.640 [2024-10-15 13:56:58.231552] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:44.640 [2024-10-15 13:56:58.231557] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:22:44.640 [2024-10-15 13:56:58.231563] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:44.640 [2024-10-15 13:56:58.231568] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:44.640 [2024-10-15 13:56:58.231574] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:22:44.640 [2024-10-15 13:56:58.231579] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:44.640 [2024-10-15 13:56:58.231585] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:44.640 [2024-10-15 13:56:58.231590] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:22:44.640 [2024-10-15 13:56:58.231596] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:44.641 [2024-10-15 13:56:58.231601] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:44.641 [2024-10-15 13:56:58.231608] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:44.641 [2024-10-15 13:56:58.231614] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:44.641 [2024-10-15 13:56:58.231621] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:44.641 [2024-10-15 13:56:58.231627] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:44.641 [2024-10-15 13:56:58.231634] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:44.641 [2024-10-15 13:56:58.231639] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:44.641 [2024-10-15 13:56:58.231646] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:44.641 [2024-10-15 13:56:58.231651] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:44.641 [2024-10-15 13:56:58.231658] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:44.641 [2024-10-15 13:56:58.231666] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:44.641 [2024-10-15 13:56:58.231675] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:44.641 [2024-10-15 13:56:58.231682] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:22:44.641 [2024-10-15 13:56:58.231689] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:22:44.641 [2024-10-15 13:56:58.231695] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:22:44.641 [2024-10-15 13:56:58.231702] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:22:44.641 [2024-10-15 13:56:58.231707] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:22:44.641 [2024-10-15 13:56:58.231714] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:22:44.641 [2024-10-15 13:56:58.231720] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:22:44.641 [2024-10-15 13:56:58.231726] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:22:44.641 [2024-10-15 13:56:58.231731] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:22:44.641 [2024-10-15 13:56:58.231740] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:22:44.641 [2024-10-15 13:56:58.231745] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:22:44.641 [2024-10-15 13:56:58.231752] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:22:44.641 [2024-10-15 13:56:58.231758] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:22:44.641 [2024-10-15 13:56:58.231764] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:22:44.641 [2024-10-15 13:56:58.231770] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:44.641 [2024-10-15 13:56:58.231778] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:44.641 [2024-10-15 13:56:58.231785] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:44.641 [2024-10-15 13:56:58.231792] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:44.641 [2024-10-15 13:56:58.231798] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:44.641 [2024-10-15 13:56:58.231804] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:44.641 [2024-10-15 13:56:58.231810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:44.641 [2024-10-15 13:56:58.231817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:44.641 [2024-10-15 13:56:58.231822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.518 ms 00:22:44.641 [2024-10-15 13:56:58.231828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.641 [2024-10-15 13:56:58.231858] mngt/ftl_mngt_misc.c: 
165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:22:44.641 [2024-10-15 13:56:58.231867] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:22:47.924 [2024-10-15 13:57:01.022365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.924 [2024-10-15 13:57:01.022430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:22:47.924 [2024-10-15 13:57:01.022446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2790.495 ms 00:22:47.924 [2024-10-15 13:57:01.022456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.924 [2024-10-15 13:57:01.047969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.924 [2024-10-15 13:57:01.048026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:47.924 [2024-10-15 13:57:01.048039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.317 ms 00:22:47.924 [2024-10-15 13:57:01.048050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.924 [2024-10-15 13:57:01.048210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.924 [2024-10-15 13:57:01.048245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:47.924 [2024-10-15 13:57:01.048255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:22:47.924 [2024-10-15 13:57:01.048267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.924 [2024-10-15 13:57:01.078917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.924 [2024-10-15 13:57:01.079124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:47.924 [2024-10-15 13:57:01.079141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.614 ms 00:22:47.924 [2024-10-15 13:57:01.079151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.924 [2024-10-15 13:57:01.079191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.924 [2024-10-15 13:57:01.079204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:47.924 [2024-10-15 13:57:01.079212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:22:47.924 [2024-10-15 13:57:01.079240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.924 [2024-10-15 13:57:01.079617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.924 [2024-10-15 13:57:01.079634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:47.924 [2024-10-15 13:57:01.079643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.308 ms 00:22:47.924 [2024-10-15 13:57:01.079652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.924 [2024-10-15 13:57:01.079771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.924 [2024-10-15 13:57:01.079781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:47.924 [2024-10-15 13:57:01.079789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms 00:22:47.924 [2024-10-15 13:57:01.079799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.924 [2024-10-15 13:57:01.093705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.924 [2024-10-15 13:57:01.093869] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:47.924 [2024-10-15 13:57:01.093884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.884 ms 00:22:47.924 [2024-10-15 13:57:01.093896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.924 [2024-10-15 13:57:01.105164] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:22:47.924 [2024-10-15 13:57:01.107988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.924 [2024-10-15 13:57:01.108017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:47.924 [2024-10-15 13:57:01.108031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.004 ms 00:22:47.924 [2024-10-15 13:57:01.108039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.924 [2024-10-15 13:57:01.186204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.924 [2024-10-15 13:57:01.186443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:22:47.924 [2024-10-15 13:57:01.186469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 78.129 ms 00:22:47.924 [2024-10-15 13:57:01.186479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.924 [2024-10-15 13:57:01.186665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.924 [2024-10-15 13:57:01.186677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:47.924 [2024-10-15 13:57:01.186690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.142 ms 00:22:47.924 [2024-10-15 13:57:01.186700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.924 [2024-10-15 13:57:01.210214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.924 [2024-10-15 13:57:01.210271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:22:47.924 [2024-10-15 13:57:01.210286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.448 ms 00:22:47.925 [2024-10-15 13:57:01.210295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.925 [2024-10-15 13:57:01.232607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.925 [2024-10-15 13:57:01.232644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:22:47.925 [2024-10-15 13:57:01.232658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.263 ms 00:22:47.925 [2024-10-15 13:57:01.232665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.925 [2024-10-15 13:57:01.233253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.925 [2024-10-15 13:57:01.233269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:47.925 [2024-10-15 13:57:01.233280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.551 ms 00:22:47.925 [2024-10-15 13:57:01.233287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.925 [2024-10-15 13:57:01.301674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.925 [2024-10-15 13:57:01.301870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:22:47.925 [2024-10-15 13:57:01.301895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 68.348 ms 00:22:47.925 [2024-10-15 13:57:01.301904] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.925 [2024-10-15 13:57:01.326151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.925 [2024-10-15 13:57:01.326192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:22:47.925 [2024-10-15 13:57:01.326209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.180 ms 00:22:47.925 [2024-10-15 13:57:01.326217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.925 [2024-10-15 13:57:01.349705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.925 [2024-10-15 13:57:01.349743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:22:47.925 [2024-10-15 13:57:01.349757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.447 ms 00:22:47.925 [2024-10-15 13:57:01.349765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.925 [2024-10-15 13:57:01.372706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.925 [2024-10-15 13:57:01.372863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:47.925 [2024-10-15 13:57:01.372883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.916 ms 00:22:47.925 [2024-10-15 13:57:01.372890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.925 [2024-10-15 13:57:01.372922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.925 [2024-10-15 13:57:01.372930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:47.925 [2024-10-15 13:57:01.372942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:47.925 [2024-10-15 13:57:01.372950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.925 [2024-10-15 13:57:01.373027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.925 [2024-10-15 13:57:01.373036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:47.925 [2024-10-15 13:57:01.373047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:22:47.925 [2024-10-15 13:57:01.373054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.925 [2024-10-15 13:57:01.373899] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3154.295 ms, result 0 00:22:47.925 { 00:22:47.925 "name": "ftl0", 00:22:47.925 "uuid": "53c34892-29d3-4e41-9c0d-fabe059a629d" 00:22:47.925 } 00:22:47.925 13:57:01 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:22:47.925 13:57:01 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:22:47.925 13:57:01 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:22:47.925 13:57:01 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:22:47.925 13:57:01 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:22:48.184 /dev/nbd0 00:22:48.184 13:57:01 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:22:48.184 13:57:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:22:48.184 13:57:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@869 -- # local i 00:22:48.184 13:57:01 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@871 -- # (( i = 1 )) 00:22:48.184 13:57:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:22:48.184 13:57:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:22:48.184 13:57:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@873 -- # break 00:22:48.184 13:57:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:22:48.184 13:57:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:22:48.184 13:57:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:22:48.184 1+0 records in 00:22:48.184 1+0 records out 00:22:48.184 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000363758 s, 11.3 MB/s 00:22:48.184 13:57:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:22:48.184 13:57:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@886 -- # size=4096 00:22:48.184 13:57:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:22:48.184 13:57:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:22:48.184 13:57:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@889 -- # return 0 00:22:48.184 13:57:01 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:22:48.184 [2024-10-15 13:57:01.862630] Starting SPDK v25.01-pre git sha1 5a8c76d99 / DPDK 24.03.0 initialization... 00:22:48.184 [2024-10-15 13:57:01.863245] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76837 ] 00:22:48.442 [2024-10-15 13:57:02.004265] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:48.442 [2024-10-15 13:57:02.106252] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:49.818  [2024-10-15T13:57:04.541Z] Copying: 195/1024 [MB] (195 MBps) [2024-10-15T13:57:05.475Z] Copying: 392/1024 [MB] (196 MBps) [2024-10-15T13:57:06.411Z] Copying: 588/1024 [MB] (196 MBps) [2024-10-15T13:57:07.345Z] Copying: 838/1024 [MB] (250 MBps) [2024-10-15T13:57:07.912Z] Copying: 1024/1024 [MB] (average 216 MBps) 00:22:54.124 00:22:54.124 13:57:07 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:22:56.063 13:57:09 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:22:56.063 [2024-10-15 13:57:09.710235] Starting SPDK v25.01-pre git sha1 5a8c76d99 / DPDK 24.03.0 initialization... 
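The waitfornbd xtrace above polls /proc/partitions until the nbd device appears, then proves the device answers I/O with a single 4 KiB direct read. A minimal sketch of that pattern in the same shell style, with the retry delay assumed (this excerpt only shows the loop bounds and the grep) and the scratch path simplified from the repo's test/ftl/nbdtest:

    # wait until the kernel has registered the nbd device, then smoke-test it
    waitfornbd() {
        local nbd_name=$1 i size
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1    # assumed delay; not visible in this trace
        done
        # one 4 KiB direct read confirms the device actually serves I/O
        dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct || return 1
        size=$(stat -c %s /tmp/nbdtest)
        rm -f /tmp/nbdtest
        [[ $size -ne 0 ]]
    }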
00:22:56.063 [2024-10-15 13:57:09.710328] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76924 ] 00:22:56.321 [2024-10-15 13:57:09.856245] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:56.321 [2024-10-15 13:57:09.956797] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:57.696  [2024-10-15T13:57:12.418Z] Copying: 29/1024 [MB] (29 MBps) [2024-10-15T13:57:13.351Z] Copying: 61/1024 [MB] (32 MBps) [2024-10-15T13:57:14.284Z] Copying: 90/1024 [MB] (28 MBps) [2024-10-15T13:57:15.217Z] Copying: 115/1024 [MB] (25 MBps) [2024-10-15T13:57:16.590Z] Copying: 144/1024 [MB] (28 MBps) [2024-10-15T13:57:17.525Z] Copying: 175/1024 [MB] (30 MBps) [2024-10-15T13:57:18.463Z] Copying: 202/1024 [MB] (27 MBps) [2024-10-15T13:57:19.397Z] Copying: 230/1024 [MB] (27 MBps) [2024-10-15T13:57:20.330Z] Copying: 261/1024 [MB] (30 MBps) [2024-10-15T13:57:21.264Z] Copying: 290/1024 [MB] (29 MBps) [2024-10-15T13:57:22.206Z] Copying: 321/1024 [MB] (30 MBps) [2024-10-15T13:57:23.594Z] Copying: 350/1024 [MB] (29 MBps) [2024-10-15T13:57:24.527Z] Copying: 380/1024 [MB] (29 MBps) [2024-10-15T13:57:25.462Z] Copying: 409/1024 [MB] (29 MBps) [2024-10-15T13:57:26.396Z] Copying: 438/1024 [MB] (28 MBps) [2024-10-15T13:57:27.329Z] Copying: 470/1024 [MB] (31 MBps) [2024-10-15T13:57:28.262Z] Copying: 498/1024 [MB] (28 MBps) [2024-10-15T13:57:29.194Z] Copying: 529/1024 [MB] (31 MBps) [2024-10-15T13:57:30.563Z] Copying: 558/1024 [MB] (28 MBps) [2024-10-15T13:57:31.494Z] Copying: 588/1024 [MB] (29 MBps) [2024-10-15T13:57:32.431Z] Copying: 622/1024 [MB] (34 MBps) [2024-10-15T13:57:33.371Z] Copying: 656/1024 [MB] (33 MBps) [2024-10-15T13:57:34.304Z] Copying: 689/1024 [MB] (33 MBps) [2024-10-15T13:57:35.236Z] Copying: 719/1024 [MB] (30 MBps) [2024-10-15T13:57:36.607Z] Copying: 746/1024 [MB] (26 MBps) [2024-10-15T13:57:37.538Z] Copying: 779/1024 [MB] (32 MBps) [2024-10-15T13:57:38.471Z] Copying: 807/1024 [MB] (28 MBps) [2024-10-15T13:57:39.404Z] Copying: 836/1024 [MB] (28 MBps) [2024-10-15T13:57:40.335Z] Copying: 867/1024 [MB] (31 MBps) [2024-10-15T13:57:41.273Z] Copying: 895/1024 [MB] (28 MBps) [2024-10-15T13:57:42.207Z] Copying: 924/1024 [MB] (28 MBps) [2024-10-15T13:57:43.582Z] Copying: 957/1024 [MB] (33 MBps) [2024-10-15T13:57:44.516Z] Copying: 991/1024 [MB] (33 MBps) [2024-10-15T13:57:44.516Z] Copying: 1022/1024 [MB] (31 MBps) [2024-10-15T13:57:45.083Z] Copying: 1024/1024 [MB] (average 30 MBps) 00:23:31.295 00:23:31.295 13:57:44 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:23:31.295 13:57:44 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:23:31.295 13:57:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:23:31.555 [2024-10-15 13:57:45.179368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:31.555 [2024-10-15 13:57:45.179430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:31.555 [2024-10-15 13:57:45.179444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:31.555 [2024-10-15 13:57:45.179453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.555 [2024-10-15 13:57:45.179476] 
mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:31.555 [2024-10-15 13:57:45.182082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:31.555 [2024-10-15 13:57:45.182276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:31.555 [2024-10-15 13:57:45.182296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.588 ms 00:23:31.555 [2024-10-15 13:57:45.182305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.555 [2024-10-15 13:57:45.183913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:31.555 [2024-10-15 13:57:45.183939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:31.555 [2024-10-15 13:57:45.183950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.574 ms 00:23:31.555 [2024-10-15 13:57:45.183958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.555 [2024-10-15 13:57:45.199790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:31.555 [2024-10-15 13:57:45.199822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:31.555 [2024-10-15 13:57:45.199835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.811 ms 00:23:31.555 [2024-10-15 13:57:45.199846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.555 [2024-10-15 13:57:45.206028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:31.555 [2024-10-15 13:57:45.206176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:31.555 [2024-10-15 13:57:45.206195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.148 ms 00:23:31.555 [2024-10-15 13:57:45.206202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.555 [2024-10-15 13:57:45.229343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:31.555 [2024-10-15 13:57:45.229378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:31.555 [2024-10-15 13:57:45.229391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.014 ms 00:23:31.555 [2024-10-15 13:57:45.229399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.555 [2024-10-15 13:57:45.244066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:31.555 [2024-10-15 13:57:45.244114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:31.555 [2024-10-15 13:57:45.244129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.623 ms 00:23:31.555 [2024-10-15 13:57:45.244137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.555 [2024-10-15 13:57:45.244314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:31.555 [2024-10-15 13:57:45.244328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:31.555 [2024-10-15 13:57:45.244339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.138 ms 00:23:31.555 [2024-10-15 13:57:45.244346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.555 [2024-10-15 13:57:45.267142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:31.555 [2024-10-15 13:57:45.267173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:23:31.555 [2024-10-15 13:57:45.267185] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.775 ms 00:23:31.555 [2024-10-15 13:57:45.267193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.555 [2024-10-15 13:57:45.289697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:31.555 [2024-10-15 13:57:45.289734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:23:31.555 [2024-10-15 13:57:45.289748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.451 ms 00:23:31.555 [2024-10-15 13:57:45.289756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.555 [2024-10-15 13:57:45.311842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:31.555 [2024-10-15 13:57:45.311874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:31.555 [2024-10-15 13:57:45.311888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.044 ms 00:23:31.555 [2024-10-15 13:57:45.311896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.555 [2024-10-15 13:57:45.334062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:31.555 [2024-10-15 13:57:45.334242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:31.555 [2024-10-15 13:57:45.334262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.088 ms 00:23:31.555 [2024-10-15 13:57:45.334270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.555 [2024-10-15 13:57:45.334305] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:31.555 [2024-10-15 13:57:45.334320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:23:31.555 [2024-10-15 13:57:45.334332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:31.555 [2024-10-15 13:57:45.334340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:31.555 [2024-10-15 13:57:45.334349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:31.555 [2024-10-15 13:57:45.334357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:31.555 [2024-10-15 13:57:45.334366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:31.555 [2024-10-15 13:57:45.334374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:31.555 [2024-10-15 13:57:45.334385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:31.555 [2024-10-15 13:57:45.334392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:31.555 [2024-10-15 13:57:45.334402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:31.555 [2024-10-15 13:57:45.334410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:31.555 [2024-10-15 13:57:45.334419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:31.555 [2024-10-15 13:57:45.334427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:31.555 [2024-10-15 13:57:45.334436] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:31.555 [2024-10-15 13:57:45.334443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:31.555 [2024-10-15 13:57:45.334453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:31.555 [2024-10-15 13:57:45.334460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:31.555 [2024-10-15 13:57:45.334469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:31.555 [2024-10-15 13:57:45.334477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:31.555 [2024-10-15 13:57:45.334487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:31.555 [2024-10-15 13:57:45.334495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:31.555 [2024-10-15 13:57:45.334503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:31.555 [2024-10-15 13:57:45.334510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:31.555 [2024-10-15 13:57:45.334525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:31.555 [2024-10-15 13:57:45.334532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:31.555 [2024-10-15 13:57:45.334541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:31.555 [2024-10-15 13:57:45.334550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:31.555 [2024-10-15 13:57:45.334560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:31.555 [2024-10-15 13:57:45.334569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:31.555 [2024-10-15 13:57:45.334578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:31.555 [2024-10-15 13:57:45.334585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:31.555 [2024-10-15 13:57:45.334594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:31.555 [2024-10-15 13:57:45.334602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:31.555 [2024-10-15 13:57:45.334611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:31.555 [2024-10-15 13:57:45.334618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:31.555 [2024-10-15 13:57:45.334628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:31.555 [2024-10-15 13:57:45.334635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:31.555 [2024-10-15 13:57:45.334644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:31.555 [2024-10-15 
13:57:45.334651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:31.555 [2024-10-15 13:57:45.334661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:31.555 [2024-10-15 13:57:45.334668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:31.555 [2024-10-15 13:57:45.334677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:31.555 [2024-10-15 13:57:45.334685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:31.555 [2024-10-15 13:57:45.334694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:31.555 [2024-10-15 13:57:45.334702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:31.556 [2024-10-15 13:57:45.334712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:31.556 [2024-10-15 13:57:45.334719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:31.556 [2024-10-15 13:57:45.334728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:31.556 [2024-10-15 13:57:45.334742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:31.556 [2024-10-15 13:57:45.334751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:31.556 [2024-10-15 13:57:45.334758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:31.556 [2024-10-15 13:57:45.334767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:31.556 [2024-10-15 13:57:45.334782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:31.556 [2024-10-15 13:57:45.334791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:31.556 [2024-10-15 13:57:45.334798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:31.556 [2024-10-15 13:57:45.334808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:31.556 [2024-10-15 13:57:45.334815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:31.556 [2024-10-15 13:57:45.334823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:31.556 [2024-10-15 13:57:45.334831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:31.556 [2024-10-15 13:57:45.334840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:31.556 [2024-10-15 13:57:45.334848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:31.556 [2024-10-15 13:57:45.334857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:31.556 [2024-10-15 13:57:45.334865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 
00:23:31.556 [2024-10-15 13:57:45.334873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:31.556 [2024-10-15 13:57:45.334881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:31.556 [2024-10-15 13:57:45.334890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:31.556 [2024-10-15 13:57:45.334897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:31.556 [2024-10-15 13:57:45.334906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:31.556 [2024-10-15 13:57:45.334913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:31.556 [2024-10-15 13:57:45.334922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:31.556 [2024-10-15 13:57:45.334930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:31.556 [2024-10-15 13:57:45.334941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:31.556 [2024-10-15 13:57:45.334948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:31.556 [2024-10-15 13:57:45.334957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:31.556 [2024-10-15 13:57:45.334964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:31.556 [2024-10-15 13:57:45.334973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:31.556 [2024-10-15 13:57:45.334980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:31.556 [2024-10-15 13:57:45.334990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:31.556 [2024-10-15 13:57:45.334997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:31.556 [2024-10-15 13:57:45.335006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:31.556 [2024-10-15 13:57:45.335013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:31.556 [2024-10-15 13:57:45.335022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:31.556 [2024-10-15 13:57:45.335029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:31.556 [2024-10-15 13:57:45.335038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:31.556 [2024-10-15 13:57:45.335045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:31.556 [2024-10-15 13:57:45.335055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:31.556 [2024-10-15 13:57:45.335062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:31.556 [2024-10-15 13:57:45.335073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 
wr_cnt: 0 state: free 00:23:31.556 [2024-10-15 13:57:45.335080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:31.556 [2024-10-15 13:57:45.335088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:31.556 [2024-10-15 13:57:45.335096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:31.556 [2024-10-15 13:57:45.335105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:31.556 [2024-10-15 13:57:45.335113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:31.556 [2024-10-15 13:57:45.335122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:31.556 [2024-10-15 13:57:45.335130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:31.556 [2024-10-15 13:57:45.335139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:31.556 [2024-10-15 13:57:45.335146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:31.556 [2024-10-15 13:57:45.335156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:31.556 [2024-10-15 13:57:45.335163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:31.556 [2024-10-15 13:57:45.335172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:31.556 [2024-10-15 13:57:45.335187] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:31.556 [2024-10-15 13:57:45.335197] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 53c34892-29d3-4e41-9c0d-fabe059a629d 00:23:31.556 [2024-10-15 13:57:45.335205] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:23:31.556 [2024-10-15 13:57:45.335214] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:23:31.556 [2024-10-15 13:57:45.335230] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:23:31.556 [2024-10-15 13:57:45.335239] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:23:31.556 [2024-10-15 13:57:45.335246] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:31.556 [2024-10-15 13:57:45.335254] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:31.556 [2024-10-15 13:57:45.335264] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:31.556 [2024-10-15 13:57:45.335271] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:31.556 [2024-10-15 13:57:45.335278] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:31.556 [2024-10-15 13:57:45.335286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:31.556 [2024-10-15 13:57:45.335293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:31.556 [2024-10-15 13:57:45.335303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.982 ms 00:23:31.556 [2024-10-15 13:57:45.335310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.815 [2024-10-15 13:57:45.347815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
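All 100 bands in the dump above report the same line, 0 / 261120 wr_cnt: 0 state: free, and the statistics show 960 total writes against 0 user writes, which is consistent with WAF (presumably total writes over user writes, here a zero denominator) printing as inf: only FTL metadata has been written at this point. When scanning dumps like this, a one-liner can tally band states from a saved log instead of eyeballing them (a sketch; ftl.log is a placeholder path):

    grep -o 'state: [a-z]*' ftl.log | sort | uniq -c    # prints e.g. "100 state: free"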
00:23:31.815 [2024-10-15 13:57:45.347847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:31.815 [2024-10-15 13:57:45.347860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.467 ms 00:23:31.815 [2024-10-15 13:57:45.347870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.815 [2024-10-15 13:57:45.348256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:31.815 [2024-10-15 13:57:45.348267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:31.815 [2024-10-15 13:57:45.348277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.356 ms 00:23:31.815 [2024-10-15 13:57:45.348285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.815 [2024-10-15 13:57:45.389507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:31.815 [2024-10-15 13:57:45.389653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:31.815 [2024-10-15 13:57:45.389672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:31.815 [2024-10-15 13:57:45.389682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.815 [2024-10-15 13:57:45.389755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:31.815 [2024-10-15 13:57:45.389763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:31.815 [2024-10-15 13:57:45.389772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:31.815 [2024-10-15 13:57:45.389780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.815 [2024-10-15 13:57:45.389855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:31.815 [2024-10-15 13:57:45.389865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:31.815 [2024-10-15 13:57:45.389874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:31.815 [2024-10-15 13:57:45.389881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.815 [2024-10-15 13:57:45.389904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:31.815 [2024-10-15 13:57:45.389912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:31.815 [2024-10-15 13:57:45.389920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:31.815 [2024-10-15 13:57:45.389928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.815 [2024-10-15 13:57:45.465608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:31.815 [2024-10-15 13:57:45.465655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:31.815 [2024-10-15 13:57:45.465670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:31.815 [2024-10-15 13:57:45.465680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.815 [2024-10-15 13:57:45.527647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:31.815 [2024-10-15 13:57:45.527694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:31.815 [2024-10-15 13:57:45.527707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:31.815 [2024-10-15 13:57:45.527716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.815 [2024-10-15 
13:57:45.527828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:31.815 [2024-10-15 13:57:45.527837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:31.815 [2024-10-15 13:57:45.527847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:31.815 [2024-10-15 13:57:45.527855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.815 [2024-10-15 13:57:45.527904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:31.815 [2024-10-15 13:57:45.527918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:31.815 [2024-10-15 13:57:45.527928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:31.815 [2024-10-15 13:57:45.527935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.815 [2024-10-15 13:57:45.528025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:31.815 [2024-10-15 13:57:45.528036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:31.815 [2024-10-15 13:57:45.528046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:31.815 [2024-10-15 13:57:45.528053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.815 [2024-10-15 13:57:45.528085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:31.815 [2024-10-15 13:57:45.528106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:31.815 [2024-10-15 13:57:45.528119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:31.815 [2024-10-15 13:57:45.528127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.815 [2024-10-15 13:57:45.528165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:31.815 [2024-10-15 13:57:45.528174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:31.815 [2024-10-15 13:57:45.528184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:31.815 [2024-10-15 13:57:45.528192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.815 [2024-10-15 13:57:45.528264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:31.815 [2024-10-15 13:57:45.528278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:31.815 [2024-10-15 13:57:45.528288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:31.815 [2024-10-15 13:57:45.528295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.815 [2024-10-15 13:57:45.528430] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 349.034 ms, result 0 00:23:31.815 true 00:23:31.815 13:57:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 76706 00:23:31.815 13:57:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid76706 00:23:31.816 13:57:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:23:32.075 [2024-10-15 13:57:45.619142] Starting SPDK v25.01-pre git sha1 5a8c76d99 / DPDK 24.03.0 initialization... 
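This is the core of the dirty-shutdown scenario: after ftl0 is unloaded, the spdk_tgt process is killed with SIGKILL so no further teardown can run, fresh random data is staged, and a second spdk_dd replays the saved JSON config to bring ftl0 back and write into the upper half of the device. Condensed from the trace (the PID is this run's; paths abbreviated):

    kill -9 76706                           # SIGKILL spdk_tgt: no clean exit path can run
    rm -f /dev/shm/spdk_tgt_trace.pid76706
    spdk_dd --if=/dev/urandom --of=test/ftl/testfile2 --bs=4096 --count=262144
    # assuming ftl0's 4 KiB block size, --seek=262144 starts this write 1 GiB into ftl0
    spdk_dd --if=test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 \
            --json=test/ftl/config/ftl.json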
00:23:32.075 [2024-10-15 13:57:45.619513] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77303 ] 00:23:32.075 [2024-10-15 13:57:45.770391] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:32.333 [2024-10-15 13:57:45.871212] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:33.707  [2024-10-15T13:57:48.431Z] Copying: 214/1024 [MB] (214 MBps) [2024-10-15T13:57:49.366Z] Copying: 465/1024 [MB] (251 MBps) [2024-10-15T13:57:50.308Z] Copying: 712/1024 [MB] (246 MBps) [2024-10-15T13:57:50.599Z] Copying: 958/1024 [MB] (245 MBps) [2024-10-15T13:57:51.182Z] Copying: 1024/1024 [MB] (average 239 MBps) 00:23:37.394 00:23:37.394 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 76706 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:23:37.394 13:57:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:23:37.394 [2024-10-15 13:57:51.010942] Starting SPDK v25.01-pre git sha1 5a8c76d99 / DPDK 24.03.0 initialization... 00:23:37.394 [2024-10-15 13:57:51.011063] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77361 ] 00:23:37.394 [2024-10-15 13:57:51.159286] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:37.653 [2024-10-15 13:57:51.246004] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:37.911 [2024-10-15 13:57:51.463031] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:37.911 [2024-10-15 13:57:51.463261] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:37.911 [2024-10-15 13:57:51.525834] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:23:37.911 [2024-10-15 13:57:51.526182] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:23:37.911 [2024-10-15 13:57:51.526338] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:23:38.172 [2024-10-15 13:57:51.704237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.172 [2024-10-15 13:57:51.704286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:38.172 [2024-10-15 13:57:51.704297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:23:38.172 [2024-10-15 13:57:51.704304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.172 [2024-10-15 13:57:51.704347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.172 [2024-10-15 13:57:51.704355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:38.172 [2024-10-15 13:57:51.704365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:23:38.172 [2024-10-15 13:57:51.704371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.172 [2024-10-15 13:57:51.704386] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:38.172 
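The two "unable to find bdev ... nvc0n1" notices right after the restart look alarming but seem to be benign retries: the replayed config is still building the bdev stack when ftl0 first asks for its cache device, and the open succeeds once nvc0n1 registers (the blobstore "Performing recovery" lines suggest a logical-volume store underneath that had to replay its own metadata first). The config being replayed was captured before the kill by wrapping save_subsystem_config output in a subsystems envelope, per the @64-66 trace earlier; the redirect into ftl.json is implied by the later --json argument rather than shown:

    { echo '{"subsystems": ['
      scripts/rpc.py save_subsystem_config -n bdev
      echo ']}'
    } > test/ftl/config/ftl.json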
[2024-10-15 13:57:51.705016] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:38.172 [2024-10-15 13:57:51.705033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.172 [2024-10-15 13:57:51.705041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:38.172 [2024-10-15 13:57:51.705053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.650 ms 00:23:38.172 [2024-10-15 13:57:51.705060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.172 [2024-10-15 13:57:51.706098] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:23:38.172 [2024-10-15 13:57:51.716067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.172 [2024-10-15 13:57:51.716115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:23:38.172 [2024-10-15 13:57:51.716128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.969 ms 00:23:38.172 [2024-10-15 13:57:51.716136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.172 [2024-10-15 13:57:51.716191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.172 [2024-10-15 13:57:51.716199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:23:38.172 [2024-10-15 13:57:51.716206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:23:38.172 [2024-10-15 13:57:51.716211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.172 [2024-10-15 13:57:51.721154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.172 [2024-10-15 13:57:51.721189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:38.172 [2024-10-15 13:57:51.721198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.870 ms 00:23:38.172 [2024-10-15 13:57:51.721205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.172 [2024-10-15 13:57:51.721280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.172 [2024-10-15 13:57:51.721288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:38.172 [2024-10-15 13:57:51.721294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:23:38.172 [2024-10-15 13:57:51.721301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.172 [2024-10-15 13:57:51.721341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.172 [2024-10-15 13:57:51.721349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:38.172 [2024-10-15 13:57:51.721359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:38.172 [2024-10-15 13:57:51.721365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.172 [2024-10-15 13:57:51.721383] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:38.172 [2024-10-15 13:57:51.724384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.172 [2024-10-15 13:57:51.724498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:38.172 [2024-10-15 13:57:51.724550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.005 ms 00:23:38.172 [2024-10-15 13:57:51.724569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:23:38.172 [2024-10-15 13:57:51.724606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.172 [2024-10-15 13:57:51.724669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:38.172 [2024-10-15 13:57:51.724689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:23:38.172 [2024-10-15 13:57:51.724704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.172 [2024-10-15 13:57:51.724751] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:38.172 [2024-10-15 13:57:51.724783] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:23:38.172 [2024-10-15 13:57:51.724838] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:38.172 [2024-10-15 13:57:51.724963] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:23:38.172 [2024-10-15 13:57:51.725067] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:38.172 [2024-10-15 13:57:51.725100] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:38.172 [2024-10-15 13:57:51.725161] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:23:38.172 [2024-10-15 13:57:51.725187] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:38.172 [2024-10-15 13:57:51.725211] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:38.172 [2024-10-15 13:57:51.725277] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:38.172 [2024-10-15 13:57:51.725321] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:38.172 [2024-10-15 13:57:51.725338] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:38.172 [2024-10-15 13:57:51.725370] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:38.172 [2024-10-15 13:57:51.725388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.172 [2024-10-15 13:57:51.725403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:38.172 [2024-10-15 13:57:51.725418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.639 ms 00:23:38.172 [2024-10-15 13:57:51.725433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.172 [2024-10-15 13:57:51.725515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.172 [2024-10-15 13:57:51.725608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:38.172 [2024-10-15 13:57:51.725627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:23:38.172 [2024-10-15 13:57:51.725646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.172 [2024-10-15 13:57:51.725752] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:38.172 [2024-10-15 13:57:51.725866] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:38.172 [2024-10-15 13:57:51.725885] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:38.172 [2024-10-15 13:57:51.725901] 
ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:38.172 [2024-10-15 13:57:51.725916] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:38.172 [2024-10-15 13:57:51.725930] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:38.172 [2024-10-15 13:57:51.725944] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:38.172 [2024-10-15 13:57:51.725959] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:38.172 [2024-10-15 13:57:51.725973] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:38.172 [2024-10-15 13:57:51.726074] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:38.172 [2024-10-15 13:57:51.726082] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:38.172 [2024-10-15 13:57:51.726093] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:38.172 [2024-10-15 13:57:51.726099] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:38.172 [2024-10-15 13:57:51.726105] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:38.172 [2024-10-15 13:57:51.726111] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:23:38.172 [2024-10-15 13:57:51.726117] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:38.172 [2024-10-15 13:57:51.726122] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:38.172 [2024-10-15 13:57:51.726127] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:23:38.172 [2024-10-15 13:57:51.726133] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:38.172 [2024-10-15 13:57:51.726138] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:38.172 [2024-10-15 13:57:51.726143] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:38.172 [2024-10-15 13:57:51.726148] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:38.172 [2024-10-15 13:57:51.726153] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:38.172 [2024-10-15 13:57:51.726159] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:38.172 [2024-10-15 13:57:51.726164] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:38.172 [2024-10-15 13:57:51.726170] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:38.172 [2024-10-15 13:57:51.726176] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:38.172 [2024-10-15 13:57:51.726181] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:38.172 [2024-10-15 13:57:51.726187] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:38.172 [2024-10-15 13:57:51.726193] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:23:38.172 [2024-10-15 13:57:51.726198] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:38.172 [2024-10-15 13:57:51.726203] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:38.172 [2024-10-15 13:57:51.726208] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:23:38.172 [2024-10-15 13:57:51.726213] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:38.172 [2024-10-15 13:57:51.726233] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:38.172 
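A quick cross-check on the layout dump: the l2p region size follows directly from the geometry printed just above it, 20971520 L2P entries at an address size of 4 bytes per entry:

    echo $(( 20971520 * 4 / 1024 / 1024 ))    # 80, matching "Region l2p ... blocks: 80.00 MiB"

The same entry count times the 4 KiB block size gives 80 GiB of user-visible capacity carved out of the 102400 MiB data_btm region; the remainder is presumably over-provisioning headroom.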
[2024-10-15 13:57:51.726241] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:23:38.172 [2024-10-15 13:57:51.726251] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:38.172 [2024-10-15 13:57:51.726258] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:38.172 [2024-10-15 13:57:51.726263] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:23:38.172 [2024-10-15 13:57:51.726269] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:38.172 [2024-10-15 13:57:51.726274] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:38.172 [2024-10-15 13:57:51.726279] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:23:38.172 [2024-10-15 13:57:51.726285] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:38.172 [2024-10-15 13:57:51.726290] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:38.172 [2024-10-15 13:57:51.726296] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:38.172 [2024-10-15 13:57:51.726302] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:38.172 [2024-10-15 13:57:51.726309] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:38.173 [2024-10-15 13:57:51.726317] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:38.173 [2024-10-15 13:57:51.726322] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:38.173 [2024-10-15 13:57:51.726328] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:38.173 [2024-10-15 13:57:51.726334] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:38.173 [2024-10-15 13:57:51.726339] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:38.173 [2024-10-15 13:57:51.726344] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:38.173 [2024-10-15 13:57:51.726351] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:38.173 [2024-10-15 13:57:51.726358] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:38.173 [2024-10-15 13:57:51.726365] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:38.173 [2024-10-15 13:57:51.726370] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:23:38.173 [2024-10-15 13:57:51.726376] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:23:38.173 [2024-10-15 13:57:51.726381] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:23:38.173 [2024-10-15 13:57:51.726388] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:23:38.173 [2024-10-15 13:57:51.726394] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:23:38.173 [2024-10-15 13:57:51.726406] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 
blk_sz:0x800 00:23:38.173 [2024-10-15 13:57:51.726412] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:23:38.173 [2024-10-15 13:57:51.726422] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:23:38.173 [2024-10-15 13:57:51.726431] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:23:38.173 [2024-10-15 13:57:51.726441] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:23:38.173 [2024-10-15 13:57:51.726449] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:23:38.173 [2024-10-15 13:57:51.726455] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:23:38.173 [2024-10-15 13:57:51.726461] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:23:38.173 [2024-10-15 13:57:51.726467] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:38.173 [2024-10-15 13:57:51.726474] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:38.173 [2024-10-15 13:57:51.726480] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:38.173 [2024-10-15 13:57:51.726486] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:38.173 [2024-10-15 13:57:51.726492] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:38.173 [2024-10-15 13:57:51.726498] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:38.173 [2024-10-15 13:57:51.726504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.173 [2024-10-15 13:57:51.726511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:38.173 [2024-10-15 13:57:51.726517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.806 ms 00:23:38.173 [2024-10-15 13:57:51.726523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.173 [2024-10-15 13:57:51.748841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.173 [2024-10-15 13:57:51.749004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:38.173 [2024-10-15 13:57:51.749053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.267 ms 00:23:38.173 [2024-10-15 13:57:51.749072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.173 [2024-10-15 13:57:51.749161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.173 [2024-10-15 13:57:51.749178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:38.173 [2024-10-15 13:57:51.749197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:23:38.173 [2024-10-15 
13:57:51.749212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.173 [2024-10-15 13:57:51.799413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.173 [2024-10-15 13:57:51.799615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:38.173 [2024-10-15 13:57:51.799669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.108 ms 00:23:38.173 [2024-10-15 13:57:51.799689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.173 [2024-10-15 13:57:51.799756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.173 [2024-10-15 13:57:51.799776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:38.173 [2024-10-15 13:57:51.799793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:23:38.173 [2024-10-15 13:57:51.799809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.173 [2024-10-15 13:57:51.800238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.173 [2024-10-15 13:57:51.800283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:38.173 [2024-10-15 13:57:51.800300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.336 ms 00:23:38.173 [2024-10-15 13:57:51.800317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.173 [2024-10-15 13:57:51.800443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.173 [2024-10-15 13:57:51.800476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:38.173 [2024-10-15 13:57:51.800493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:23:38.173 [2024-10-15 13:57:51.800508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.173 [2024-10-15 13:57:51.811719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.173 [2024-10-15 13:57:51.811862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:38.173 [2024-10-15 13:57:51.811960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.140 ms 00:23:38.173 [2024-10-15 13:57:51.811979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.173 [2024-10-15 13:57:51.822198] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:23:38.173 [2024-10-15 13:57:51.822391] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:38.173 [2024-10-15 13:57:51.822453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.173 [2024-10-15 13:57:51.822470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:38.173 [2024-10-15 13:57:51.822487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.334 ms 00:23:38.173 [2024-10-15 13:57:51.822502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.173 [2024-10-15 13:57:51.842420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.173 [2024-10-15 13:57:51.842587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:38.173 [2024-10-15 13:57:51.842615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.869 ms 00:23:38.173 [2024-10-15 13:57:51.842622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:23:38.173 [2024-10-15 13:57:51.852013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.173 [2024-10-15 13:57:51.852048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:38.173 [2024-10-15 13:57:51.852057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.342 ms 00:23:38.173 [2024-10-15 13:57:51.852063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.173 [2024-10-15 13:57:51.861429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.173 [2024-10-15 13:57:51.861543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:23:38.173 [2024-10-15 13:57:51.861585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.324 ms 00:23:38.173 [2024-10-15 13:57:51.861603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.173 [2024-10-15 13:57:51.862122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.173 [2024-10-15 13:57:51.862154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:38.173 [2024-10-15 13:57:51.862217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.442 ms 00:23:38.173 [2024-10-15 13:57:51.862253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.173 [2024-10-15 13:57:51.908669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.173 [2024-10-15 13:57:51.908849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:38.173 [2024-10-15 13:57:51.908896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.387 ms 00:23:38.173 [2024-10-15 13:57:51.908914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.173 [2024-10-15 13:57:51.917747] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:23:38.173 [2024-10-15 13:57:51.920345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.173 [2024-10-15 13:57:51.920443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:38.173 [2024-10-15 13:57:51.920523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.382 ms 00:23:38.173 [2024-10-15 13:57:51.920546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.173 [2024-10-15 13:57:51.920656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.173 [2024-10-15 13:57:51.920699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:38.173 [2024-10-15 13:57:51.920715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:23:38.173 [2024-10-15 13:57:51.920992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.173 [2024-10-15 13:57:51.921106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.173 [2024-10-15 13:57:51.921132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:38.173 [2024-10-15 13:57:51.921207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:23:38.173 [2024-10-15 13:57:51.921244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.173 [2024-10-15 13:57:51.921277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.173 [2024-10-15 13:57:51.921295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:38.173 
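Editor's note: every management step in this trace is reported by the same four trace_step lines (Action, name, duration, status); the "Start core poller" step above continues just below, and finish_msg then totals the whole sequence ("'FTL startup', duration = 237.282 ms"). A small hypothetical filter (not part of SPDK) that sums the per-step durations from a captured log for comparison, assuming one log entry per line:

```c
/* Editor's sketch, hypothetical tooling: reads a captured FTL log on stdin
 * and sums the "duration: <x> ms" values that trace_step prints per step.
 * finish_msg prints "duration = " (equals sign), so the totals line does
 * not match and is not double-counted. Assumes one log entry per line. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    char line[8192];
    double total_ms = 0.0;

    while (fgets(line, sizeof(line), stdin) != NULL) {
        const char *p = strstr(line, "duration: ");
        if (p != NULL)                       /* e.g. "duration: 22.267 ms" */
            total_ms += strtod(p + strlen("duration: "), NULL);
    }
    printf("sum of trace_step durations: %.3f ms\n", total_ms);
    return 0;
}
```

The step sum comes in slightly under the reported wall-clock total, since time spent between steps is not attributed to any step.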
[2024-10-15 13:57:51.921358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:38.173 [2024-10-15 13:57:51.921381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.173 [2024-10-15 13:57:51.921414] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:38.173 [2024-10-15 13:57:51.921424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.173 [2024-10-15 13:57:51.921430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:23:38.173 [2024-10-15 13:57:51.921437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:23:38.173 [2024-10-15 13:57:51.921443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.173 [2024-10-15 13:57:51.940865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.173 [2024-10-15 13:57:51.940914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:38.173 [2024-10-15 13:57:51.940924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.406 ms 00:23:38.173 [2024-10-15 13:57:51.940931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.173 [2024-10-15 13:57:51.941000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.173 [2024-10-15 13:57:51.941009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:38.173 [2024-10-15 13:57:51.941016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:23:38.173 [2024-10-15 13:57:51.941022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.173 [2024-10-15 13:57:51.941869] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 237.282 ms, result 0 00:23:39.547  [2024-10-15T13:57:54.267Z] Copying: 43/1024 [MB] (43 MBps) … [2024-10-15T13:58:16.214Z] Copying: 1024/1024 [MB] (average 42 MBps)[2024-10-15 13:58:16.064781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.426 [2024-10-15 13:58:16.064871]
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:02.426 [2024-10-15 13:58:16.064888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:24:02.426 [2024-10-15 13:58:16.064898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.426 [2024-10-15 13:58:16.066925] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:02.426 [2024-10-15 13:58:16.072923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.426 [2024-10-15 13:58:16.072959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:02.426 [2024-10-15 13:58:16.072972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.959 ms 00:24:02.426 [2024-10-15 13:58:16.072981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.426 [2024-10-15 13:58:16.093153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.426 [2024-10-15 13:58:16.093206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:02.426 [2024-10-15 13:58:16.093234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.258 ms 00:24:02.426 [2024-10-15 13:58:16.093243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.426 [2024-10-15 13:58:16.113106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.426 [2024-10-15 13:58:16.113144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:02.426 [2024-10-15 13:58:16.113155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.845 ms 00:24:02.426 [2024-10-15 13:58:16.113163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.426 [2024-10-15 13:58:16.119318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.426 [2024-10-15 13:58:16.119460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:02.426 [2024-10-15 13:58:16.119485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.127 ms 00:24:02.426 [2024-10-15 13:58:16.119493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.426 [2024-10-15 13:58:16.144069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.426 [2024-10-15 13:58:16.144116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:02.426 [2024-10-15 13:58:16.144128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.524 ms 00:24:02.426 [2024-10-15 13:58:16.144135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.426 [2024-10-15 13:58:16.158174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.426 [2024-10-15 13:58:16.158213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:02.426 [2024-10-15 13:58:16.158243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.000 ms 00:24:02.426 [2024-10-15 13:58:16.158252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.685 [2024-10-15 13:58:16.221042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.685 [2024-10-15 13:58:16.221106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:02.685 [2024-10-15 13:58:16.221119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 62.741 ms 00:24:02.685 [2024-10-15 13:58:16.221127] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.685 [2024-10-15 13:58:16.245605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.685 [2024-10-15 13:58:16.245817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:24:02.685 [2024-10-15 13:58:16.245836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.451 ms 00:24:02.685 [2024-10-15 13:58:16.245846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.685 [2024-10-15 13:58:16.269523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.685 [2024-10-15 13:58:16.269727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:24:02.685 [2024-10-15 13:58:16.269810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.638 ms 00:24:02.685 [2024-10-15 13:58:16.269833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.685 [2024-10-15 13:58:16.292778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.685 [2024-10-15 13:58:16.292949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:02.685 [2024-10-15 13:58:16.293001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.888 ms 00:24:02.685 [2024-10-15 13:58:16.293023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.685 [2024-10-15 13:58:16.316126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.685 [2024-10-15 13:58:16.316308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:02.685 [2024-10-15 13:58:16.316362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.026 ms 00:24:02.685 [2024-10-15 13:58:16.316384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.685 [2024-10-15 13:58:16.316447] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:02.685 [2024-10-15 13:58:16.316480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 128256 / 261120 wr_cnt: 1 state: open 00:24:02.685 [2024-10-15 13:58:16.316512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:02.685 [2024-10-15 13:58:16.316541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:02.685 [2024-10-15 13:58:16.316570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:02.685 [2024-10-15 13:58:16.316598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:02.685 [2024-10-15 13:58:16.316627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:02.685 [2024-10-15 13:58:16.316695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:02.685 [2024-10-15 13:58:16.316724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:02.685 [2024-10-15 13:58:16.316753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:02.685 [2024-10-15 13:58:16.316811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:02.685 [2024-10-15 13:58:16.316843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 
state: free 00:24:02.685 [2024-10-15 13:58:16.316871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:02.685 [2024-10-15 13:58:16.316941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:02.685 [2024-10-15 13:58:16.316972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:02.685 [2024-10-15 13:58:16.317000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:02.685 [2024-10-15 13:58:16.317275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:02.685 [2024-10-15 13:58:16.317390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:02.685 [2024-10-15 13:58:16.317424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:02.685 [2024-10-15 13:58:16.317453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:02.685 [2024-10-15 13:58:16.317481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:02.685 [2024-10-15 13:58:16.317552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:02.685 [2024-10-15 13:58:16.317583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:02.685 [2024-10-15 13:58:16.317611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:02.686 [2024-10-15 13:58:16.317639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:02.686 [2024-10-15 13:58:16.317693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:02.686 [2024-10-15 13:58:16.317793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:02.686 [2024-10-15 13:58:16.317848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:02.686 [2024-10-15 13:58:16.317877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:02.686 [2024-10-15 13:58:16.317906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:02.686 [2024-10-15 13:58:16.317934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:02.686 [2024-10-15 13:58:16.317991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:02.686 [2024-10-15 13:58:16.318021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:02.686 [2024-10-15 13:58:16.318049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:02.686 [2024-10-15 13:58:16.318077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:02.686 [2024-10-15 13:58:16.318128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:02.686 [2024-10-15 13:58:16.318158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 
0 / 261120 wr_cnt: 0 state: free 00:24:02.686 [2024-10-15 13:58:16.318188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:02.686 [2024-10-15 13:58:16.318216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:02.686 [2024-10-15 13:58:16.318256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:02.686 [2024-10-15 13:58:16.318308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:02.686 [2024-10-15 13:58:16.318370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:02.686 [2024-10-15 13:58:16.318401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:02.686 [2024-10-15 13:58:16.318451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:02.686 [2024-10-15 13:58:16.318480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:02.686 [2024-10-15 13:58:16.318509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:02.686 [2024-10-15 13:58:16.318536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:02.686 [2024-10-15 13:58:16.318595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:02.686 [2024-10-15 13:58:16.318624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:02.686 [2024-10-15 13:58:16.318652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:02.686 [2024-10-15 13:58:16.318679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:02.686 [2024-10-15 13:58:16.318730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:02.686 [2024-10-15 13:58:16.318761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:02.686 [2024-10-15 13:58:16.318790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:02.686 [2024-10-15 13:58:16.318913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:02.686 [2024-10-15 13:58:16.318942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:02.686 [2024-10-15 13:58:16.318970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:02.686 [2024-10-15 13:58:16.318998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:02.686 [2024-10-15 13:58:16.319027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:02.686 [2024-10-15 13:58:16.319055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:02.686 [2024-10-15 13:58:16.319083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:02.686 [2024-10-15 13:58:16.319590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:02.686 [2024-10-15 13:58:16.319820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:02.686 [2024-10-15 13:58:16.319942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:02.686 [2024-10-15 13:58:16.320085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:02.686 [2024-10-15 13:58:16.320149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:02.686 [2024-10-15 13:58:16.320168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:02.686 [2024-10-15 13:58:16.320186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:02.686 [2024-10-15 13:58:16.320203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:02.686 [2024-10-15 13:58:16.320244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:02.686 [2024-10-15 13:58:16.320263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:02.686 [2024-10-15 13:58:16.320280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:02.686 [2024-10-15 13:58:16.320297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:02.686 [2024-10-15 13:58:16.320314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:02.686 [2024-10-15 13:58:16.320331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:02.686 [2024-10-15 13:58:16.320348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:02.686 [2024-10-15 13:58:16.320365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:02.686 [2024-10-15 13:58:16.320382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:02.686 [2024-10-15 13:58:16.320399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:02.686 [2024-10-15 13:58:16.320416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:02.686 [2024-10-15 13:58:16.320433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:02.686 [2024-10-15 13:58:16.320450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:02.686 [2024-10-15 13:58:16.320466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:02.686 [2024-10-15 13:58:16.320483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:02.686 [2024-10-15 13:58:16.320501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:02.686 [2024-10-15 13:58:16.320518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:02.686 [2024-10-15 13:58:16.320534] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:02.686 [2024-10-15 13:58:16.320551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:02.686 [2024-10-15 13:58:16.320568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:02.686 [2024-10-15 13:58:16.320585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:02.686 [2024-10-15 13:58:16.320601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:02.686 [2024-10-15 13:58:16.320618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:02.686 [2024-10-15 13:58:16.320635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:02.686 [2024-10-15 13:58:16.320654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:02.686 [2024-10-15 13:58:16.320674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:02.686 [2024-10-15 13:58:16.320692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:02.686 [2024-10-15 13:58:16.320709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:02.686 [2024-10-15 13:58:16.320726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:02.686 [2024-10-15 13:58:16.320743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:02.686 [2024-10-15 13:58:16.320760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:02.686 [2024-10-15 13:58:16.320777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:02.686 [2024-10-15 13:58:16.320813] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:02.686 [2024-10-15 13:58:16.320832] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 53c34892-29d3-4e41-9c0d-fabe059a629d 00:24:02.686 [2024-10-15 13:58:16.320850] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 128256 00:24:02.686 [2024-10-15 13:58:16.320866] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 129216 00:24:02.686 [2024-10-15 13:58:16.320908] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 128256 00:24:02.686 [2024-10-15 13:58:16.320927] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0075 00:24:02.686 [2024-10-15 13:58:16.320943] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:02.686 [2024-10-15 13:58:16.320960] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:02.686 [2024-10-15 13:58:16.320975] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:02.686 [2024-10-15 13:58:16.320990] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:02.686 [2024-10-15 13:58:16.321005] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:02.686 [2024-10-15 13:58:16.321024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.686 [2024-10-15 13:58:16.321042] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:02.686 [2024-10-15 13:58:16.321060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.576 ms 00:24:02.686 [2024-10-15 13:58:16.321077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.686 [2024-10-15 13:58:16.338008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.686 [2024-10-15 13:58:16.338123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:02.686 [2024-10-15 13:58:16.338174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.873 ms 00:24:02.686 [2024-10-15 13:58:16.338196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.686 [2024-10-15 13:58:16.338637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.686 [2024-10-15 13:58:16.338716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:02.686 [2024-10-15 13:58:16.338769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.338 ms 00:24:02.686 [2024-10-15 13:58:16.338825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.687 [2024-10-15 13:58:16.372689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:02.687 [2024-10-15 13:58:16.372871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:02.687 [2024-10-15 13:58:16.372918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:02.687 [2024-10-15 13:58:16.372940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.687 [2024-10-15 13:58:16.373031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:02.687 [2024-10-15 13:58:16.373053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:02.687 [2024-10-15 13:58:16.373072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:02.687 [2024-10-15 13:58:16.373091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.687 [2024-10-15 13:58:16.373179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:02.687 [2024-10-15 13:58:16.373267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:02.687 [2024-10-15 13:58:16.373293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:02.687 [2024-10-15 13:58:16.373313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.687 [2024-10-15 13:58:16.373342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:02.687 [2024-10-15 13:58:16.373361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:02.687 [2024-10-15 13:58:16.373380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:02.687 [2024-10-15 13:58:16.373399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.687 [2024-10-15 13:58:16.455315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:02.687 [2024-10-15 13:58:16.455523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:02.687 [2024-10-15 13:58:16.455640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:02.687 [2024-10-15 13:58:16.455672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.945 [2024-10-15 13:58:16.520483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
00:24:02.945 [2024-10-15 13:58:16.520698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:02.945 [2024-10-15 13:58:16.520747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:02.945 [2024-10-15 13:58:16.520769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.945 [2024-10-15 13:58:16.520864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:02.945 [2024-10-15 13:58:16.520895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:02.945 [2024-10-15 13:58:16.520914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:02.945 [2024-10-15 13:58:16.520933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.945 [2024-10-15 13:58:16.520977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:02.945 [2024-10-15 13:58:16.521033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:02.945 [2024-10-15 13:58:16.521055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:02.945 [2024-10-15 13:58:16.521073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.945 [2024-10-15 13:58:16.521181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:02.945 [2024-10-15 13:58:16.521205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:02.945 [2024-10-15 13:58:16.521332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:02.945 [2024-10-15 13:58:16.521355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.945 [2024-10-15 13:58:16.521408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:02.945 [2024-10-15 13:58:16.521431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:02.945 [2024-10-15 13:58:16.521450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:02.945 [2024-10-15 13:58:16.521467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.945 [2024-10-15 13:58:16.521553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:02.945 [2024-10-15 13:58:16.521577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:02.945 [2024-10-15 13:58:16.521601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:02.945 [2024-10-15 13:58:16.521620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.945 [2024-10-15 13:58:16.521677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:02.945 [2024-10-15 13:58:16.521734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:02.945 [2024-10-15 13:58:16.521756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:02.945 [2024-10-15 13:58:16.521775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.945 [2024-10-15 13:58:16.521915] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 458.018 ms, result 0 00:24:05.472 00:24:05.472 00:24:05.472 13:58:18 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:24:07.373 13:58:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:24:07.373 [2024-10-15 13:58:21.040594] Starting SPDK v25.01-pre git sha1 5a8c76d99 / DPDK 24.03.0 initialization... 00:24:07.373 [2024-10-15 13:58:21.040716] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77658 ] 00:24:07.633 [2024-10-15 13:58:21.189664] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:07.633 [2024-10-15 13:58:21.306750] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:07.940 [2024-10-15 13:58:21.584558] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:07.940 [2024-10-15 13:58:21.584630] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:08.200 [2024-10-15 13:58:21.740675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.200 [2024-10-15 13:58:21.740738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:08.200 [2024-10-15 13:58:21.740753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:08.200 [2024-10-15 13:58:21.740767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.200 [2024-10-15 13:58:21.740825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.200 [2024-10-15 13:58:21.740835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:08.200 [2024-10-15 13:58:21.740844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:24:08.200 [2024-10-15 13:58:21.740854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.200 [2024-10-15 13:58:21.740875] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:08.200 [2024-10-15 13:58:21.741617] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:08.200 [2024-10-15 13:58:21.741638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.200 [2024-10-15 13:58:21.741649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:08.200 [2024-10-15 13:58:21.741657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.768 ms 00:24:08.200 [2024-10-15 13:58:21.741665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.200 [2024-10-15 13:58:21.743091] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:24:08.200 [2024-10-15 13:58:21.755836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.200 [2024-10-15 13:58:21.755874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:08.200 [2024-10-15 13:58:21.755888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.746 ms 00:24:08.200 [2024-10-15 13:58:21.755897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.200 [2024-10-15 13:58:21.755961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.200 [2024-10-15 13:58:21.755972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:24:08.200 [2024-10-15 13:58:21.755983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.026 ms 00:24:08.200 [2024-10-15 13:58:21.755991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.200 [2024-10-15 13:58:21.762677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.200 [2024-10-15 13:58:21.762711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:08.200 [2024-10-15 13:58:21.762722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.631 ms 00:24:08.200 [2024-10-15 13:58:21.762730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.200 [2024-10-15 13:58:21.762819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.200 [2024-10-15 13:58:21.762829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:08.200 [2024-10-15 13:58:21.762838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:24:08.200 [2024-10-15 13:58:21.762846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.200 [2024-10-15 13:58:21.762899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.200 [2024-10-15 13:58:21.762909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:08.200 [2024-10-15 13:58:21.762918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:24:08.200 [2024-10-15 13:58:21.762925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.200 [2024-10-15 13:58:21.762950] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:08.200 [2024-10-15 13:58:21.766454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.200 [2024-10-15 13:58:21.766483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:08.200 [2024-10-15 13:58:21.766493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.511 ms 00:24:08.200 [2024-10-15 13:58:21.766501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.200 [2024-10-15 13:58:21.766533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.200 [2024-10-15 13:58:21.766542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:08.200 [2024-10-15 13:58:21.766551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:24:08.200 [2024-10-15 13:58:21.766559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.200 [2024-10-15 13:58:21.766579] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:08.200 [2024-10-15 13:58:21.766600] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:24:08.200 [2024-10-15 13:58:21.766636] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:08.200 [2024-10-15 13:58:21.766654] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:24:08.200 [2024-10-15 13:58:21.766758] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:08.200 [2024-10-15 13:58:21.766769] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:08.200 [2024-10-15 13:58:21.766780] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: 
[FTL][ftl0] layout blob store 0x190 bytes 00:24:08.200 [2024-10-15 13:58:21.766790] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:08.200 [2024-10-15 13:58:21.766800] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:08.200 [2024-10-15 13:58:21.766808] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:08.200 [2024-10-15 13:58:21.766816] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:08.200 [2024-10-15 13:58:21.766823] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:08.200 [2024-10-15 13:58:21.766830] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:08.200 [2024-10-15 13:58:21.766838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.200 [2024-10-15 13:58:21.766848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:08.200 [2024-10-15 13:58:21.766856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.261 ms 00:24:08.200 [2024-10-15 13:58:21.766863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.200 [2024-10-15 13:58:21.766946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.200 [2024-10-15 13:58:21.766953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:08.200 [2024-10-15 13:58:21.766961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:24:08.200 [2024-10-15 13:58:21.766968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.200 [2024-10-15 13:58:21.767081] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:08.200 [2024-10-15 13:58:21.767091] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:08.200 [2024-10-15 13:58:21.767102] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:08.200 [2024-10-15 13:58:21.767110] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:08.200 [2024-10-15 13:58:21.767118] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:08.200 [2024-10-15 13:58:21.767125] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:08.200 [2024-10-15 13:58:21.767132] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:08.200 [2024-10-15 13:58:21.767139] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:08.200 [2024-10-15 13:58:21.767146] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:08.200 [2024-10-15 13:58:21.767153] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:08.200 [2024-10-15 13:58:21.767160] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:08.201 [2024-10-15 13:58:21.767166] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:08.201 [2024-10-15 13:58:21.767173] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:08.201 [2024-10-15 13:58:21.767180] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:08.201 [2024-10-15 13:58:21.767187] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:24:08.201 [2024-10-15 13:58:21.767201] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:08.201 [2024-10-15 13:58:21.767208] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:08.201 [2024-10-15 13:58:21.767215] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:24:08.201 [2024-10-15 13:58:21.767240] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:08.201 [2024-10-15 13:58:21.767247] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:08.201 [2024-10-15 13:58:21.767254] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:08.201 [2024-10-15 13:58:21.767261] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:08.201 [2024-10-15 13:58:21.767268] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:08.201 [2024-10-15 13:58:21.767274] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:08.201 [2024-10-15 13:58:21.767281] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:08.201 [2024-10-15 13:58:21.767288] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:08.201 [2024-10-15 13:58:21.767294] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:08.201 [2024-10-15 13:58:21.767301] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:08.201 [2024-10-15 13:58:21.767308] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:08.201 [2024-10-15 13:58:21.767314] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:24:08.201 [2024-10-15 13:58:21.767321] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:08.201 [2024-10-15 13:58:21.767328] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:08.201 [2024-10-15 13:58:21.767334] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:24:08.201 [2024-10-15 13:58:21.767341] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:08.201 [2024-10-15 13:58:21.767347] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:08.201 [2024-10-15 13:58:21.767354] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:24:08.201 [2024-10-15 13:58:21.767360] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:08.201 [2024-10-15 13:58:21.767367] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:08.201 [2024-10-15 13:58:21.767373] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:24:08.201 [2024-10-15 13:58:21.767380] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:08.201 [2024-10-15 13:58:21.767386] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:08.201 [2024-10-15 13:58:21.767393] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:24:08.201 [2024-10-15 13:58:21.767400] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:08.201 [2024-10-15 13:58:21.767407] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:08.201 [2024-10-15 13:58:21.767415] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:08.201 [2024-10-15 13:58:21.767423] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:08.201 [2024-10-15 13:58:21.767430] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:08.201 [2024-10-15 13:58:21.767439] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:08.201 
[2024-10-15 13:58:21.767446] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:08.201 [2024-10-15 13:58:21.767452] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:08.201 [2024-10-15 13:58:21.767459] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:08.201 [2024-10-15 13:58:21.767466] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:08.201 [2024-10-15 13:58:21.767472] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:08.201 [2024-10-15 13:58:21.767481] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:08.201 [2024-10-15 13:58:21.767490] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:08.201 [2024-10-15 13:58:21.767498] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:08.201 [2024-10-15 13:58:21.767505] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:24:08.201 [2024-10-15 13:58:21.767512] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:24:08.201 [2024-10-15 13:58:21.767519] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:24:08.201 [2024-10-15 13:58:21.767526] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:24:08.201 [2024-10-15 13:58:21.767533] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:24:08.201 [2024-10-15 13:58:21.767540] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:24:08.201 [2024-10-15 13:58:21.767547] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:24:08.201 [2024-10-15 13:58:21.767554] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:24:08.201 [2024-10-15 13:58:21.767561] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:24:08.201 [2024-10-15 13:58:21.767568] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:24:08.201 [2024-10-15 13:58:21.767575] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:24:08.201 [2024-10-15 13:58:21.767582] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:24:08.201 [2024-10-15 13:58:21.767590] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:24:08.201 [2024-10-15 13:58:21.767597] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:08.201 [2024-10-15 13:58:21.767605] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:08.201 [2024-10-15 13:58:21.767615] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:08.201 [2024-10-15 13:58:21.767623] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:08.201 [2024-10-15 13:58:21.767630] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:08.201 [2024-10-15 13:58:21.767637] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:08.201 [2024-10-15 13:58:21.767644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.201 [2024-10-15 13:58:21.767652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:08.201 [2024-10-15 13:58:21.767659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.632 ms 00:24:08.201 [2024-10-15 13:58:21.767666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.201 [2024-10-15 13:58:21.796725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.201 [2024-10-15 13:58:21.796882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:08.201 [2024-10-15 13:58:21.796940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.009 ms 00:24:08.201 [2024-10-15 13:58:21.796965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.201 [2024-10-15 13:58:21.797150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.201 [2024-10-15 13:58:21.797231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:08.201 [2024-10-15 13:58:21.797279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:24:08.201 [2024-10-15 13:58:21.797301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.201 [2024-10-15 13:58:21.847797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.201 [2024-10-15 13:58:21.847987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:08.201 [2024-10-15 13:58:21.848044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.411 ms 00:24:08.201 [2024-10-15 13:58:21.848068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.201 [2024-10-15 13:58:21.848165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.201 [2024-10-15 13:58:21.848191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:08.201 [2024-10-15 13:58:21.848213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:08.201 [2024-10-15 13:58:21.848249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.201 [2024-10-15 13:58:21.848744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.201 [2024-10-15 13:58:21.848834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:08.201 [2024-10-15 13:58:21.848956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.386 ms 00:24:08.201 [2024-10-15 13:58:21.848987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.201 [2024-10-15 
13:58:21.849151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.201 [2024-10-15 13:58:21.849183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:08.201 [2024-10-15 13:58:21.849250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.121 ms 00:24:08.201 [2024-10-15 13:58:21.849275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.201 [2024-10-15 13:58:21.863327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.201 [2024-10-15 13:58:21.863436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:08.201 [2024-10-15 13:58:21.863484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.015 ms 00:24:08.201 [2024-10-15 13:58:21.863511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.201 [2024-10-15 13:58:21.876406] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:24:08.201 [2024-10-15 13:58:21.876539] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:08.201 [2024-10-15 13:58:21.876598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.201 [2024-10-15 13:58:21.876620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:08.201 [2024-10-15 13:58:21.876641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.960 ms 00:24:08.201 [2024-10-15 13:58:21.876660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.201 [2024-10-15 13:58:21.901326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.202 [2024-10-15 13:58:21.901495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:08.202 [2024-10-15 13:58:21.901554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.616 ms 00:24:08.202 [2024-10-15 13:58:21.901577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.202 [2024-10-15 13:58:21.913704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.202 [2024-10-15 13:58:21.913875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:08.202 [2024-10-15 13:58:21.913932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.841 ms 00:24:08.202 [2024-10-15 13:58:21.913955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.202 [2024-10-15 13:58:21.925391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.202 [2024-10-15 13:58:21.925525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:24:08.202 [2024-10-15 13:58:21.925576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.389 ms 00:24:08.202 [2024-10-15 13:58:21.925598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.202 [2024-10-15 13:58:21.926263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.202 [2024-10-15 13:58:21.926353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:08.202 [2024-10-15 13:58:21.926405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.557 ms 00:24:08.202 [2024-10-15 13:58:21.926427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.202 [2024-10-15 13:58:21.985356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:24:08.202 [2024-10-15 13:58:21.985583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:08.202 [2024-10-15 13:58:21.985638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 58.889 ms 00:24:08.202 [2024-10-15 13:58:21.985668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.460 [2024-10-15 13:58:21.997705] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:08.460 [2024-10-15 13:58:22.001060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.460 [2024-10-15 13:58:22.001181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:08.460 [2024-10-15 13:58:22.001254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.978 ms 00:24:08.460 [2024-10-15 13:58:22.001279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.460 [2024-10-15 13:58:22.001497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.460 [2024-10-15 13:58:22.001531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:08.460 [2024-10-15 13:58:22.001553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:24:08.460 [2024-10-15 13:58:22.001639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.460 [2024-10-15 13:58:22.003346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.460 [2024-10-15 13:58:22.003451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:08.460 [2024-10-15 13:58:22.003506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.636 ms 00:24:08.460 [2024-10-15 13:58:22.003529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.460 [2024-10-15 13:58:22.003576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.460 [2024-10-15 13:58:22.003669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:08.460 [2024-10-15 13:58:22.003694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:24:08.460 [2024-10-15 13:58:22.003713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.460 [2024-10-15 13:58:22.003761] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:08.460 [2024-10-15 13:58:22.003824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.460 [2024-10-15 13:58:22.003849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:24:08.460 [2024-10-15 13:58:22.003869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:24:08.460 [2024-10-15 13:58:22.003888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.460 [2024-10-15 13:58:22.027787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.460 [2024-10-15 13:58:22.027943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:08.460 [2024-10-15 13:58:22.027995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.864 ms 00:24:08.460 [2024-10-15 13:58:22.028018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.460 [2024-10-15 13:58:22.028160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.460 [2024-10-15 13:58:22.028216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize 
initialization
00:24:08.460 [2024-10-15 13:58:22.028251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms
00:24:08.460 [2024-10-15 13:58:22.028271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:08.461 [2024-10-15 13:58:22.029415] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 288.251 ms, result 0
00:24:09.835  [2024-10-15T13:58:24.557Z] Copying: 1168/1048576 [kB] (1168 kBps) [2024-10-15T13:58:25.491Z] Copying: 5664/1048576 [kB] (4496 kBps) [2024-10-15T13:58:26.424Z] Copying: 52/1024 [MB] (47 MBps) [2024-10-15T13:58:27.356Z] Copying: 104/1024 [MB] (51 MBps) [2024-10-15T13:58:28.289Z] Copying: 155/1024 [MB] (50 MBps) [2024-10-15T13:58:29.221Z] Copying: 209/1024 [MB] (53 MBps) [2024-10-15T13:58:30.595Z] Copying: 260/1024 [MB] (50 MBps) [2024-10-15T13:58:31.528Z] Copying: 310/1024 [MB] (50 MBps) [2024-10-15T13:58:32.460Z] Copying: 357/1024 [MB] (46 MBps) [2024-10-15T13:58:33.392Z] Copying: 401/1024 [MB] (43 MBps) [2024-10-15T13:58:34.325Z] Copying: 446/1024 [MB] (44 MBps) [2024-10-15T13:58:35.261Z] Copying: 491/1024 [MB] (45 MBps) [2024-10-15T13:58:36.632Z] Copying: 537/1024 [MB] (45 MBps) [2024-10-15T13:58:37.565Z] Copying: 579/1024 [MB] (42 MBps) [2024-10-15T13:58:38.509Z] Copying: 628/1024 [MB] (49 MBps) [2024-10-15T13:58:39.443Z] Copying: 677/1024 [MB] (48 MBps) [2024-10-15T13:58:40.377Z] Copying: 725/1024 [MB] (47 MBps) [2024-10-15T13:58:41.311Z] Copying: 774/1024 [MB] (49 MBps) [2024-10-15T13:58:42.244Z] Copying: 824/1024 [MB] (50 MBps) [2024-10-15T13:58:43.219Z] Copying: 871/1024 [MB] (47 MBps) [2024-10-15T13:58:44.619Z] Copying: 925/1024 [MB] (53 MBps) [2024-10-15T13:58:45.184Z] Copying: 975/1024 [MB] (50 MBps) [2024-10-15T13:58:45.751Z] Copying: 1024/1024 [MB] (average 44 MBps)
[2024-10-15 13:58:45.461901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:31.963 [2024-10-15 13:58:45.462039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:24:31.963 [2024-10-15 13:58:45.462077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms
00:24:31.963 [2024-10-15 13:58:45.462122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:31.963 [2024-10-15 13:58:45.462185] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:24:31.963 [2024-10-15 13:58:45.467606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:31.963 [2024-10-15 13:58:45.467643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:24:31.963 [2024-10-15 13:58:45.467655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.354 ms
00:24:31.963 [2024-10-15 13:58:45.467662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:31.963 [2024-10-15 13:58:45.467899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:31.963 [2024-10-15 13:58:45.467911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:24:31.963 [2024-10-15 13:58:45.467919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.209 ms
00:24:31.963 [2024-10-15 13:58:45.467930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:31.963 [2024-10-15 13:58:45.478093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:31.963 [2024-10-15 13:58:45.478131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:31.963
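
The trace above closes the first 'FTL startup' in 288.251 ms, after which the progress readout records a 1024 MiB copy through ftl0 at an average of 44 MBps, and the dirty-shutdown sequence begins. A minimal arithmetic sketch cross-checking the reported figures (numbers taken from the readout above; nothing here is from SPDK itself):

    #include <stdio.h>

    int main(void)
    {
        /* Figures from the spdk_dd progress readout above. */
        const double payload_mib = 1024.0;  /* total data copied        */
        const double avg_mbps    = 44.0;    /* reported average rate    */

        /* Expected wall time at the reported average rate. */
        printf("expected: ~%.1f s\n", payload_mib / avg_mbps);  /* ~23.3 s */
        return 0;
    }

That agrees with the wall clock: the copy starts just after the startup finishes at 13:58:22 and the final progress entry lands at 13:58:45.751.
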
[2024-10-15 13:58:45.478142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.145 ms 00:24:31.964 [2024-10-15 13:58:45.478150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.964 [2024-10-15 13:58:45.484323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:31.964 [2024-10-15 13:58:45.484453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:31.964 [2024-10-15 13:58:45.484470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.148 ms 00:24:31.964 [2024-10-15 13:58:45.484479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.964 [2024-10-15 13:58:45.508578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:31.964 [2024-10-15 13:58:45.508613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:31.964 [2024-10-15 13:58:45.508624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.036 ms 00:24:31.964 [2024-10-15 13:58:45.508632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.964 [2024-10-15 13:58:45.522843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:31.964 [2024-10-15 13:58:45.522877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:31.964 [2024-10-15 13:58:45.522890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.177 ms 00:24:31.964 [2024-10-15 13:58:45.522898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.964 [2024-10-15 13:58:45.525005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:31.964 [2024-10-15 13:58:45.525035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:31.964 [2024-10-15 13:58:45.525045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.071 ms 00:24:31.964 [2024-10-15 13:58:45.525053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.964 [2024-10-15 13:58:45.547833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:31.964 [2024-10-15 13:58:45.547993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:24:31.964 [2024-10-15 13:58:45.548010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.766 ms 00:24:31.964 [2024-10-15 13:58:45.548017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.964 [2024-10-15 13:58:45.570957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:31.964 [2024-10-15 13:58:45.571097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:24:31.964 [2024-10-15 13:58:45.571124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.906 ms 00:24:31.964 [2024-10-15 13:58:45.571132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.964 [2024-10-15 13:58:45.593321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:31.964 [2024-10-15 13:58:45.593452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:31.964 [2024-10-15 13:58:45.593469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.158 ms 00:24:31.964 [2024-10-15 13:58:45.593476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.964 [2024-10-15 13:58:45.615843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:31.964 [2024-10-15 13:58:45.615874] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:31.964 [2024-10-15 13:58:45.615885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.312 ms 00:24:31.964 [2024-10-15 13:58:45.615893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.964 [2024-10-15 13:58:45.615925] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:31.964 [2024-10-15 13:58:45.615942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:24:31.964 [2024-10-15 13:58:45.615953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:24:31.964 [2024-10-15 13:58:45.615962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:31.964 [2024-10-15 13:58:45.615970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:31.964 [2024-10-15 13:58:45.615978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:31.964 [2024-10-15 13:58:45.615985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:31.964 [2024-10-15 13:58:45.615993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:31.964 [2024-10-15 13:58:45.616001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:31.964 [2024-10-15 13:58:45.616009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:31.964 [2024-10-15 13:58:45.616017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:31.964 [2024-10-15 13:58:45.616026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:31.964 [2024-10-15 13:58:45.616033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:31.964 [2024-10-15 13:58:45.616041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:31.964 [2024-10-15 13:58:45.616048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:31.964 [2024-10-15 13:58:45.616056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:31.964 [2024-10-15 13:58:45.616064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:31.964 [2024-10-15 13:58:45.616071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:31.964 [2024-10-15 13:58:45.616078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:31.964 [2024-10-15 13:58:45.616085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:31.964 [2024-10-15 13:58:45.616093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:31.964 [2024-10-15 13:58:45.616100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:31.964 [2024-10-15 13:58:45.616109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 
00:24:31.964 [2024-10-15 13:58:45.616124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:31.964 [2024-10-15 13:58:45.616132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:31.964 [2024-10-15 13:58:45.616139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:31.964 [2024-10-15 13:58:45.616147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:31.964 [2024-10-15 13:58:45.616157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:31.964 [2024-10-15 13:58:45.616165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:31.964 [2024-10-15 13:58:45.616173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:31.964 [2024-10-15 13:58:45.616180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:31.964 [2024-10-15 13:58:45.616188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:31.964 [2024-10-15 13:58:45.616195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:31.964 [2024-10-15 13:58:45.616204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:31.964 [2024-10-15 13:58:45.616213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:31.964 [2024-10-15 13:58:45.616242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:31.964 [2024-10-15 13:58:45.616250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:31.964 [2024-10-15 13:58:45.616258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:31.964 [2024-10-15 13:58:45.616266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:31.964 [2024-10-15 13:58:45.616273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:31.964 [2024-10-15 13:58:45.616281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:31.964 [2024-10-15 13:58:45.616288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:31.964 [2024-10-15 13:58:45.616296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:31.964 [2024-10-15 13:58:45.616304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:31.964 [2024-10-15 13:58:45.616312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:31.964 [2024-10-15 13:58:45.616319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:31.964 [2024-10-15 13:58:45.616326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:31.964 [2024-10-15 13:58:45.616333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 
wr_cnt: 0 state: free 00:24:31.964 [2024-10-15 13:58:45.616341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:31.964 [2024-10-15 13:58:45.616348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:31.964 [2024-10-15 13:58:45.616355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:31.964 [2024-10-15 13:58:45.616363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:31.964 [2024-10-15 13:58:45.616390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:31.964 [2024-10-15 13:58:45.616397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:31.964 [2024-10-15 13:58:45.616405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:31.964 [2024-10-15 13:58:45.616413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:31.964 [2024-10-15 13:58:45.616422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:31.964 [2024-10-15 13:58:45.616430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:31.964 [2024-10-15 13:58:45.616438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:31.964 [2024-10-15 13:58:45.616453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:31.964 [2024-10-15 13:58:45.616461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:31.964 [2024-10-15 13:58:45.616468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:31.964 [2024-10-15 13:58:45.616476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:31.964 [2024-10-15 13:58:45.616483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:31.964 [2024-10-15 13:58:45.616490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:31.965 [2024-10-15 13:58:45.616498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:31.965 [2024-10-15 13:58:45.616506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:31.965 [2024-10-15 13:58:45.616514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:31.965 [2024-10-15 13:58:45.616522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:31.965 [2024-10-15 13:58:45.616530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:31.965 [2024-10-15 13:58:45.616537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:31.965 [2024-10-15 13:58:45.616544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:31.965 [2024-10-15 13:58:45.616551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:31.965 [2024-10-15 13:58:45.616559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:31.965 [2024-10-15 13:58:45.616567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:31.965 [2024-10-15 13:58:45.616575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:31.965 [2024-10-15 13:58:45.616582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:31.965 [2024-10-15 13:58:45.616589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:31.965 [2024-10-15 13:58:45.616596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:31.965 [2024-10-15 13:58:45.616603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:31.965 [2024-10-15 13:58:45.616610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:31.965 [2024-10-15 13:58:45.616618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:31.965 [2024-10-15 13:58:45.616625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:31.965 [2024-10-15 13:58:45.616632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:31.965 [2024-10-15 13:58:45.616639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:31.965 [2024-10-15 13:58:45.616648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:31.965 [2024-10-15 13:58:45.616655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:31.965 [2024-10-15 13:58:45.616662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:31.965 [2024-10-15 13:58:45.616669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:31.965 [2024-10-15 13:58:45.616676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:31.965 [2024-10-15 13:58:45.616683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:31.965 [2024-10-15 13:58:45.616690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:31.965 [2024-10-15 13:58:45.616697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:31.965 [2024-10-15 13:58:45.616705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:31.965 [2024-10-15 13:58:45.616712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:31.965 [2024-10-15 13:58:45.616720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:31.965 [2024-10-15 13:58:45.616727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:31.965 [2024-10-15 13:58:45.616734] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:31.965 [2024-10-15 13:58:45.616742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:31.965 [2024-10-15 13:58:45.616749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:31.965 [2024-10-15 13:58:45.616756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:31.965 [2024-10-15 13:58:45.616772] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:31.965 [2024-10-15 13:58:45.616790] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 53c34892-29d3-4e41-9c0d-fabe059a629d 00:24:31.965 [2024-10-15 13:58:45.616798] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:24:31.965 [2024-10-15 13:58:45.616805] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 136384 00:24:31.965 [2024-10-15 13:58:45.616811] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 134400 00:24:31.965 [2024-10-15 13:58:45.616820] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0148 00:24:31.965 [2024-10-15 13:58:45.616827] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:31.965 [2024-10-15 13:58:45.616838] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:31.965 [2024-10-15 13:58:45.616845] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:31.965 [2024-10-15 13:58:45.616859] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:31.965 [2024-10-15 13:58:45.616866] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:31.965 [2024-10-15 13:58:45.616873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:31.965 [2024-10-15 13:58:45.616881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:31.965 [2024-10-15 13:58:45.616889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.949 ms 00:24:31.965 [2024-10-15 13:58:45.616897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.965 [2024-10-15 13:58:45.630033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:31.965 [2024-10-15 13:58:45.630135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:31.965 [2024-10-15 13:58:45.630196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.119 ms 00:24:31.965 [2024-10-15 13:58:45.630234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.965 [2024-10-15 13:58:45.630605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:31.965 [2024-10-15 13:58:45.630669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:31.965 [2024-10-15 13:58:45.630778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.338 ms 00:24:31.965 [2024-10-15 13:58:45.630801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.965 [2024-10-15 13:58:45.664580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:31.965 [2024-10-15 13:58:45.664740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:31.965 [2024-10-15 13:58:45.664791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:31.965 [2024-10-15 13:58:45.664820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:24:31.965 [2024-10-15 13:58:45.664901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:31.965 [2024-10-15 13:58:45.664923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:31.965 [2024-10-15 13:58:45.664942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:31.965 [2024-10-15 13:58:45.664961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.965 [2024-10-15 13:58:45.665051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:31.965 [2024-10-15 13:58:45.665191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:31.965 [2024-10-15 13:58:45.665215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:31.965 [2024-10-15 13:58:45.665246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.965 [2024-10-15 13:58:45.665274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:31.965 [2024-10-15 13:58:45.665294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:31.965 [2024-10-15 13:58:45.665313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:31.965 [2024-10-15 13:58:45.665365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.965 [2024-10-15 13:58:45.744091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:31.965 [2024-10-15 13:58:45.744351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:31.965 [2024-10-15 13:58:45.744409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:31.965 [2024-10-15 13:58:45.744432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:32.224 [2024-10-15 13:58:45.810276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:32.224 [2024-10-15 13:58:45.810467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:32.224 [2024-10-15 13:58:45.810516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:32.224 [2024-10-15 13:58:45.810540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:32.224 [2024-10-15 13:58:45.810621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:32.224 [2024-10-15 13:58:45.810645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:32.224 [2024-10-15 13:58:45.810665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:32.224 [2024-10-15 13:58:45.810690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:32.224 [2024-10-15 13:58:45.810761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:32.224 [2024-10-15 13:58:45.810785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:32.224 [2024-10-15 13:58:45.810853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:32.224 [2024-10-15 13:58:45.810876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:32.224 [2024-10-15 13:58:45.810988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:32.224 [2024-10-15 13:58:45.811128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:32.224 [2024-10-15 13:58:45.811180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
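
The statistics block dumped by ftl_debug.c a few lines up is internally consistent: the write-amplification factor is total writes divided by user writes, and the valid-LBA total is simply the sum over the band dump (one full closed band plus the open band). A minimal sketch reproducing both numbers from the values printed above:

    #include <stdio.h>

    int main(void)
    {
        /* Values from the ftl_debug.c stats dump above. */
        const double total_writes = 136384.0;  /* all writes the device performed */
        const double user_writes  = 134400.0;  /* writes issued on the user path  */

        /* WAF = device writes per user write. */
        printf("WAF = %.4f\n", total_writes / user_writes);  /* prints 1.0148 */

        /* Valid LBAs: Band 1 (closed, 261120/261120) + Band 2 (open, 1536). */
        printf("valid LBAs = %d\n", 261120 + 1536);          /* prints 262656 */
        return 0;
    }
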
00:24:32.224 [2024-10-15 13:58:45.811202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:32.224 [2024-10-15 13:58:45.811274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:32.224 [2024-10-15 13:58:45.811337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:32.224 [2024-10-15 13:58:45.811361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:32.224 [2024-10-15 13:58:45.811380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:32.224 [2024-10-15 13:58:45.811456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:32.224 [2024-10-15 13:58:45.811481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:32.224 [2024-10-15 13:58:45.811500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:32.224 [2024-10-15 13:58:45.811520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:32.224 [2024-10-15 13:58:45.811623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:32.224 [2024-10-15 13:58:45.811650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:32.224 [2024-10-15 13:58:45.811670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:32.224 [2024-10-15 13:58:45.811689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:32.224 [2024-10-15 13:58:45.811825] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 349.930 ms, result 0 00:24:32.791 00:24:32.791 00:24:32.791 13:58:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:24:35.321 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:24:35.321 13:58:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:24:35.321 [2024-10-15 13:58:48.559671] Starting SPDK v25.01-pre git sha1 5a8c76d99 / DPDK 24.03.0 initialization... 
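
The xtrace lines above show the verification step of the dirty-shutdown test: md5sum -c confirms the data read back into testfile, then dirty_shutdown.sh@95 launches a second spdk_dd that reads the next window of the device (--skip=262144 --count=262144) into testfile2. A sketch of how dd-style skip/count options map onto a byte range, assuming the 4096-byte logical block size implied by 262144 blocks covering 1024 MiB in this test:

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        const uint64_t block = 4096;     /* ftl0 logical block size (assumed)  */
        const uint64_t skip  = 262144;   /* --skip: input blocks passed over   */
        const uint64_t count = 262144;   /* --count: input blocks to copy      */

        uint64_t offset = skip * block;  /* first byte read from the input     */
        uint64_t length = count * block; /* total bytes copied                 */

        printf("window: %llu MiB at offset %llu MiB\n",
               (unsigned long long)(length >> 20),   /* 1024 MiB */
               (unsigned long long)(offset >> 20));  /* 1024 MiB */
        return 0;
    }

So the second pass covers the 1 GiB of the device immediately after the region already checked, i.e. the remaining half of the test payload.
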
00:24:35.321 [2024-10-15 13:58:48.559981] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77945 ] 00:24:35.321 [2024-10-15 13:58:48.710258] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:35.321 [2024-10-15 13:58:48.827687] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:35.321 [2024-10-15 13:58:49.104738] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:35.321 [2024-10-15 13:58:49.105001] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:35.581 [2024-10-15 13:58:49.260490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:35.581 [2024-10-15 13:58:49.260740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:35.581 [2024-10-15 13:58:49.260762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:35.581 [2024-10-15 13:58:49.260779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:35.581 [2024-10-15 13:58:49.260844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:35.581 [2024-10-15 13:58:49.260854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:35.581 [2024-10-15 13:58:49.260863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:24:35.581 [2024-10-15 13:58:49.260874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:35.581 [2024-10-15 13:58:49.260895] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:35.581 [2024-10-15 13:58:49.261661] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:35.581 [2024-10-15 13:58:49.261678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:35.581 [2024-10-15 13:58:49.261689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:35.581 [2024-10-15 13:58:49.261699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.788 ms 00:24:35.581 [2024-10-15 13:58:49.261706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:35.581 [2024-10-15 13:58:49.263458] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:24:35.581 [2024-10-15 13:58:49.276428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:35.581 [2024-10-15 13:58:49.276477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:35.581 [2024-10-15 13:58:49.276492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.972 ms 00:24:35.581 [2024-10-15 13:58:49.276502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:35.581 [2024-10-15 13:58:49.276573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:35.581 [2024-10-15 13:58:49.276584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:24:35.581 [2024-10-15 13:58:49.276595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:24:35.581 [2024-10-15 13:58:49.276603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:35.581 [2024-10-15 13:58:49.283462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
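
After the EAL and reactor bring-up, and a couple of "unable to find bdev" notices while the JSON config is still being applied, the second startup walks the same management pipeline: the superblock is loaded and validated ("SHM: clean 0, shm_clean 0" apparently meaning there is no shared-memory state to resume from), then the layout is set up. The layout numbers printed just below pin down the L2P table size; a minimal check of that arithmetic:

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        /* From the ftl_layout.c dump that follows. */
        const uint64_t entries   = 20971520;  /* L2P entries              */
        const uint64_t addr_size = 4;         /* L2P address size, bytes  */

        /* 20971520 * 4 B = 83886080 B = 80 MiB, which matches the        */
        /* "Region l2p ... blocks: 80.00 MiB" line in the layout dump.    */
        printf("L2P table: %llu MiB\n",
               (unsigned long long)((entries * addr_size) >> 20));
        return 0;
    }
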
00:24:35.581 [2024-10-15 13:58:49.283504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:35.581 [2024-10-15 13:58:49.283516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.793 ms 00:24:35.581 [2024-10-15 13:58:49.283524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:35.581 [2024-10-15 13:58:49.283612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:35.581 [2024-10-15 13:58:49.283621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:35.581 [2024-10-15 13:58:49.283631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:24:35.581 [2024-10-15 13:58:49.283639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:35.581 [2024-10-15 13:58:49.283706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:35.581 [2024-10-15 13:58:49.283717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:35.581 [2024-10-15 13:58:49.283726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:24:35.581 [2024-10-15 13:58:49.283734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:35.581 [2024-10-15 13:58:49.283760] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:35.581 [2024-10-15 13:58:49.287344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:35.581 [2024-10-15 13:58:49.287375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:35.581 [2024-10-15 13:58:49.287385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.593 ms 00:24:35.581 [2024-10-15 13:58:49.287393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:35.581 [2024-10-15 13:58:49.287428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:35.581 [2024-10-15 13:58:49.287438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:35.581 [2024-10-15 13:58:49.287446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:24:35.581 [2024-10-15 13:58:49.287454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:35.581 [2024-10-15 13:58:49.287476] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:35.581 [2024-10-15 13:58:49.287497] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:24:35.581 [2024-10-15 13:58:49.287535] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:35.581 [2024-10-15 13:58:49.287554] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:24:35.581 [2024-10-15 13:58:49.287663] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:35.581 [2024-10-15 13:58:49.287675] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:35.581 [2024-10-15 13:58:49.287687] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:24:35.581 [2024-10-15 13:58:49.287698] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:35.581 [2024-10-15 13:58:49.287708] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:35.581 [2024-10-15 13:58:49.287716] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:35.581 [2024-10-15 13:58:49.287724] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:35.581 [2024-10-15 13:58:49.287732] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:35.581 [2024-10-15 13:58:49.287740] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:35.581 [2024-10-15 13:58:49.287747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:35.581 [2024-10-15 13:58:49.287758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:35.581 [2024-10-15 13:58:49.287766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.275 ms 00:24:35.581 [2024-10-15 13:58:49.287774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:35.581 [2024-10-15 13:58:49.287867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:35.581 [2024-10-15 13:58:49.287875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:35.581 [2024-10-15 13:58:49.287883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:24:35.581 [2024-10-15 13:58:49.287890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:35.581 [2024-10-15 13:58:49.288014] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:35.581 [2024-10-15 13:58:49.288025] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:35.581 [2024-10-15 13:58:49.288036] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:35.581 [2024-10-15 13:58:49.288044] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:35.581 [2024-10-15 13:58:49.288052] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:35.581 [2024-10-15 13:58:49.288059] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:35.581 [2024-10-15 13:58:49.288068] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:35.581 [2024-10-15 13:58:49.288075] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:35.581 [2024-10-15 13:58:49.288083] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:35.581 [2024-10-15 13:58:49.288089] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:35.581 [2024-10-15 13:58:49.288097] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:35.581 [2024-10-15 13:58:49.288103] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:35.581 [2024-10-15 13:58:49.288110] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:35.581 [2024-10-15 13:58:49.288134] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:35.581 [2024-10-15 13:58:49.288142] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:24:35.581 [2024-10-15 13:58:49.288155] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:35.581 [2024-10-15 13:58:49.288164] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:35.581 [2024-10-15 13:58:49.288171] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:24:35.581 [2024-10-15 13:58:49.288178] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:35.581 [2024-10-15 13:58:49.288185] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:35.581 [2024-10-15 13:58:49.288192] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:35.581 [2024-10-15 13:58:49.288199] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:35.581 [2024-10-15 13:58:49.288206] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:35.581 [2024-10-15 13:58:49.288213] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:35.581 [2024-10-15 13:58:49.288237] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:35.581 [2024-10-15 13:58:49.288245] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:35.582 [2024-10-15 13:58:49.288252] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:35.582 [2024-10-15 13:58:49.288259] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:35.582 [2024-10-15 13:58:49.288266] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:35.582 [2024-10-15 13:58:49.288273] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:24:35.582 [2024-10-15 13:58:49.288280] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:35.582 [2024-10-15 13:58:49.288286] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:35.582 [2024-10-15 13:58:49.288294] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:24:35.582 [2024-10-15 13:58:49.288300] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:35.582 [2024-10-15 13:58:49.288307] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:35.582 [2024-10-15 13:58:49.288313] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:24:35.582 [2024-10-15 13:58:49.288321] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:35.582 [2024-10-15 13:58:49.288328] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:35.582 [2024-10-15 13:58:49.288335] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:24:35.582 [2024-10-15 13:58:49.288341] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:35.582 [2024-10-15 13:58:49.288349] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:35.582 [2024-10-15 13:58:49.288356] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:24:35.582 [2024-10-15 13:58:49.288363] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:35.582 [2024-10-15 13:58:49.288370] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:35.582 [2024-10-15 13:58:49.288378] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:35.582 [2024-10-15 13:58:49.288385] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:35.582 [2024-10-15 13:58:49.288392] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:35.582 [2024-10-15 13:58:49.288400] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:35.582 [2024-10-15 13:58:49.288410] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:35.582 [2024-10-15 13:58:49.288417] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:35.582 
[2024-10-15 13:58:49.288424] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:35.582 [2024-10-15 13:58:49.288431] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:35.582 [2024-10-15 13:58:49.288437] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:35.582 [2024-10-15 13:58:49.288446] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:35.582 [2024-10-15 13:58:49.288455] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:35.582 [2024-10-15 13:58:49.288464] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:35.582 [2024-10-15 13:58:49.288471] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:24:35.582 [2024-10-15 13:58:49.288478] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:24:35.582 [2024-10-15 13:58:49.288485] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:24:35.582 [2024-10-15 13:58:49.288492] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:24:35.582 [2024-10-15 13:58:49.288499] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:24:35.582 [2024-10-15 13:58:49.288506] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:24:35.582 [2024-10-15 13:58:49.288513] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:24:35.582 [2024-10-15 13:58:49.288520] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:24:35.582 [2024-10-15 13:58:49.288527] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:24:35.582 [2024-10-15 13:58:49.288534] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:24:35.582 [2024-10-15 13:58:49.288541] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:24:35.582 [2024-10-15 13:58:49.288548] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:24:35.582 [2024-10-15 13:58:49.288555] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:24:35.582 [2024-10-15 13:58:49.288562] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:35.582 [2024-10-15 13:58:49.288570] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:35.582 [2024-10-15 13:58:49.288582] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:24:35.582 [2024-10-15 13:58:49.288589] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:35.582 [2024-10-15 13:58:49.288598] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:35.582 [2024-10-15 13:58:49.288605] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:35.582 [2024-10-15 13:58:49.288613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:35.582 [2024-10-15 13:58:49.288620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:35.582 [2024-10-15 13:58:49.288628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.675 ms 00:24:35.582 [2024-10-15 13:58:49.288635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:35.582 [2024-10-15 13:58:49.317800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:35.582 [2024-10-15 13:58:49.318000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:35.582 [2024-10-15 13:58:49.318052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.119 ms 00:24:35.582 [2024-10-15 13:58:49.318075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:35.582 [2024-10-15 13:58:49.318184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:35.582 [2024-10-15 13:58:49.318211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:35.582 [2024-10-15 13:58:49.318251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:24:35.582 [2024-10-15 13:58:49.318312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:35.582 [2024-10-15 13:58:49.366115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:35.582 [2024-10-15 13:58:49.366381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:35.582 [2024-10-15 13:58:49.366443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.710 ms 00:24:35.582 [2024-10-15 13:58:49.366467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:35.582 [2024-10-15 13:58:49.366555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:35.582 [2024-10-15 13:58:49.366580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:35.582 [2024-10-15 13:58:49.366600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:35.582 [2024-10-15 13:58:49.366619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:35.582 [2024-10-15 13:58:49.367122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:35.582 [2024-10-15 13:58:49.367209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:35.841 [2024-10-15 13:58:49.367511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.396 ms 00:24:35.841 [2024-10-15 13:58:49.367554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:35.841 [2024-10-15 13:58:49.367780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:35.841 [2024-10-15 13:58:49.367855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:35.841 [2024-10-15 13:58:49.367905] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.129 ms 00:24:35.842 [2024-10-15 13:58:49.367927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:35.842 [2024-10-15 13:58:49.382151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:35.842 [2024-10-15 13:58:49.382293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:35.842 [2024-10-15 13:58:49.382349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.186 ms 00:24:35.842 [2024-10-15 13:58:49.382376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:35.842 [2024-10-15 13:58:49.396000] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:24:35.842 [2024-10-15 13:58:49.396190] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:35.842 [2024-10-15 13:58:49.396272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:35.842 [2024-10-15 13:58:49.396295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:35.842 [2024-10-15 13:58:49.396318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.378 ms 00:24:35.842 [2024-10-15 13:58:49.396338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:35.842 [2024-10-15 13:58:49.421318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:35.842 [2024-10-15 13:58:49.421492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:35.842 [2024-10-15 13:58:49.421557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.926 ms 00:24:35.842 [2024-10-15 13:58:49.421579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:35.842 [2024-10-15 13:58:49.433463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:35.842 [2024-10-15 13:58:49.433578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:35.842 [2024-10-15 13:58:49.433626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.823 ms 00:24:35.842 [2024-10-15 13:58:49.433647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:35.842 [2024-10-15 13:58:49.444943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:35.842 [2024-10-15 13:58:49.445054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:24:35.842 [2024-10-15 13:58:49.445139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.252 ms 00:24:35.842 [2024-10-15 13:58:49.445162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:35.842 [2024-10-15 13:58:49.445804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:35.842 [2024-10-15 13:58:49.445893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:35.842 [2024-10-15 13:58:49.445945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.534 ms 00:24:35.842 [2024-10-15 13:58:49.445968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:35.842 [2024-10-15 13:58:49.505186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:35.842 [2024-10-15 13:58:49.505416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:35.842 [2024-10-15 13:58:49.505470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 59.179 ms 00:24:35.842 [2024-10-15 13:58:49.505499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:35.842 [2024-10-15 13:58:49.516586] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:35.842 [2024-10-15 13:58:49.519712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:35.842 [2024-10-15 13:58:49.519746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:35.842 [2024-10-15 13:58:49.519761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.124 ms 00:24:35.842 [2024-10-15 13:58:49.519769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:35.842 [2024-10-15 13:58:49.519901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:35.842 [2024-10-15 13:58:49.519913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:35.842 [2024-10-15 13:58:49.519923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:24:35.842 [2024-10-15 13:58:49.519931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:35.842 [2024-10-15 13:58:49.520670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:35.842 [2024-10-15 13:58:49.520706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:35.842 [2024-10-15 13:58:49.520717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.700 ms 00:24:35.842 [2024-10-15 13:58:49.520725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:35.842 [2024-10-15 13:58:49.520752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:35.842 [2024-10-15 13:58:49.520762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:35.842 [2024-10-15 13:58:49.520770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:24:35.842 [2024-10-15 13:58:49.520778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:35.842 [2024-10-15 13:58:49.520818] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:35.842 [2024-10-15 13:58:49.520829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:35.842 [2024-10-15 13:58:49.520840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:24:35.842 [2024-10-15 13:58:49.520848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:24:35.842 [2024-10-15 13:58:49.520856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:35.842 [2024-10-15 13:58:49.544351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:35.842 [2024-10-15 13:58:49.544389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:35.842 [2024-10-15 13:58:49.544402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.478 ms 00:24:35.842 [2024-10-15 13:58:49.544410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:35.842 [2024-10-15 13:58:49.544487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:35.842 [2024-10-15 13:58:49.544497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:35.842 [2024-10-15 13:58:49.544507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:24:35.842 [2024-10-15 13:58:49.544514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
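The trace above shows the FTL management state machine logging each startup step as an Action with a name, a duration in milliseconds, and a status (0 on success); the 'FTL startup' finish_msg that follows rolls them up into a single summary. A quick way to spot the slow steps is to pair each "name:" entry with the "duration:" entry that follows it. The sketch below is illustration only, not part of the test suite; it assumes the console output was saved one entry per line to a hypothetical file named console.log.

#!/usr/bin/env bash
# Sketch: rank FTL management steps by duration from a saved console log.
# console.log is a hypothetical capture of the output above, one entry per line.
log=console.log
awk '
  match($0, /name: .*/)             { name = substr($0, RSTART + 6) }
  match($0, /duration: [0-9.]+ ms/) { printf "%10s ms  %s\n", substr($0, RSTART + 10, RLENGTH - 13), name }
' "$log" | sort -rn | head

Run against the startup trace above, this would put "Restore P2L checkpoints" (59.179 ms) and "Initialize NV cache" (47.710 ms) at the top, together over a third of the 285.021 ms total reported in the summary below.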
00:24:35.842 [2024-10-15 13:58:49.545974] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 285.021 ms, result 0 00:24:37.215  [2024-10-15T13:58:51.935Z] Copying: 50/1024 [MB] (50 MBps) [2024-10-15T13:58:52.868Z] Copying: 96/1024 [MB] (45 MBps) [2024-10-15T13:58:53.801Z] Copying: 143/1024 [MB] (47 MBps) [2024-10-15T13:58:54.735Z] Copying: 189/1024 [MB] (45 MBps) [2024-10-15T13:58:56.106Z] Copying: 235/1024 [MB] (46 MBps) [2024-10-15T13:58:57.038Z] Copying: 281/1024 [MB] (46 MBps) [2024-10-15T13:58:57.971Z] Copying: 327/1024 [MB] (45 MBps) [2024-10-15T13:58:58.959Z] Copying: 372/1024 [MB] (45 MBps) [2024-10-15T13:58:59.892Z] Copying: 419/1024 [MB] (46 MBps) [2024-10-15T13:59:00.825Z] Copying: 468/1024 [MB] (48 MBps) [2024-10-15T13:59:01.757Z] Copying: 515/1024 [MB] (47 MBps) [2024-10-15T13:59:03.129Z] Copying: 562/1024 [MB] (47 MBps) [2024-10-15T13:59:04.063Z] Copying: 610/1024 [MB] (47 MBps) [2024-10-15T13:59:04.996Z] Copying: 657/1024 [MB] (47 MBps) [2024-10-15T13:59:05.936Z] Copying: 705/1024 [MB] (47 MBps) [2024-10-15T13:59:06.870Z] Copying: 751/1024 [MB] (46 MBps) [2024-10-15T13:59:07.804Z] Copying: 797/1024 [MB] (45 MBps) [2024-10-15T13:59:08.738Z] Copying: 843/1024 [MB] (46 MBps) [2024-10-15T13:59:10.113Z] Copying: 891/1024 [MB] (47 MBps) [2024-10-15T13:59:11.047Z] Copying: 939/1024 [MB] (48 MBps) [2024-10-15T13:59:11.613Z] Copying: 985/1024 [MB] (45 MBps) [2024-10-15T13:59:11.872Z] Copying: 1024/1024 [MB] (average 46 MBps)[2024-10-15 13:59:11.682158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.084 [2024-10-15 13:59:11.682251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:58.084 [2024-10-15 13:59:11.682272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:58.084 [2024-10-15 13:59:11.682286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.084 [2024-10-15 13:59:11.682318] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:58.084 [2024-10-15 13:59:11.687610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.084 [2024-10-15 13:59:11.687664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:58.084 [2024-10-15 13:59:11.687681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.267 ms 00:24:58.084 [2024-10-15 13:59:11.687694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.084 [2024-10-15 13:59:11.688042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.084 [2024-10-15 13:59:11.688065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:58.084 [2024-10-15 13:59:11.688079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.304 ms 00:24:58.084 [2024-10-15 13:59:11.688100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.084 [2024-10-15 13:59:11.693329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.084 [2024-10-15 13:59:11.693351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:58.084 [2024-10-15 13:59:11.693361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.208 ms 00:24:58.084 [2024-10-15 13:59:11.693369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.084 [2024-10-15 13:59:11.699539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:24:58.084 [2024-10-15 13:59:11.699576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:58.084 [2024-10-15 13:59:11.699586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.154 ms 00:24:58.084 [2024-10-15 13:59:11.699594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.084 [2024-10-15 13:59:11.723111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.084 [2024-10-15 13:59:11.723149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:58.084 [2024-10-15 13:59:11.723161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.459 ms 00:24:58.084 [2024-10-15 13:59:11.723169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.084 [2024-10-15 13:59:11.736812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.084 [2024-10-15 13:59:11.736993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:58.084 [2024-10-15 13:59:11.737011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.620 ms 00:24:58.084 [2024-10-15 13:59:11.737019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.084 [2024-10-15 13:59:11.739049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.084 [2024-10-15 13:59:11.739081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:58.084 [2024-10-15 13:59:11.739097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.002 ms 00:24:58.084 [2024-10-15 13:59:11.739105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.084 [2024-10-15 13:59:11.762187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.084 [2024-10-15 13:59:11.762230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:24:58.084 [2024-10-15 13:59:11.762241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.066 ms 00:24:58.084 [2024-10-15 13:59:11.762249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.084 [2024-10-15 13:59:11.784963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.084 [2024-10-15 13:59:11.785004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:24:58.084 [2024-10-15 13:59:11.785015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.695 ms 00:24:58.084 [2024-10-15 13:59:11.785022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.084 [2024-10-15 13:59:11.806864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.084 [2024-10-15 13:59:11.806897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:58.084 [2024-10-15 13:59:11.806908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.822 ms 00:24:58.084 [2024-10-15 13:59:11.806915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.084 [2024-10-15 13:59:11.828835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.084 [2024-10-15 13:59:11.828866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:58.084 [2024-10-15 13:59:11.828877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.877 ms 00:24:58.084 [2024-10-15 13:59:11.828885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.084 [2024-10-15 
13:59:11.828903] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:58.084 [2024-10-15 13:59:11.828917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:24:58.084 [2024-10-15 13:59:11.828927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:24:58.084 [2024-10-15 13:59:11.828936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:58.084 [2024-10-15 13:59:11.828943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:58.084 [2024-10-15 13:59:11.828951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:58.084 [2024-10-15 13:59:11.828959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:58.084 [2024-10-15 13:59:11.828966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:58.084 [2024-10-15 13:59:11.828973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:58.084 [2024-10-15 13:59:11.828981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:58.084 [2024-10-15 13:59:11.828989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:58.084 [2024-10-15 13:59:11.828997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:58.084 [2024-10-15 13:59:11.829004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:58.084 [2024-10-15 13:59:11.829012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:58.084 [2024-10-15 13:59:11.829021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:58.084 [2024-10-15 13:59:11.829028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:58.084 [2024-10-15 13:59:11.829036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:58.084 [2024-10-15 13:59:11.829045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:58.084 [2024-10-15 13:59:11.829052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:58.084 [2024-10-15 13:59:11.829059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:58.084 [2024-10-15 13:59:11.829067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:58.085 [2024-10-15 13:59:11.829075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:58.085 [2024-10-15 13:59:11.829083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:58.085 [2024-10-15 13:59:11.829091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:58.085 [2024-10-15 13:59:11.829098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:58.085 [2024-10-15 
13:59:11.829105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:58.085 [2024-10-15 13:59:11.829113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:58.085 [2024-10-15 13:59:11.829121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:58.085 [2024-10-15 13:59:11.829128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:58.085 [2024-10-15 13:59:11.829136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:58.085 [2024-10-15 13:59:11.829143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:58.085 [2024-10-15 13:59:11.829150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:58.085 [2024-10-15 13:59:11.829158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:58.085 [2024-10-15 13:59:11.829165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:58.085 [2024-10-15 13:59:11.829173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:58.085 [2024-10-15 13:59:11.829180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:58.085 [2024-10-15 13:59:11.829187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:58.085 [2024-10-15 13:59:11.829195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:58.085 [2024-10-15 13:59:11.829202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:58.085 [2024-10-15 13:59:11.829210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:58.085 [2024-10-15 13:59:11.829217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:58.085 [2024-10-15 13:59:11.829238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:58.085 [2024-10-15 13:59:11.829245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:58.085 [2024-10-15 13:59:11.829253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:58.085 [2024-10-15 13:59:11.829261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:58.085 [2024-10-15 13:59:11.829268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:58.085 [2024-10-15 13:59:11.829282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:58.085 [2024-10-15 13:59:11.829290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:58.085 [2024-10-15 13:59:11.829297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:58.085 [2024-10-15 13:59:11.829305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 
00:24:58.085 [2024-10-15 13:59:11.829312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:58.085 [2024-10-15 13:59:11.829319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:58.085 [2024-10-15 13:59:11.829326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:58.085 [2024-10-15 13:59:11.829333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:58.085 [2024-10-15 13:59:11.829340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:58.085 [2024-10-15 13:59:11.829348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:58.085 [2024-10-15 13:59:11.829355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:58.085 [2024-10-15 13:59:11.829363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:58.085 [2024-10-15 13:59:11.829370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:58.085 [2024-10-15 13:59:11.829377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:58.085 [2024-10-15 13:59:11.829385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:58.085 [2024-10-15 13:59:11.829392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:58.085 [2024-10-15 13:59:11.829400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:58.085 [2024-10-15 13:59:11.829407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:58.085 [2024-10-15 13:59:11.829415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:58.085 [2024-10-15 13:59:11.829422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:58.085 [2024-10-15 13:59:11.829429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:58.085 [2024-10-15 13:59:11.829437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:58.085 [2024-10-15 13:59:11.829444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:58.085 [2024-10-15 13:59:11.829451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:58.085 [2024-10-15 13:59:11.829458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:58.085 [2024-10-15 13:59:11.829465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:58.085 [2024-10-15 13:59:11.829472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:58.085 [2024-10-15 13:59:11.829480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:58.085 [2024-10-15 13:59:11.829487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 
wr_cnt: 0 state: free 00:24:58.085 [2024-10-15 13:59:11.829494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:58.085 [2024-10-15 13:59:11.829501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:58.085 [2024-10-15 13:59:11.829508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:58.085 [2024-10-15 13:59:11.829515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:58.085 [2024-10-15 13:59:11.829522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:58.085 [2024-10-15 13:59:11.829530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:58.085 [2024-10-15 13:59:11.829537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:58.085 [2024-10-15 13:59:11.829544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:58.085 [2024-10-15 13:59:11.829552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:58.085 [2024-10-15 13:59:11.829559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:58.085 [2024-10-15 13:59:11.829566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:58.085 [2024-10-15 13:59:11.829573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:58.085 [2024-10-15 13:59:11.829581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:58.085 [2024-10-15 13:59:11.829588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:58.085 [2024-10-15 13:59:11.829595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:58.085 [2024-10-15 13:59:11.829602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:58.085 [2024-10-15 13:59:11.829610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:58.085 [2024-10-15 13:59:11.829618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:58.085 [2024-10-15 13:59:11.829625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:58.085 [2024-10-15 13:59:11.829633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:58.085 [2024-10-15 13:59:11.829640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:58.085 [2024-10-15 13:59:11.829648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:58.085 [2024-10-15 13:59:11.829655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:58.085 [2024-10-15 13:59:11.829662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:58.085 [2024-10-15 13:59:11.829670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:58.085 [2024-10-15 13:59:11.829677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:58.085 [2024-10-15 13:59:11.829692] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:58.085 [2024-10-15 13:59:11.829705] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 53c34892-29d3-4e41-9c0d-fabe059a629d 00:24:58.085 [2024-10-15 13:59:11.829713] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:24:58.085 [2024-10-15 13:59:11.829722] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:24:58.085 [2024-10-15 13:59:11.829729] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:24:58.085 [2024-10-15 13:59:11.829736] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:24:58.085 [2024-10-15 13:59:11.829743] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:58.085 [2024-10-15 13:59:11.829750] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:58.085 [2024-10-15 13:59:11.829764] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:58.085 [2024-10-15 13:59:11.829771] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:58.085 [2024-10-15 13:59:11.829778] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:58.085 [2024-10-15 13:59:11.829785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.085 [2024-10-15 13:59:11.829792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:58.085 [2024-10-15 13:59:11.829800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.882 ms 00:24:58.085 [2024-10-15 13:59:11.829808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.086 [2024-10-15 13:59:11.841935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.086 [2024-10-15 13:59:11.841967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:58.086 [2024-10-15 13:59:11.841978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.107 ms 00:24:58.086 [2024-10-15 13:59:11.841987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.086 [2024-10-15 13:59:11.842356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.086 [2024-10-15 13:59:11.842372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:58.086 [2024-10-15 13:59:11.842381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.350 ms 00:24:58.086 [2024-10-15 13:59:11.842394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.343 [2024-10-15 13:59:11.874503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:58.343 [2024-10-15 13:59:11.874546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:58.344 [2024-10-15 13:59:11.874557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:58.344 [2024-10-15 13:59:11.874565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.344 [2024-10-15 13:59:11.874623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:58.344 [2024-10-15 13:59:11.874632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:58.344 [2024-10-15 13:59:11.874640] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:58.344 [2024-10-15 13:59:11.874652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.344 [2024-10-15 13:59:11.874714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:58.344 [2024-10-15 13:59:11.874724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:58.344 [2024-10-15 13:59:11.874731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:58.344 [2024-10-15 13:59:11.874739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.344 [2024-10-15 13:59:11.874754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:58.344 [2024-10-15 13:59:11.874761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:58.344 [2024-10-15 13:59:11.874769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:58.344 [2024-10-15 13:59:11.874775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.344 [2024-10-15 13:59:11.951922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:58.344 [2024-10-15 13:59:11.951970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:58.344 [2024-10-15 13:59:11.951982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:58.344 [2024-10-15 13:59:11.951990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.344 [2024-10-15 13:59:12.015797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:58.344 [2024-10-15 13:59:12.015844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:58.344 [2024-10-15 13:59:12.015856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:58.344 [2024-10-15 13:59:12.015864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.344 [2024-10-15 13:59:12.015934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:58.344 [2024-10-15 13:59:12.015944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:58.344 [2024-10-15 13:59:12.015952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:58.344 [2024-10-15 13:59:12.015960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.344 [2024-10-15 13:59:12.015993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:58.344 [2024-10-15 13:59:12.016001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:58.344 [2024-10-15 13:59:12.016009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:58.344 [2024-10-15 13:59:12.016016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.344 [2024-10-15 13:59:12.016118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:58.344 [2024-10-15 13:59:12.016130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:58.344 [2024-10-15 13:59:12.016138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:58.344 [2024-10-15 13:59:12.016146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.344 [2024-10-15 13:59:12.016177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:58.344 [2024-10-15 13:59:12.016187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Initialize superblock 00:24:58.344 [2024-10-15 13:59:12.016195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:58.344 [2024-10-15 13:59:12.016202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.344 [2024-10-15 13:59:12.016258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:58.344 [2024-10-15 13:59:12.016272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:58.344 [2024-10-15 13:59:12.016280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:58.344 [2024-10-15 13:59:12.016288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.344 [2024-10-15 13:59:12.016326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:58.344 [2024-10-15 13:59:12.016337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:58.344 [2024-10-15 13:59:12.016345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:58.344 [2024-10-15 13:59:12.016352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.344 [2024-10-15 13:59:12.016456] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 334.286 ms, result 0 00:24:58.909 00:24:58.909 00:24:58.909 13:59:12 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:25:01.437 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:25:01.437 13:59:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:25:01.437 13:59:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:25:01.437 13:59:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:01.437 13:59:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:25:01.437 13:59:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:25:01.438 13:59:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:25:01.438 13:59:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:25:01.438 13:59:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 76706 00:25:01.438 13:59:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@950 -- # '[' -z 76706 ']' 00:25:01.438 13:59:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@954 -- # kill -0 76706 00:25:01.438 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (76706) - No such process 00:25:01.438 Process with pid 76706 is not found 00:25:01.438 13:59:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@977 -- # echo 'Process with pid 76706 is not found' 00:25:01.438 13:59:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:25:01.695 13:59:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:25:01.695 Remove shared memory files 00:25:01.695 13:59:15 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:25:01.695 13:59:15 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:25:01.695 13:59:15 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:25:01.695 13:59:15 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 
-- # rm -f rm -f 00:25:01.695 13:59:15 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:25:01.696 13:59:15 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:25:01.696 ************************************ 00:25:01.696 END TEST ftl_dirty_shutdown 00:25:01.696 ************************************ 00:25:01.696 00:25:01.696 real 2m21.031s 00:25:01.696 user 2m37.624s 00:25:01.696 sys 0m23.739s 00:25:01.696 13:59:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:01.696 13:59:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:25:01.696 13:59:15 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:25:01.696 13:59:15 ftl -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:25:01.696 13:59:15 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:01.696 13:59:15 ftl -- common/autotest_common.sh@10 -- # set +x 00:25:01.696 ************************************ 00:25:01.696 START TEST ftl_upgrade_shutdown 00:25:01.696 ************************************ 00:25:01.696 13:59:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:25:01.954 * Looking for test storage... 00:25:01.954 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:25:01.954 13:59:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:25:01.954 13:59:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1691 -- # lcov --version 00:25:01.954 13:59:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:25:01.954 13:59:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:25:01.954 13:59:15 ftl.ftl_upgrade_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:01.954 13:59:15 ftl.ftl_upgrade_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:01.954 13:59:15 ftl.ftl_upgrade_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:01.954 13:59:15 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:25:01.954 13:59:15 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:25:01.954 13:59:15 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:25:01.954 13:59:15 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:25:01.954 13:59:15 ftl.ftl_upgrade_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:25:01.954 13:59:15 ftl.ftl_upgrade_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:25:01.954 13:59:15 ftl.ftl_upgrade_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:25:01.954 13:59:15 ftl.ftl_upgrade_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:01.954 13:59:15 ftl.ftl_upgrade_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:25:01.954 13:59:15 ftl.ftl_upgrade_shutdown -- scripts/common.sh@345 -- # : 1 00:25:01.954 13:59:15 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:01.954 13:59:15 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:01.954 13:59:15 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # decimal 1 00:25:01.954 13:59:15 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=1 00:25:01.954 13:59:15 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:01.954 13:59:15 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 1 00:25:01.954 13:59:15 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:25:01.954 13:59:15 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # decimal 2 00:25:01.954 13:59:15 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=2 00:25:01.954 13:59:15 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:01.954 13:59:15 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 2 00:25:01.954 13:59:15 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:25:01.954 13:59:15 ftl.ftl_upgrade_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:01.954 13:59:15 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:01.954 13:59:15 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # return 0 00:25:01.954 13:59:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:01.954 13:59:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:25:01.954 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:01.954 --rc genhtml_branch_coverage=1 00:25:01.954 --rc genhtml_function_coverage=1 00:25:01.954 --rc genhtml_legend=1 00:25:01.954 --rc geninfo_all_blocks=1 00:25:01.954 --rc geninfo_unexecuted_blocks=1 00:25:01.954 00:25:01.954 ' 00:25:01.954 13:59:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:25:01.954 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:01.954 --rc genhtml_branch_coverage=1 00:25:01.955 --rc genhtml_function_coverage=1 00:25:01.955 --rc genhtml_legend=1 00:25:01.955 --rc geninfo_all_blocks=1 00:25:01.955 --rc geninfo_unexecuted_blocks=1 00:25:01.955 00:25:01.955 ' 00:25:01.955 13:59:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:25:01.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:01.955 --rc genhtml_branch_coverage=1 00:25:01.955 --rc genhtml_function_coverage=1 00:25:01.955 --rc genhtml_legend=1 00:25:01.955 --rc geninfo_all_blocks=1 00:25:01.955 --rc geninfo_unexecuted_blocks=1 00:25:01.955 00:25:01.955 ' 00:25:01.955 13:59:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:25:01.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:01.955 --rc genhtml_branch_coverage=1 00:25:01.955 --rc genhtml_function_coverage=1 00:25:01.955 --rc genhtml_legend=1 00:25:01.955 --rc geninfo_all_blocks=1 00:25:01.955 --rc geninfo_unexecuted_blocks=1 00:25:01.955 00:25:01.955 ' 00:25:01.955 13:59:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:25:01.955 13:59:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:25:01.955 13:59:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:25:01.955 13:59:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:25:01.955 13:59:15 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:25:01.955 13:59:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:25:01.955 13:59:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:01.955 13:59:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:25:01.955 13:59:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:25:01.955 13:59:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:01.955 13:59:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:01.955 13:59:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:25:01.955 13:59:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:25:01.955 13:59:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:25:01.955 13:59:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:25:01.955 13:59:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:25:01.955 13:59:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:25:01.955 13:59:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:01.955 13:59:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:01.955 13:59:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:25:01.955 13:59:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:25:01.955 13:59:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:25:01.955 13:59:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:25:01.955 13:59:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:25:01.955 13:59:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:25:01.955 13:59:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:25:01.955 13:59:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:25:01.955 13:59:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:01.955 13:59:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:01.955 13:59:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:01.955 13:59:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:25:01.955 13:59:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:25:01.955 13:59:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:25:01.955 13:59:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:25:01.955 13:59:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:25:01.955 13:59:15 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:25:01.955 13:59:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:25:01.955 13:59:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:25:01.955 13:59:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:25:01.955 13:59:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:25:01.955 13:59:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:25:01.955 13:59:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:25:01.955 13:59:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:25:01.955 13:59:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:25:01.955 13:59:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:25:01.955 13:59:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:25:01.955 13:59:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=78296 00:25:01.955 13:59:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:25:01.955 13:59:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 78296 00:25:01.955 13:59:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 78296 ']' 00:25:01.955 13:59:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:01.955 13:59:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:01.955 13:59:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:01.955 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:01.955 13:59:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:01.955 13:59:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:25:01.955 13:59:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:25:01.955 [2024-10-15 13:59:15.665186] Starting SPDK v25.01-pre git sha1 5a8c76d99 / DPDK 24.03.0 initialization... 
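With the target up and listening on /var/tmp/spdk.sock, the harness below attaches the base NVMe controller as basen1 and sizes it by running bdev_get_bdevs -b basen1 and extracting block_size and num_blocks from the JSON with separate jq calls. The same computation can be condensed into a single query; this is a sketch only, assuming the target is already running and the rpc.py path matches the repo layout shown in this log.

# Sketch: compute a bdev's size in MiB with one jq expression.
# For basen1 below: 4096 * 1310720 / 1048576 = 5120, matching the
# bdev_size=5120 the get_bdev_size helper derives step by step.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
"$rpc" bdev_get_bdevs -b basen1 |
  jq '.[0] | .block_size * .num_blocks / (1024 * 1024)'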
00:25:01.955 [2024-10-15 13:59:15.665682] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78296 ] 00:25:02.212 [2024-10-15 13:59:15.815938] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:02.212 [2024-10-15 13:59:15.916427] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:02.777 13:59:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:02.777 13:59:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:25:02.777 13:59:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:25:02.777 13:59:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:25:02.777 13:59:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:25:02.777 13:59:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:25:02.777 13:59:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:25:02.777 13:59:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:25:02.777 13:59:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:25:02.777 13:59:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:25:02.777 13:59:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:25:02.777 13:59:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:25:02.777 13:59:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:25:02.777 13:59:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:25:02.777 13:59:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:25:02.777 13:59:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:25:02.777 13:59:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:25:02.777 13:59:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:25:02.777 13:59:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:25:02.777 13:59:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:25:02.777 13:59:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:25:02.777 13:59:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:25:02.777 13:59:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:25:03.036 13:59:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:25:03.036 13:59:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:25:03.036 13:59:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:25:03.036 13:59:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=basen1 00:25:03.036 13:59:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:25:03.036 13:59:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:25:03.036 13:59:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1381 
-- # local nb 00:25:03.036 13:59:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:25:03.294 13:59:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:25:03.294 { 00:25:03.294 "name": "basen1", 00:25:03.294 "aliases": [ 00:25:03.294 "1fc3f3ff-2be2-4e1d-a665-e36b21fc7d83" 00:25:03.294 ], 00:25:03.294 "product_name": "NVMe disk", 00:25:03.294 "block_size": 4096, 00:25:03.294 "num_blocks": 1310720, 00:25:03.294 "uuid": "1fc3f3ff-2be2-4e1d-a665-e36b21fc7d83", 00:25:03.294 "numa_id": -1, 00:25:03.294 "assigned_rate_limits": { 00:25:03.294 "rw_ios_per_sec": 0, 00:25:03.294 "rw_mbytes_per_sec": 0, 00:25:03.294 "r_mbytes_per_sec": 0, 00:25:03.294 "w_mbytes_per_sec": 0 00:25:03.294 }, 00:25:03.294 "claimed": true, 00:25:03.294 "claim_type": "read_many_write_one", 00:25:03.294 "zoned": false, 00:25:03.294 "supported_io_types": { 00:25:03.294 "read": true, 00:25:03.294 "write": true, 00:25:03.294 "unmap": true, 00:25:03.294 "flush": true, 00:25:03.294 "reset": true, 00:25:03.294 "nvme_admin": true, 00:25:03.294 "nvme_io": true, 00:25:03.294 "nvme_io_md": false, 00:25:03.294 "write_zeroes": true, 00:25:03.294 "zcopy": false, 00:25:03.294 "get_zone_info": false, 00:25:03.294 "zone_management": false, 00:25:03.294 "zone_append": false, 00:25:03.294 "compare": true, 00:25:03.294 "compare_and_write": false, 00:25:03.294 "abort": true, 00:25:03.294 "seek_hole": false, 00:25:03.294 "seek_data": false, 00:25:03.294 "copy": true, 00:25:03.294 "nvme_iov_md": false 00:25:03.294 }, 00:25:03.294 "driver_specific": { 00:25:03.294 "nvme": [ 00:25:03.294 { 00:25:03.294 "pci_address": "0000:00:11.0", 00:25:03.294 "trid": { 00:25:03.294 "trtype": "PCIe", 00:25:03.294 "traddr": "0000:00:11.0" 00:25:03.294 }, 00:25:03.294 "ctrlr_data": { 00:25:03.294 "cntlid": 0, 00:25:03.294 "vendor_id": "0x1b36", 00:25:03.294 "model_number": "QEMU NVMe Ctrl", 00:25:03.294 "serial_number": "12341", 00:25:03.294 "firmware_revision": "8.0.0", 00:25:03.294 "subnqn": "nqn.2019-08.org.qemu:12341", 00:25:03.294 "oacs": { 00:25:03.294 "security": 0, 00:25:03.294 "format": 1, 00:25:03.294 "firmware": 0, 00:25:03.294 "ns_manage": 1 00:25:03.294 }, 00:25:03.294 "multi_ctrlr": false, 00:25:03.294 "ana_reporting": false 00:25:03.294 }, 00:25:03.294 "vs": { 00:25:03.294 "nvme_version": "1.4" 00:25:03.294 }, 00:25:03.294 "ns_data": { 00:25:03.294 "id": 1, 00:25:03.294 "can_share": false 00:25:03.294 } 00:25:03.294 } 00:25:03.294 ], 00:25:03.294 "mp_policy": "active_passive" 00:25:03.294 } 00:25:03.294 } 00:25:03.294 ]' 00:25:03.294 13:59:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:25:03.294 13:59:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:25:03.294 13:59:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:25:03.295 13:59:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # nb=1310720 00:25:03.295 13:59:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:25:03.295 13:59:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # echo 5120 00:25:03.295 13:59:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:25:03.295 13:59:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:25:03.295 13:59:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:25:03.295 13:59:17 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:25:03.295 13:59:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:25:03.553 13:59:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=5c4a237e-3880-49ee-b038-5526a1d2f272 00:25:03.553 13:59:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:25:03.553 13:59:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 5c4a237e-3880-49ee-b038-5526a1d2f272 00:25:03.812 13:59:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:25:04.070 13:59:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=1665019c-a740-4c79-a073-d4a025337feb 00:25:04.070 13:59:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u 1665019c-a740-4c79-a073-d4a025337feb 00:25:04.328 13:59:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=34213dad-1195-44e4-8352-120d73fa5725 00:25:04.328 13:59:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z 34213dad-1195-44e4-8352-120d73fa5725 ]] 00:25:04.328 13:59:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 34213dad-1195-44e4-8352-120d73fa5725 5120 00:25:04.328 13:59:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:25:04.328 13:59:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:25:04.328 13:59:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=34213dad-1195-44e4-8352-120d73fa5725 00:25:04.328 13:59:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:25:04.328 13:59:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size 34213dad-1195-44e4-8352-120d73fa5725 00:25:04.328 13:59:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=34213dad-1195-44e4-8352-120d73fa5725 00:25:04.328 13:59:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:25:04.328 13:59:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:25:04.328 13:59:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:25:04.328 13:59:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 34213dad-1195-44e4-8352-120d73fa5725 00:25:04.328 13:59:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:25:04.328 { 00:25:04.328 "name": "34213dad-1195-44e4-8352-120d73fa5725", 00:25:04.328 "aliases": [ 00:25:04.328 "lvs/basen1p0" 00:25:04.328 ], 00:25:04.328 "product_name": "Logical Volume", 00:25:04.328 "block_size": 4096, 00:25:04.328 "num_blocks": 5242880, 00:25:04.328 "uuid": "34213dad-1195-44e4-8352-120d73fa5725", 00:25:04.328 "assigned_rate_limits": { 00:25:04.328 "rw_ios_per_sec": 0, 00:25:04.328 "rw_mbytes_per_sec": 0, 00:25:04.328 "r_mbytes_per_sec": 0, 00:25:04.328 "w_mbytes_per_sec": 0 00:25:04.328 }, 00:25:04.328 "claimed": false, 00:25:04.328 "zoned": false, 00:25:04.328 "supported_io_types": { 00:25:04.328 "read": true, 00:25:04.328 "write": true, 00:25:04.328 "unmap": true, 00:25:04.328 "flush": false, 00:25:04.328 "reset": true, 00:25:04.328 "nvme_admin": false, 00:25:04.328 "nvme_io": false, 00:25:04.328 "nvme_io_md": false, 00:25:04.328 "write_zeroes": 
true, 00:25:04.328 "zcopy": false, 00:25:04.328 "get_zone_info": false, 00:25:04.328 "zone_management": false, 00:25:04.328 "zone_append": false, 00:25:04.328 "compare": false, 00:25:04.328 "compare_and_write": false, 00:25:04.328 "abort": false, 00:25:04.328 "seek_hole": true, 00:25:04.328 "seek_data": true, 00:25:04.328 "copy": false, 00:25:04.328 "nvme_iov_md": false 00:25:04.328 }, 00:25:04.328 "driver_specific": { 00:25:04.328 "lvol": { 00:25:04.328 "lvol_store_uuid": "1665019c-a740-4c79-a073-d4a025337feb", 00:25:04.328 "base_bdev": "basen1", 00:25:04.328 "thin_provision": true, 00:25:04.328 "num_allocated_clusters": 0, 00:25:04.328 "snapshot": false, 00:25:04.328 "clone": false, 00:25:04.328 "esnap_clone": false 00:25:04.328 } 00:25:04.328 } 00:25:04.328 } 00:25:04.328 ]' 00:25:04.328 13:59:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:25:04.587 13:59:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:25:04.587 13:59:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:25:04.587 13:59:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # nb=5242880 00:25:04.587 13:59:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=20480 00:25:04.587 13:59:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # echo 20480 00:25:04.587 13:59:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:25:04.587 13:59:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:25:04.587 13:59:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:25:04.846 13:59:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:25:04.846 13:59:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:25:04.846 13:59:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:25:05.105 13:59:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:25:05.105 13:59:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:25:05.105 13:59:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d 34213dad-1195-44e4-8352-120d73fa5725 -c cachen1p0 --l2p_dram_limit 2 00:25:05.105 [2024-10-15 13:59:18.859533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:05.105 [2024-10-15 13:59:18.859582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:25:05.105 [2024-10-15 13:59:18.859594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:25:05.105 [2024-10-15 13:59:18.859601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:05.105 [2024-10-15 13:59:18.859648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:05.105 [2024-10-15 13:59:18.859658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:25:05.105 [2024-10-15 13:59:18.859666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.032 ms 00:25:05.105 [2024-10-15 13:59:18.859672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:05.105 [2024-10-15 13:59:18.859689] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:25:05.105 [2024-10-15 
13:59:18.860319] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:25:05.105 [2024-10-15 13:59:18.860337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:05.105 [2024-10-15 13:59:18.860343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:25:05.105 [2024-10-15 13:59:18.860351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.650 ms 00:25:05.105 [2024-10-15 13:59:18.860357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:05.105 [2024-10-15 13:59:18.860386] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID fc31afad-a5b8-407a-8be2-9affb9279993 00:25:05.105 [2024-10-15 13:59:18.861382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:05.105 [2024-10-15 13:59:18.861416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:25:05.105 [2024-10-15 13:59:18.861424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:25:05.105 [2024-10-15 13:59:18.861431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:05.105 [2024-10-15 13:59:18.866204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:05.105 [2024-10-15 13:59:18.866240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:25:05.106 [2024-10-15 13:59:18.866248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.715 ms 00:25:05.106 [2024-10-15 13:59:18.866256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:05.106 [2024-10-15 13:59:18.866299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:05.106 [2024-10-15 13:59:18.866310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:25:05.106 [2024-10-15 13:59:18.866317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.026 ms 00:25:05.106 [2024-10-15 13:59:18.866327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:05.106 [2024-10-15 13:59:18.866369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:05.106 [2024-10-15 13:59:18.866379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:25:05.106 [2024-10-15 13:59:18.866385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:25:05.106 [2024-10-15 13:59:18.866393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:05.106 [2024-10-15 13:59:18.866410] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:25:05.106 [2024-10-15 13:59:18.869291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:05.106 [2024-10-15 13:59:18.869442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:25:05.106 [2024-10-15 13:59:18.869457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.885 ms 00:25:05.106 [2024-10-15 13:59:18.869467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:05.106 [2024-10-15 13:59:18.869491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:05.106 [2024-10-15 13:59:18.869497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:25:05.106 [2024-10-15 13:59:18.869505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:25:05.106 [2024-10-15 13:59:18.869511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:25:05.106 [2024-10-15 13:59:18.869526] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:25:05.106 [2024-10-15 13:59:18.869632] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:25:05.106 [2024-10-15 13:59:18.869643] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:25:05.106 [2024-10-15 13:59:18.869652] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:25:05.106 [2024-10-15 13:59:18.869661] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:25:05.106 [2024-10-15 13:59:18.869668] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:25:05.106 [2024-10-15 13:59:18.869676] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:25:05.106 [2024-10-15 13:59:18.869681] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:25:05.106 [2024-10-15 13:59:18.869688] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:25:05.106 [2024-10-15 13:59:18.869694] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:25:05.106 [2024-10-15 13:59:18.869701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:05.106 [2024-10-15 13:59:18.869708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:25:05.106 [2024-10-15 13:59:18.869714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.176 ms 00:25:05.106 [2024-10-15 13:59:18.869720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:05.106 [2024-10-15 13:59:18.869785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:05.106 [2024-10-15 13:59:18.869791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:25:05.106 [2024-10-15 13:59:18.869800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.052 ms 00:25:05.106 [2024-10-15 13:59:18.869810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:05.106 [2024-10-15 13:59:18.869885] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:25:05.106 [2024-10-15 13:59:18.869892] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:25:05.106 [2024-10-15 13:59:18.869901] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:25:05.106 [2024-10-15 13:59:18.869907] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:05.106 [2024-10-15 13:59:18.869914] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:25:05.106 [2024-10-15 13:59:18.869919] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:25:05.106 [2024-10-15 13:59:18.869925] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:25:05.106 [2024-10-15 13:59:18.869930] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:25:05.106 [2024-10-15 13:59:18.869936] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:25:05.106 [2024-10-15 13:59:18.869941] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:05.106 [2024-10-15 13:59:18.869947] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:25:05.106 [2024-10-15 13:59:18.869952] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl] offset: 14.75 MiB 00:25:05.106 [2024-10-15 13:59:18.869959] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:05.106 [2024-10-15 13:59:18.869964] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:25:05.106 [2024-10-15 13:59:18.869970] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:25:05.106 [2024-10-15 13:59:18.869975] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:05.106 [2024-10-15 13:59:18.869983] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:25:05.106 [2024-10-15 13:59:18.869988] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:25:05.106 [2024-10-15 13:59:18.869994] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:05.106 [2024-10-15 13:59:18.869999] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:25:05.106 [2024-10-15 13:59:18.870007] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:25:05.106 [2024-10-15 13:59:18.870012] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:25:05.106 [2024-10-15 13:59:18.870018] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:25:05.106 [2024-10-15 13:59:18.870023] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:25:05.106 [2024-10-15 13:59:18.870029] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:25:05.106 [2024-10-15 13:59:18.870034] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:25:05.106 [2024-10-15 13:59:18.870040] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:25:05.106 [2024-10-15 13:59:18.870044] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:25:05.106 [2024-10-15 13:59:18.870051] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:25:05.106 [2024-10-15 13:59:18.870055] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:25:05.106 [2024-10-15 13:59:18.870062] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:25:05.106 [2024-10-15 13:59:18.870067] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:25:05.106 [2024-10-15 13:59:18.870074] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:25:05.106 [2024-10-15 13:59:18.870081] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:05.106 [2024-10-15 13:59:18.870088] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:25:05.106 [2024-10-15 13:59:18.870093] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:25:05.106 [2024-10-15 13:59:18.870099] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:05.106 [2024-10-15 13:59:18.870104] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:25:05.106 [2024-10-15 13:59:18.870110] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:25:05.106 [2024-10-15 13:59:18.870115] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:05.106 [2024-10-15 13:59:18.870121] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:25:05.106 [2024-10-15 13:59:18.870126] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:25:05.106 [2024-10-15 13:59:18.870132] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:05.106 [2024-10-15 13:59:18.870137] ftl_layout.c: 775:ftl_layout_dump: 
*NOTICE*: [FTL][ftl] Base device layout: 00:25:05.106 [2024-10-15 13:59:18.870144] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:25:05.106 [2024-10-15 13:59:18.870149] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:25:05.106 [2024-10-15 13:59:18.870156] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:05.106 [2024-10-15 13:59:18.870162] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:25:05.106 [2024-10-15 13:59:18.870171] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:25:05.106 [2024-10-15 13:59:18.870176] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:25:05.106 [2024-10-15 13:59:18.870183] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:25:05.106 [2024-10-15 13:59:18.870187] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:25:05.106 [2024-10-15 13:59:18.870194] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:25:05.106 [2024-10-15 13:59:18.870201] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:25:05.106 [2024-10-15 13:59:18.870210] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:05.106 [2024-10-15 13:59:18.870216] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:25:05.106 [2024-10-15 13:59:18.870239] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:25:05.106 [2024-10-15 13:59:18.870244] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:25:05.106 [2024-10-15 13:59:18.870251] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:25:05.106 [2024-10-15 13:59:18.870257] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:25:05.106 [2024-10-15 13:59:18.870263] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:25:05.106 [2024-10-15 13:59:18.870269] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:25:05.107 [2024-10-15 13:59:18.870275] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:25:05.107 [2024-10-15 13:59:18.870280] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:25:05.107 [2024-10-15 13:59:18.870288] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:25:05.107 [2024-10-15 13:59:18.870301] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:25:05.107 [2024-10-15 13:59:18.870308] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:25:05.107 [2024-10-15 13:59:18.870314] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:25:05.107 [2024-10-15 13:59:18.870321] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:25:05.107 [2024-10-15 13:59:18.870327] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:25:05.107 [2024-10-15 13:59:18.870334] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:05.107 [2024-10-15 13:59:18.870343] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:05.107 [2024-10-15 13:59:18.870350] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:25:05.107 [2024-10-15 13:59:18.870355] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:25:05.107 [2024-10-15 13:59:18.870362] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:25:05.107 [2024-10-15 13:59:18.870368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:05.107 [2024-10-15 13:59:18.870378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:25:05.107 [2024-10-15 13:59:18.870384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.536 ms 00:25:05.107 [2024-10-15 13:59:18.870391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:05.107 [2024-10-15 13:59:18.870433] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 
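The startup trace above (layout setup, the region dump, and the NV cache scrub that follows) is all output of a single RPC issued earlier in this run. A minimal sketch of that create sequence, using this run's device names and lvol UUID; per the l2p notice later in the log, the --l2p_dram_limit value is interpreted in MiB here:

    # Split 5120 MiB off the cache controller, then build the FTL bdev from
    # the thin-provisioned lvol (base device) and the split (NV cache).
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc bdev_split_create cachen1 -s 5120 1   # -> cachen1p0
    $rpc -t 60 bdev_ftl_create -b ftl -d 34213dad-1195-44e4-8352-120d73fa5725 -c cachen1p0 --l2p_dram_limit 2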
00:25:05.107 [2024-10-15 13:59:18.870444] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:25:07.639 [2024-10-15 13:59:21.378241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:07.639 [2024-10-15 13:59:21.378307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:25:07.639 [2024-10-15 13:59:21.378322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2507.798 ms 00:25:07.639 [2024-10-15 13:59:21.378333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:07.639 [2024-10-15 13:59:21.403693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:07.639 [2024-10-15 13:59:21.403744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:25:07.639 [2024-10-15 13:59:21.403756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.152 ms 00:25:07.639 [2024-10-15 13:59:21.403766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:07.639 [2024-10-15 13:59:21.403851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:07.639 [2024-10-15 13:59:21.403864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:25:07.639 [2024-10-15 13:59:21.403872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:25:07.639 [2024-10-15 13:59:21.403884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:07.897 [2024-10-15 13:59:21.434578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:07.897 [2024-10-15 13:59:21.434625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:25:07.897 [2024-10-15 13:59:21.434636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 30.657 ms 00:25:07.897 [2024-10-15 13:59:21.434646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:07.897 [2024-10-15 13:59:21.434681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:07.897 [2024-10-15 13:59:21.434692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:25:07.897 [2024-10-15 13:59:21.434701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:25:07.897 [2024-10-15 13:59:21.434711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:07.897 [2024-10-15 13:59:21.435067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:07.897 [2024-10-15 13:59:21.435085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:25:07.897 [2024-10-15 13:59:21.435094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.300 ms 00:25:07.897 [2024-10-15 13:59:21.435103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:07.897 [2024-10-15 13:59:21.435151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:07.897 [2024-10-15 13:59:21.435161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:25:07.897 [2024-10-15 13:59:21.435168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.022 ms 00:25:07.897 [2024-10-15 13:59:21.435179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:07.897 [2024-10-15 13:59:21.449204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:07.897 [2024-10-15 13:59:21.449256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:25:07.897 [2024-10-15 13:59:21.449267] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.004 ms 00:25:07.897 [2024-10-15 13:59:21.449278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:07.897 [2024-10-15 13:59:21.461026] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:25:07.898 [2024-10-15 13:59:21.462030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:07.898 [2024-10-15 13:59:21.462060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:25:07.898 [2024-10-15 13:59:21.462073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.666 ms 00:25:07.898 [2024-10-15 13:59:21.462080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:07.898 [2024-10-15 13:59:21.507724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:07.898 [2024-10-15 13:59:21.507788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:25:07.898 [2024-10-15 13:59:21.507805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 45.604 ms 00:25:07.898 [2024-10-15 13:59:21.507814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:07.898 [2024-10-15 13:59:21.507908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:07.898 [2024-10-15 13:59:21.507918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:25:07.898 [2024-10-15 13:59:21.507931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.047 ms 00:25:07.898 [2024-10-15 13:59:21.507943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:07.898 [2024-10-15 13:59:21.530942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:07.898 [2024-10-15 13:59:21.530990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:25:07.898 [2024-10-15 13:59:21.531005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 22.956 ms 00:25:07.898 [2024-10-15 13:59:21.531013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:07.898 [2024-10-15 13:59:21.553122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:07.898 [2024-10-15 13:59:21.553166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:25:07.898 [2024-10-15 13:59:21.553181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 22.060 ms 00:25:07.898 [2024-10-15 13:59:21.553189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:07.898 [2024-10-15 13:59:21.553789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:07.898 [2024-10-15 13:59:21.553807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:25:07.898 [2024-10-15 13:59:21.553818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.543 ms 00:25:07.898 [2024-10-15 13:59:21.553825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:07.898 [2024-10-15 13:59:21.624321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:07.898 [2024-10-15 13:59:21.624555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:25:07.898 [2024-10-15 13:59:21.624583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 70.456 ms 00:25:07.898 [2024-10-15 13:59:21.624592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:07.898 [2024-10-15 13:59:21.648978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:25:07.898 [2024-10-15 13:59:21.649030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:25:07.898 [2024-10-15 13:59:21.649053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.304 ms 00:25:07.898 [2024-10-15 13:59:21.649061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:07.898 [2024-10-15 13:59:21.673229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:07.898 [2024-10-15 13:59:21.673273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:25:07.898 [2024-10-15 13:59:21.673286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.111 ms 00:25:07.898 [2024-10-15 13:59:21.673294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:08.156 [2024-10-15 13:59:21.696318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:08.157 [2024-10-15 13:59:21.696377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:25:08.157 [2024-10-15 13:59:21.696392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 22.977 ms 00:25:08.157 [2024-10-15 13:59:21.696402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:08.157 [2024-10-15 13:59:21.696450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:08.157 [2024-10-15 13:59:21.696459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:25:08.157 [2024-10-15 13:59:21.696471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:25:08.157 [2024-10-15 13:59:21.696479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:08.157 [2024-10-15 13:59:21.696560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:08.157 [2024-10-15 13:59:21.696569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:25:08.157 [2024-10-15 13:59:21.696579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.035 ms 00:25:08.157 [2024-10-15 13:59:21.696586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:08.157 [2024-10-15 13:59:21.697452] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 2837.473 ms, result 0 00:25:08.157 { 00:25:08.157 "name": "ftl", 00:25:08.157 "uuid": "fc31afad-a5b8-407a-8be2-9affb9279993" 00:25:08.157 } 00:25:08.157 13:59:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:25:08.157 [2024-10-15 13:59:21.900841] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:08.157 13:59:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:25:08.415 13:59:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:25:08.674 [2024-10-15 13:59:22.309262] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:25:08.674 13:59:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:25:08.932 [2024-10-15 13:59:22.513610] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:25:08.932 13:59:22 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:25:09.191 Fill FTL, iteration 1 00:25:09.191 13:59:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:25:09.191 13:59:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:25:09.191 13:59:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:25:09.191 13:59:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:25:09.191 13:59:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:25:09.191 13:59:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:25:09.191 13:59:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:25:09.191 13:59:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:25:09.191 13:59:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:25:09.191 13:59:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:25:09.191 13:59:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:25:09.191 13:59:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:25:09.191 13:59:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:25:09.191 13:59:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:25:09.191 13:59:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:25:09.191 13:59:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:25:09.191 13:59:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=78407 00:25:09.191 13:59:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:25:09.192 13:59:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:25:09.192 13:59:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 78407 /var/tmp/spdk.tgt.sock 00:25:09.192 13:59:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 78407 ']' 00:25:09.192 13:59:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:25:09.192 13:59:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:09.192 13:59:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:25:09.192 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:25:09.192 13:59:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:09.192 13:59:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:25:09.192 [2024-10-15 13:59:22.943777] Starting SPDK v25.01-pre git sha1 5a8c76d99 / DPDK 24.03.0 initialization... 
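Before the fill begins, the preceding common.sh steps (@121 through @124) have already exported the new FTL bdev over NVMe/TCP. Condensed into plain RPCs, with flags exactly as traced above:

    # Export the FTL bdev as namespace 1 of a TCP subsystem listening on 127.0.0.1:4420.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport --trtype TCP
    $rpc nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1
    $rpc nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl
    $rpc nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1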
00:25:09.192 [2024-10-15 13:59:22.944112] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78407 ] 00:25:09.451 [2024-10-15 13:59:23.090865] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:09.451 [2024-10-15 13:59:23.191309] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:10.019 13:59:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:10.019 13:59:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:25:10.019 13:59:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:25:10.286 ftln1 00:25:10.286 13:59:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:25:10.286 13:59:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:25:10.545 13:59:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:25:10.545 13:59:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 78407 00:25:10.545 13:59:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@950 -- # '[' -z 78407 ']' 00:25:10.545 13:59:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # kill -0 78407 00:25:10.545 13:59:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # uname 00:25:10.545 13:59:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:10.545 13:59:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 78407 00:25:10.545 killing process with pid 78407 00:25:10.545 13:59:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:25:10.545 13:59:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:25:10.545 13:59:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@968 -- # echo 'killing process with pid 78407' 00:25:10.545 13:59:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@969 -- # kill 78407 00:25:10.545 13:59:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@974 -- # wait 78407 00:25:11.920 13:59:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:25:11.920 13:59:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:25:11.920 [2024-10-15 13:59:25.642180] Starting SPDK v25.01-pre git sha1 5a8c76d99 / DPDK 24.03.0 initialization... 
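tcp_dd drives spdk_dd from a second SPDK app on its own RPC socket; the initiator setup traced above reduces to a single attach, which surfaces the exported namespace as bdev ftln1:

    # Initiator side: attach to the TCP subsystem via the spdk.tgt.sock instance;
    # controller name 'ftl' plus namespace 1 yields the bdev 'ftln1' used by spdk_dd.
    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock"
    $rpc bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0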
00:25:11.920 [2024-10-15 13:59:25.642285] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78454 ] 00:25:12.178 [2024-10-15 13:59:25.783228] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:12.178 [2024-10-15 13:59:25.866423] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:13.553  [2024-10-15T13:59:28.274Z] Copying: 261/1024 [MB] (261 MBps) [2024-10-15T13:59:29.207Z] Copying: 510/1024 [MB] (249 MBps) [2024-10-15T13:59:30.580Z] Copying: 749/1024 [MB] (239 MBps) [2024-10-15T13:59:30.580Z] Copying: 1012/1024 [MB] (263 MBps) [2024-10-15T13:59:30.838Z] Copying: 1024/1024 [MB] (average 252 MBps) 00:25:17.050 00:25:17.050 13:59:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:25:17.050 13:59:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:25:17.050 Calculate MD5 checksum, iteration 1 00:25:17.050 13:59:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:25:17.051 13:59:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:25:17.051 13:59:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:25:17.051 13:59:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:25:17.051 13:59:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:25:17.051 13:59:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:25:17.309 [2024-10-15 13:59:30.884148] Starting SPDK v25.01-pre git sha1 5a8c76d99 / DPDK 24.03.0 initialization... 
00:25:17.309 [2024-10-15 13:59:30.884286] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78507 ] 00:25:17.309 [2024-10-15 13:59:31.034809] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:17.567 [2024-10-15 13:59:31.118859] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:18.941  [2024-10-15T13:59:33.294Z] Copying: 675/1024 [MB] (675 MBps) [2024-10-15T13:59:33.860Z] Copying: 1024/1024 [MB] (average 645 MBps) 00:25:20.072 00:25:20.072 13:59:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:25:20.072 13:59:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:25:21.974 13:59:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:25:21.974 Fill FTL, iteration 2 00:25:21.974 13:59:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=dfa2aee09e5d74b663e6e69eecf50034 00:25:21.974 13:59:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:25:21.974 13:59:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:25:21.974 13:59:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:25:21.974 13:59:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:25:21.974 13:59:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:25:21.974 13:59:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:25:21.974 13:59:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:25:21.974 13:59:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:25:21.974 13:59:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:25:22.233 [2024-10-15 13:59:35.818071] Starting SPDK v25.01-pre git sha1 5a8c76d99 / DPDK 24.03.0 initialization... 
00:25:22.233 [2024-10-15 13:59:35.818342] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78566 ] 00:25:22.233 [2024-10-15 13:59:35.968540] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:22.492 [2024-10-15 13:59:36.068692] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:23.867  [2024-10-15T13:59:38.589Z] Copying: 213/1024 [MB] (213 MBps) [2024-10-15T13:59:39.523Z] Copying: 443/1024 [MB] (230 MBps) [2024-10-15T13:59:40.456Z] Copying: 677/1024 [MB] (234 MBps) [2024-10-15T13:59:41.389Z] Copying: 892/1024 [MB] (215 MBps) [2024-10-15T13:59:41.954Z] Copying: 1024/1024 [MB] (average 221 MBps) 00:25:28.166 00:25:28.166 Calculate MD5 checksum, iteration 2 00:25:28.166 13:59:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:25:28.166 13:59:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:25:28.166 13:59:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:25:28.166 13:59:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:25:28.166 13:59:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:25:28.166 13:59:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:25:28.167 13:59:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:25:28.167 13:59:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:25:28.167 [2024-10-15 13:59:41.732495] Starting SPDK v25.01-pre git sha1 5a8c76d99 / DPDK 24.03.0 initialization... 
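The four spdk_dd passes above (two fills, two read-backs) follow one pattern from upgrade_shutdown.sh. A condensed sketch with this run's parameters; tcp_dd is the test helper wrapping spdk_dd, and $testfile stands in for the test/ftl/file path used above:

    # Two 1 GiB iterations: write random data into ftln1 at increasing offsets,
    # read each region back out, and record its MD5 in sums[] for later checks.
    seek=0; skip=0; sums=()
    for (( i = 0; i < 2; i++ )); do
        tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=$seek
        (( seek += 1024 ))
        tcp_dd --ib=ftln1 --of=$testfile --bs=1048576 --count=1024 --qd=2 --skip=$skip
        (( skip += 1024 ))
        sums[i]=$(md5sum $testfile | cut -f1 -d' ')
    done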
00:25:28.167 [2024-10-15 13:59:41.732785] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78627 ] 00:25:28.167 [2024-10-15 13:59:41.882132] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:28.425 [2024-10-15 13:59:41.964183] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:29.799  [2024-10-15T13:59:44.154Z] Copying: 645/1024 [MB] (645 MBps) [2024-10-15T13:59:45.089Z] Copying: 1024/1024 [MB] (average 642 MBps) 00:25:31.301 00:25:31.301 13:59:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:25:31.301 13:59:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:25:33.203 13:59:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:25:33.203 13:59:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=7b36a50b6e34b177a791aaee6d1cfac1 00:25:33.203 13:59:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:25:33.203 13:59:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:25:33.203 13:59:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:25:33.461 [2024-10-15 13:59:47.045449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:33.461 [2024-10-15 13:59:47.045497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:25:33.461 [2024-10-15 13:59:47.045508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:25:33.461 [2024-10-15 13:59:47.045515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:33.461 [2024-10-15 13:59:47.045534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:33.461 [2024-10-15 13:59:47.045541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:25:33.461 [2024-10-15 13:59:47.045548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:25:33.461 [2024-10-15 13:59:47.045553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:33.461 [2024-10-15 13:59:47.045571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:33.461 [2024-10-15 13:59:47.045577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:25:33.461 [2024-10-15 13:59:47.045584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:25:33.461 [2024-10-15 13:59:47.045589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:33.461 [2024-10-15 13:59:47.045636] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.183 ms, result 0 00:25:33.461 true 00:25:33.461 13:59:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:25:33.720 { 00:25:33.720 "name": "ftl", 00:25:33.720 "properties": [ 00:25:33.720 { 00:25:33.720 "name": "superblock_version", 00:25:33.720 "value": 5, 00:25:33.720 "read-only": true 00:25:33.720 }, 00:25:33.720 { 00:25:33.720 "name": "base_device", 00:25:33.720 "bands": [ 00:25:33.720 { 00:25:33.720 "id": 0, 00:25:33.720 "state": "FREE", 00:25:33.720 "validity": 0.0 
00:25:33.720 }, 00:25:33.720 { 00:25:33.720 "id": 1, 00:25:33.720 "state": "FREE", 00:25:33.720 "validity": 0.0 00:25:33.720 }, 00:25:33.720 { 00:25:33.720 "id": 2, 00:25:33.720 "state": "FREE", 00:25:33.720 "validity": 0.0 00:25:33.720 }, 00:25:33.720 { 00:25:33.720 "id": 3, 00:25:33.720 "state": "FREE", 00:25:33.720 "validity": 0.0 00:25:33.720 }, 00:25:33.720 { 00:25:33.720 "id": 4, 00:25:33.720 "state": "FREE", 00:25:33.720 "validity": 0.0 00:25:33.720 }, 00:25:33.720 { 00:25:33.720 "id": 5, 00:25:33.720 "state": "FREE", 00:25:33.720 "validity": 0.0 00:25:33.720 }, 00:25:33.720 { 00:25:33.720 "id": 6, 00:25:33.720 "state": "FREE", 00:25:33.720 "validity": 0.0 00:25:33.720 }, 00:25:33.720 { 00:25:33.720 "id": 7, 00:25:33.720 "state": "FREE", 00:25:33.720 "validity": 0.0 00:25:33.720 }, 00:25:33.720 { 00:25:33.720 "id": 8, 00:25:33.720 "state": "FREE", 00:25:33.720 "validity": 0.0 00:25:33.720 }, 00:25:33.720 { 00:25:33.720 "id": 9, 00:25:33.720 "state": "FREE", 00:25:33.720 "validity": 0.0 00:25:33.720 }, 00:25:33.720 { 00:25:33.720 "id": 10, 00:25:33.720 "state": "FREE", 00:25:33.720 "validity": 0.0 00:25:33.720 }, 00:25:33.720 { 00:25:33.720 "id": 11, 00:25:33.720 "state": "FREE", 00:25:33.720 "validity": 0.0 00:25:33.720 }, 00:25:33.720 { 00:25:33.720 "id": 12, 00:25:33.720 "state": "FREE", 00:25:33.720 "validity": 0.0 00:25:33.720 }, 00:25:33.720 { 00:25:33.720 "id": 13, 00:25:33.720 "state": "FREE", 00:25:33.720 "validity": 0.0 00:25:33.720 }, 00:25:33.720 { 00:25:33.720 "id": 14, 00:25:33.720 "state": "FREE", 00:25:33.720 "validity": 0.0 00:25:33.720 }, 00:25:33.720 { 00:25:33.720 "id": 15, 00:25:33.720 "state": "FREE", 00:25:33.720 "validity": 0.0 00:25:33.720 }, 00:25:33.720 { 00:25:33.720 "id": 16, 00:25:33.720 "state": "FREE", 00:25:33.720 "validity": 0.0 00:25:33.720 }, 00:25:33.720 { 00:25:33.720 "id": 17, 00:25:33.720 "state": "FREE", 00:25:33.720 "validity": 0.0 00:25:33.720 } 00:25:33.720 ], 00:25:33.720 "read-only": true 00:25:33.720 }, 00:25:33.720 { 00:25:33.720 "name": "cache_device", 00:25:33.720 "type": "bdev", 00:25:33.720 "chunks": [ 00:25:33.720 { 00:25:33.720 "id": 0, 00:25:33.720 "state": "INACTIVE", 00:25:33.720 "utilization": 0.0 00:25:33.720 }, 00:25:33.720 { 00:25:33.720 "id": 1, 00:25:33.720 "state": "CLOSED", 00:25:33.720 "utilization": 1.0 00:25:33.720 }, 00:25:33.720 { 00:25:33.720 "id": 2, 00:25:33.720 "state": "CLOSED", 00:25:33.720 "utilization": 1.0 00:25:33.720 }, 00:25:33.720 { 00:25:33.720 "id": 3, 00:25:33.720 "state": "OPEN", 00:25:33.720 "utilization": 0.001953125 00:25:33.720 }, 00:25:33.720 { 00:25:33.720 "id": 4, 00:25:33.720 "state": "OPEN", 00:25:33.720 "utilization": 0.0 00:25:33.720 } 00:25:33.720 ], 00:25:33.720 "read-only": true 00:25:33.720 }, 00:25:33.720 { 00:25:33.720 "name": "verbose_mode", 00:25:33.720 "value": true, 00:25:33.720 "unit": "", 00:25:33.720 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:25:33.720 }, 00:25:33.720 { 00:25:33.720 "name": "prep_upgrade_on_shutdown", 00:25:33.720 "value": false, 00:25:33.720 "unit": "", 00:25:33.720 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:25:33.720 } 00:25:33.720 ] 00:25:33.720 } 00:25:33.720 13:59:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:25:33.720 [2024-10-15 13:59:47.457799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
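The used=3 value computed just below comes from filtering the properties dump: it counts NV cache chunks with non-zero utilization, matching the chunk table above (two CLOSED chunks at 1.0 plus one OPEN chunk at 0.001953125; INACTIVE and empty chunks are excluded):

    # Count in-use NV cache chunks from the FTL properties (yields 3 in this run).
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl \
      | jq '[.properties[] | select(.name == "cache_device")
             | .chunks[] | select(.utilization != 0.0)] | length'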
00:25:33.720 [2024-10-15 13:59:47.457837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:25:33.720 [2024-10-15 13:59:47.457846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:25:33.720 [2024-10-15 13:59:47.457852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:33.720 [2024-10-15 13:59:47.457869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:33.720 [2024-10-15 13:59:47.457876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:25:33.720 [2024-10-15 13:59:47.457882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:25:33.720 [2024-10-15 13:59:47.457887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:33.720 [2024-10-15 13:59:47.457902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:33.720 [2024-10-15 13:59:47.457909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:25:33.720 [2024-10-15 13:59:47.457915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:25:33.720 [2024-10-15 13:59:47.457920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:33.720 [2024-10-15 13:59:47.457963] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.157 ms, result 0 00:25:33.720 true 00:25:33.720 13:59:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:25:33.720 13:59:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:25:33.720 13:59:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:25:33.979 13:59:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:25:33.979 13:59:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:25:33.979 13:59:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:25:34.237 [2024-10-15 13:59:47.878166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:34.237 [2024-10-15 13:59:47.878208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:25:34.237 [2024-10-15 13:59:47.878217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:25:34.237 [2024-10-15 13:59:47.878243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:34.237 [2024-10-15 13:59:47.878261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:34.237 [2024-10-15 13:59:47.878268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:25:34.237 [2024-10-15 13:59:47.878273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:25:34.237 [2024-10-15 13:59:47.878279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:34.237 [2024-10-15 13:59:47.878294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:34.237 [2024-10-15 13:59:47.878299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:25:34.237 [2024-10-15 13:59:47.878306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:25:34.237 [2024-10-15 13:59:47.878311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:25:34.237 [2024-10-15 13:59:47.878355] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.181 ms, result 0 00:25:34.237 true 00:25:34.237 13:59:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:25:34.495 { 00:25:34.495 "name": "ftl", 00:25:34.495 "properties": [ 00:25:34.495 { 00:25:34.495 "name": "superblock_version", 00:25:34.495 "value": 5, 00:25:34.495 "read-only": true 00:25:34.495 }, 00:25:34.495 { 00:25:34.495 "name": "base_device", 00:25:34.495 "bands": [ 00:25:34.495 { 00:25:34.495 "id": 0, 00:25:34.495 "state": "FREE", 00:25:34.495 "validity": 0.0 00:25:34.495 }, 00:25:34.495 { 00:25:34.495 "id": 1, 00:25:34.495 "state": "FREE", 00:25:34.495 "validity": 0.0 00:25:34.495 }, 00:25:34.495 { 00:25:34.495 "id": 2, 00:25:34.495 "state": "FREE", 00:25:34.495 "validity": 0.0 00:25:34.495 }, 00:25:34.495 { 00:25:34.495 "id": 3, 00:25:34.495 "state": "FREE", 00:25:34.495 "validity": 0.0 00:25:34.495 }, 00:25:34.495 { 00:25:34.495 "id": 4, 00:25:34.495 "state": "FREE", 00:25:34.495 "validity": 0.0 00:25:34.495 }, 00:25:34.495 { 00:25:34.495 "id": 5, 00:25:34.495 "state": "FREE", 00:25:34.495 "validity": 0.0 00:25:34.495 }, 00:25:34.495 { 00:25:34.495 "id": 6, 00:25:34.495 "state": "FREE", 00:25:34.495 "validity": 0.0 00:25:34.495 }, 00:25:34.495 { 00:25:34.495 "id": 7, 00:25:34.495 "state": "FREE", 00:25:34.495 "validity": 0.0 00:25:34.495 }, 00:25:34.495 { 00:25:34.495 "id": 8, 00:25:34.495 "state": "FREE", 00:25:34.495 "validity": 0.0 00:25:34.495 }, 00:25:34.495 { 00:25:34.495 "id": 9, 00:25:34.495 "state": "FREE", 00:25:34.495 "validity": 0.0 00:25:34.495 }, 00:25:34.495 { 00:25:34.495 "id": 10, 00:25:34.495 "state": "FREE", 00:25:34.495 "validity": 0.0 00:25:34.496 }, 00:25:34.496 { 00:25:34.496 "id": 11, 00:25:34.496 "state": "FREE", 00:25:34.496 "validity": 0.0 00:25:34.496 }, 00:25:34.496 { 00:25:34.496 "id": 12, 00:25:34.496 "state": "FREE", 00:25:34.496 "validity": 0.0 00:25:34.496 }, 00:25:34.496 { 00:25:34.496 "id": 13, 00:25:34.496 "state": "FREE", 00:25:34.496 "validity": 0.0 00:25:34.496 }, 00:25:34.496 { 00:25:34.496 "id": 14, 00:25:34.496 "state": "FREE", 00:25:34.496 "validity": 0.0 00:25:34.496 }, 00:25:34.496 { 00:25:34.496 "id": 15, 00:25:34.496 "state": "FREE", 00:25:34.496 "validity": 0.0 00:25:34.496 }, 00:25:34.496 { 00:25:34.496 "id": 16, 00:25:34.496 "state": "FREE", 00:25:34.496 "validity": 0.0 00:25:34.496 }, 00:25:34.496 { 00:25:34.496 "id": 17, 00:25:34.496 "state": "FREE", 00:25:34.496 "validity": 0.0 00:25:34.496 } 00:25:34.496 ], 00:25:34.496 "read-only": true 00:25:34.496 }, 00:25:34.496 { 00:25:34.496 "name": "cache_device", 00:25:34.496 "type": "bdev", 00:25:34.496 "chunks": [ 00:25:34.496 { 00:25:34.496 "id": 0, 00:25:34.496 "state": "INACTIVE", 00:25:34.496 "utilization": 0.0 00:25:34.496 }, 00:25:34.496 { 00:25:34.496 "id": 1, 00:25:34.496 "state": "CLOSED", 00:25:34.496 "utilization": 1.0 00:25:34.496 }, 00:25:34.496 { 00:25:34.496 "id": 2, 00:25:34.496 "state": "CLOSED", 00:25:34.496 "utilization": 1.0 00:25:34.496 }, 00:25:34.496 { 00:25:34.496 "id": 3, 00:25:34.496 "state": "OPEN", 00:25:34.496 "utilization": 0.001953125 00:25:34.496 }, 00:25:34.496 { 00:25:34.496 "id": 4, 00:25:34.496 "state": "OPEN", 00:25:34.496 "utilization": 0.0 00:25:34.496 } 00:25:34.496 ], 00:25:34.496 "read-only": true 00:25:34.496 }, 00:25:34.496 { 00:25:34.496 "name": "verbose_mode", 
00:25:34.496 "value": true, 00:25:34.496 "unit": "", 00:25:34.496 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:25:34.496 }, 00:25:34.496 { 00:25:34.496 "name": "prep_upgrade_on_shutdown", 00:25:34.496 "value": true, 00:25:34.496 "unit": "", 00:25:34.496 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:25:34.496 } 00:25:34.496 ] 00:25:34.496 } 00:25:34.496 13:59:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:25:34.496 13:59:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 78296 ]] 00:25:34.496 13:59:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 78296 00:25:34.496 13:59:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@950 -- # '[' -z 78296 ']' 00:25:34.496 13:59:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # kill -0 78296 00:25:34.496 13:59:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # uname 00:25:34.496 13:59:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:34.496 13:59:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 78296 00:25:34.496 killing process with pid 78296 00:25:34.496 13:59:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:34.496 13:59:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:34.496 13:59:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@968 -- # echo 'killing process with pid 78296' 00:25:34.496 13:59:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@969 -- # kill 78296 00:25:34.496 13:59:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@974 -- # wait 78296 00:25:35.062 [2024-10-15 13:59:48.660288] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:25:35.062 [2024-10-15 13:59:48.671516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:35.062 [2024-10-15 13:59:48.671556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:25:35.062 [2024-10-15 13:59:48.671566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:25:35.062 [2024-10-15 13:59:48.671573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:35.062 [2024-10-15 13:59:48.671591] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:25:35.062 [2024-10-15 13:59:48.673654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:35.062 [2024-10-15 13:59:48.673679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:25:35.062 [2024-10-15 13:59:48.673687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.052 ms 00:25:35.062 [2024-10-15 13:59:48.673694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:43.224 [2024-10-15 13:59:55.947462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:43.224 [2024-10-15 13:59:55.947521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:25:43.224 [2024-10-15 13:59:55.947533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7273.726 ms 00:25:43.224 [2024-10-15 13:59:55.947540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:43.225 [2024-10-15 13:59:55.948488] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl] Action 00:25:43.225 [2024-10-15 13:59:55.948516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:25:43.225 [2024-10-15 13:59:55.948525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.935 ms 00:25:43.225 [2024-10-15 13:59:55.948532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:43.225 [2024-10-15 13:59:55.949429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:43.225 [2024-10-15 13:59:55.949447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:25:43.225 [2024-10-15 13:59:55.949456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.875 ms 00:25:43.225 [2024-10-15 13:59:55.949462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:43.225 [2024-10-15 13:59:55.957524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:43.225 [2024-10-15 13:59:55.957564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:25:43.225 [2024-10-15 13:59:55.957571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.018 ms 00:25:43.225 [2024-10-15 13:59:55.957578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:43.225 [2024-10-15 13:59:55.962972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:43.225 [2024-10-15 13:59:55.963009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:25:43.225 [2024-10-15 13:59:55.963018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.364 ms 00:25:43.225 [2024-10-15 13:59:55.963025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:43.225 [2024-10-15 13:59:55.963086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:43.225 [2024-10-15 13:59:55.963095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:25:43.225 [2024-10-15 13:59:55.963102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.032 ms 00:25:43.225 [2024-10-15 13:59:55.963109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:43.225 [2024-10-15 13:59:55.970502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:43.225 [2024-10-15 13:59:55.970533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:25:43.225 [2024-10-15 13:59:55.970541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.375 ms 00:25:43.225 [2024-10-15 13:59:55.970546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:43.225 [2024-10-15 13:59:55.977728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:43.225 [2024-10-15 13:59:55.977759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:25:43.225 [2024-10-15 13:59:55.977766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.154 ms 00:25:43.225 [2024-10-15 13:59:55.977772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:43.225 [2024-10-15 13:59:55.984814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:43.225 [2024-10-15 13:59:55.984843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:25:43.225 [2024-10-15 13:59:55.984850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.015 ms 00:25:43.225 [2024-10-15 13:59:55.984856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:43.225 [2024-10-15 13:59:55.992120] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:43.225 [2024-10-15 13:59:55.992149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:25:43.225 [2024-10-15 13:59:55.992156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.207 ms 00:25:43.225 [2024-10-15 13:59:55.992162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:43.225 [2024-10-15 13:59:55.992189] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:25:43.225 [2024-10-15 13:59:55.992203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:25:43.225 [2024-10-15 13:59:55.992212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:25:43.225 [2024-10-15 13:59:55.992238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:25:43.225 [2024-10-15 13:59:55.992245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:43.225 [2024-10-15 13:59:55.992251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:43.225 [2024-10-15 13:59:55.992258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:43.225 [2024-10-15 13:59:55.992264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:43.225 [2024-10-15 13:59:55.992270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:43.225 [2024-10-15 13:59:55.992276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:43.225 [2024-10-15 13:59:55.992282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:43.225 [2024-10-15 13:59:55.992289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:43.225 [2024-10-15 13:59:55.992296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:43.225 [2024-10-15 13:59:55.992302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:43.225 [2024-10-15 13:59:55.992308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:43.225 [2024-10-15 13:59:55.992315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:43.225 [2024-10-15 13:59:55.992321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:43.225 [2024-10-15 13:59:55.992327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:43.225 [2024-10-15 13:59:55.992334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:43.225 [2024-10-15 13:59:55.992342] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:25:43.225 [2024-10-15 13:59:55.992349] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: fc31afad-a5b8-407a-8be2-9affb9279993 00:25:43.225 [2024-10-15 13:59:55.992355] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:25:43.225 [2024-10-15 13:59:55.992361] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl] total writes: 786752 00:25:43.225 [2024-10-15 13:59:55.992368] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:25:43.225 [2024-10-15 13:59:55.992375] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:25:43.225 [2024-10-15 13:59:55.992380] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:25:43.225 [2024-10-15 13:59:55.992387] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:25:43.225 [2024-10-15 13:59:55.992393] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:25:43.225 [2024-10-15 13:59:55.992398] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:25:43.225 [2024-10-15 13:59:55.992404] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:25:43.225 [2024-10-15 13:59:55.992410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:43.225 [2024-10-15 13:59:55.992419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:25:43.225 [2024-10-15 13:59:55.992426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.222 ms 00:25:43.225 [2024-10-15 13:59:55.992435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:43.225 [2024-10-15 13:59:56.002232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:43.225 [2024-10-15 13:59:56.002263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:25:43.225 [2024-10-15 13:59:56.002273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.783 ms 00:25:43.225 [2024-10-15 13:59:56.002280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:43.225 [2024-10-15 13:59:56.002567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:43.225 [2024-10-15 13:59:56.002581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:25:43.225 [2024-10-15 13:59:56.002589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.267 ms 00:25:43.225 [2024-10-15 13:59:56.002595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:43.225 [2024-10-15 13:59:56.035895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:43.225 [2024-10-15 13:59:56.035941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:25:43.225 [2024-10-15 13:59:56.035951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:43.225 [2024-10-15 13:59:56.035957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:43.225 [2024-10-15 13:59:56.035998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:43.225 [2024-10-15 13:59:56.036006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:25:43.225 [2024-10-15 13:59:56.036012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:43.225 [2024-10-15 13:59:56.036018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:43.225 [2024-10-15 13:59:56.036096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:43.225 [2024-10-15 13:59:56.036104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:25:43.225 [2024-10-15 13:59:56.036112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:43.225 [2024-10-15 13:59:56.036118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:43.225 [2024-10-15 13:59:56.036132] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:43.225 [2024-10-15 13:59:56.036142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:25:43.225 [2024-10-15 13:59:56.036149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:43.225 [2024-10-15 13:59:56.036155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:43.225 [2024-10-15 13:59:56.097460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:43.225 [2024-10-15 13:59:56.097505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:25:43.225 [2024-10-15 13:59:56.097515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:43.225 [2024-10-15 13:59:56.097522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:43.225 [2024-10-15 13:59:56.146440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:43.225 [2024-10-15 13:59:56.146489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:25:43.225 [2024-10-15 13:59:56.146499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:43.225 [2024-10-15 13:59:56.146506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:43.225 [2024-10-15 13:59:56.146567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:43.225 [2024-10-15 13:59:56.146576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:25:43.225 [2024-10-15 13:59:56.146583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:43.225 [2024-10-15 13:59:56.146589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:43.225 [2024-10-15 13:59:56.146632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:43.225 [2024-10-15 13:59:56.146640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:25:43.225 [2024-10-15 13:59:56.146650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:43.225 [2024-10-15 13:59:56.146656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:43.225 [2024-10-15 13:59:56.146726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:43.225 [2024-10-15 13:59:56.146736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:25:43.225 [2024-10-15 13:59:56.146742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:43.225 [2024-10-15 13:59:56.146748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:43.226 [2024-10-15 13:59:56.146772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:43.226 [2024-10-15 13:59:56.146780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:25:43.226 [2024-10-15 13:59:56.146787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:43.226 [2024-10-15 13:59:56.146795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:43.226 [2024-10-15 13:59:56.146826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:43.226 [2024-10-15 13:59:56.146833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:25:43.226 [2024-10-15 13:59:56.146840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:43.226 [2024-10-15 13:59:56.146846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:43.226 
[2024-10-15 13:59:56.146880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:43.226 [2024-10-15 13:59:56.146895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:25:43.226 [2024-10-15 13:59:56.146905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:43.226 [2024-10-15 13:59:56.146912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:43.226 [2024-10-15 13:59:56.147003] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 7475.447 ms, result 0 00:25:46.506 13:59:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:25:46.506 13:59:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:25:46.506 13:59:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:25:46.506 13:59:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:25:46.506 13:59:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:25:46.506 13:59:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=78805 00:25:46.506 13:59:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:25:46.506 13:59:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:25:46.506 13:59:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 78805 00:25:46.506 13:59:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 78805 ']' 00:25:46.506 13:59:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:46.506 13:59:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:46.506 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:46.506 13:59:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:46.506 13:59:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:46.506 13:59:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:25:46.506 [2024-10-15 13:59:59.723550] Starting SPDK v25.01-pre git sha1 5a8c76d99 / DPDK 24.03.0 initialization... 
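The trace above is the heart of the ftl_upgrade_shutdown test: verbose_mode is switched on so bdev_ftl_get_properties reports per-band and per-chunk state, the number of non-empty NV-cache chunks is counted with jq, prep_upgrade_on_shutdown is armed, and the first target (pid 78296) is killed so FTL's 'FTL shutdown' management process can persist the L2P, NV-cache, band, trim and superblock metadata (result 0, duration 7475.447 ms above). A second spdk_tgt is then started from the saved tgt.json, and the trace that follows shows startup restoring that state (superblock load, NV-cache scrub, L2P/P2L restore, dirty-state set). A condensed, illustrative sketch of the round trip is below; the rpc.py/spdk_tgt paths and the jq filter are taken verbatim from the xtrace output, while the control flow, variable names, and the commented waitforlisten call are assumptions rather than the literal upgrade_shutdown.sh source:

#!/usr/bin/env bash
# Sketch of the upgrade-shutdown round trip traced above (illustrative,
# not the exact test source). Paths follow the xtrace output.
set -euo pipefail

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
spdk_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
tgt_json=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
# $spdk_tgt_pid is assumed captured when the first target was launched
# (78296 in the trace above).

# Expose advanced FTL properties, then count NV-cache chunks in use;
# the jq filter is the one used at upgrade_shutdown.sh@63 above.
"$rpc" bdev_ftl_set_property -b ftl -p verbose_mode -v true
used=$("$rpc" bdev_ftl_get_properties -b ftl \
  | jq '[.properties[] | select(.name == "cache_device")
         | .chunks[] | select(.utilization != 0.0)] | length')
if [[ "$used" -eq 0 ]]; then
  echo "no NV-cache chunks in use before shutdown" >&2
  exit 1   # the test requires resident data before the shutdown step
fi

# Arm the upgrade path, then take the target down; FTL's shutdown
# management process persists L2P, NV cache, and band/trim metadata.
"$rpc" bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true
kill "$spdk_tgt_pid" && wait "$spdk_tgt_pid" || true

# Bring a fresh target up from the saved config; FTL startup then
# reloads the superblock and restores the persisted state.
"$spdk_bin" --cpumask='[0]' --config="$tgt_json" &
spdk_tgt_pid=$!
# waitforlisten "$spdk_tgt_pid"  # autotest helper: block until the RPC socket is up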
00:25:46.506 [2024-10-15 13:59:59.723683] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78805 ] 00:25:46.506 [2024-10-15 13:59:59.873984] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:46.506 [2024-10-15 13:59:59.955852] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:46.764 [2024-10-15 14:00:00.550439] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:25:46.764 [2024-10-15 14:00:00.550493] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:25:47.024 [2024-10-15 14:00:00.694443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:47.024 [2024-10-15 14:00:00.694493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:25:47.024 [2024-10-15 14:00:00.694503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:25:47.024 [2024-10-15 14:00:00.694510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:47.024 [2024-10-15 14:00:00.694557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:47.024 [2024-10-15 14:00:00.694565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:25:47.024 [2024-10-15 14:00:00.694571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.031 ms 00:25:47.024 [2024-10-15 14:00:00.694577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:47.024 [2024-10-15 14:00:00.694595] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:25:47.024 [2024-10-15 14:00:00.695131] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:25:47.024 [2024-10-15 14:00:00.695147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:47.024 [2024-10-15 14:00:00.695154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:25:47.024 [2024-10-15 14:00:00.695160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.559 ms 00:25:47.024 [2024-10-15 14:00:00.695167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:47.024 [2024-10-15 14:00:00.696263] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:25:47.024 [2024-10-15 14:00:00.706105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:47.024 [2024-10-15 14:00:00.706135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:25:47.024 [2024-10-15 14:00:00.706143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.843 ms 00:25:47.024 [2024-10-15 14:00:00.706154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:47.024 [2024-10-15 14:00:00.706206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:47.024 [2024-10-15 14:00:00.706214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:25:47.024 [2024-10-15 14:00:00.706238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:25:47.024 [2024-10-15 14:00:00.706244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:47.024 [2024-10-15 14:00:00.711059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:47.024 [2024-10-15 
14:00:00.711086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:25:47.024 [2024-10-15 14:00:00.711096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.761 ms 00:25:47.024 [2024-10-15 14:00:00.711103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:47.024 [2024-10-15 14:00:00.711154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:47.024 [2024-10-15 14:00:00.711162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:25:47.024 [2024-10-15 14:00:00.711169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.030 ms 00:25:47.024 [2024-10-15 14:00:00.711175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:47.024 [2024-10-15 14:00:00.711216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:47.024 [2024-10-15 14:00:00.711236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:25:47.024 [2024-10-15 14:00:00.711243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:25:47.024 [2024-10-15 14:00:00.711251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:47.024 [2024-10-15 14:00:00.711270] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:25:47.024 [2024-10-15 14:00:00.714079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:47.024 [2024-10-15 14:00:00.714102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:25:47.024 [2024-10-15 14:00:00.714110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.814 ms 00:25:47.024 [2024-10-15 14:00:00.714116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:47.024 [2024-10-15 14:00:00.714143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:47.024 [2024-10-15 14:00:00.714150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:25:47.024 [2024-10-15 14:00:00.714156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:25:47.024 [2024-10-15 14:00:00.714162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:47.024 [2024-10-15 14:00:00.714181] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:25:47.024 [2024-10-15 14:00:00.714195] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:25:47.024 [2024-10-15 14:00:00.714236] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:25:47.024 [2024-10-15 14:00:00.714248] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:25:47.024 [2024-10-15 14:00:00.714330] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:25:47.024 [2024-10-15 14:00:00.714339] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:25:47.024 [2024-10-15 14:00:00.714347] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:25:47.024 [2024-10-15 14:00:00.714354] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:25:47.024 [2024-10-15 14:00:00.714361] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device 
capacity: 5120.00 MiB 00:25:47.024 [2024-10-15 14:00:00.714368] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:25:47.024 [2024-10-15 14:00:00.714376] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:25:47.024 [2024-10-15 14:00:00.714382] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:25:47.024 [2024-10-15 14:00:00.714388] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:25:47.024 [2024-10-15 14:00:00.714395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:47.024 [2024-10-15 14:00:00.714400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:25:47.024 [2024-10-15 14:00:00.714406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.216 ms 00:25:47.024 [2024-10-15 14:00:00.714412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:47.024 [2024-10-15 14:00:00.714480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:47.024 [2024-10-15 14:00:00.714486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:25:47.024 [2024-10-15 14:00:00.714492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.054 ms 00:25:47.024 [2024-10-15 14:00:00.714497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:47.024 [2024-10-15 14:00:00.714577] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:25:47.024 [2024-10-15 14:00:00.714584] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:25:47.024 [2024-10-15 14:00:00.714591] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:25:47.024 [2024-10-15 14:00:00.714597] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:47.024 [2024-10-15 14:00:00.714603] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:25:47.024 [2024-10-15 14:00:00.714608] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:25:47.024 [2024-10-15 14:00:00.714613] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:25:47.024 [2024-10-15 14:00:00.714619] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:25:47.024 [2024-10-15 14:00:00.714625] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:25:47.024 [2024-10-15 14:00:00.714630] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:47.024 [2024-10-15 14:00:00.714636] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:25:47.024 [2024-10-15 14:00:00.714642] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:25:47.024 [2024-10-15 14:00:00.714647] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:47.024 [2024-10-15 14:00:00.714652] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:25:47.024 [2024-10-15 14:00:00.714658] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:25:47.024 [2024-10-15 14:00:00.714664] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:47.024 [2024-10-15 14:00:00.714669] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:25:47.024 [2024-10-15 14:00:00.714674] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:25:47.024 [2024-10-15 14:00:00.714680] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:47.024 [2024-10-15 14:00:00.714685] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:25:47.024 [2024-10-15 14:00:00.714691] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:25:47.024 [2024-10-15 14:00:00.714697] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:25:47.024 [2024-10-15 14:00:00.714702] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:25:47.024 [2024-10-15 14:00:00.714707] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:25:47.024 [2024-10-15 14:00:00.714713] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:25:47.024 [2024-10-15 14:00:00.714723] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:25:47.024 [2024-10-15 14:00:00.714729] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:25:47.024 [2024-10-15 14:00:00.714734] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:25:47.024 [2024-10-15 14:00:00.714739] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:25:47.024 [2024-10-15 14:00:00.714744] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:25:47.024 [2024-10-15 14:00:00.714749] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:25:47.024 [2024-10-15 14:00:00.714755] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:25:47.024 [2024-10-15 14:00:00.714760] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:25:47.024 [2024-10-15 14:00:00.714765] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:47.024 [2024-10-15 14:00:00.714770] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:25:47.024 [2024-10-15 14:00:00.714775] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:25:47.024 [2024-10-15 14:00:00.714780] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:47.024 [2024-10-15 14:00:00.714785] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:25:47.024 [2024-10-15 14:00:00.714790] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:25:47.024 [2024-10-15 14:00:00.714796] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:47.024 [2024-10-15 14:00:00.714800] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:25:47.024 [2024-10-15 14:00:00.714805] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:25:47.024 [2024-10-15 14:00:00.714811] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:47.024 [2024-10-15 14:00:00.714816] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:25:47.025 [2024-10-15 14:00:00.714822] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:25:47.025 [2024-10-15 14:00:00.714828] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:25:47.025 [2024-10-15 14:00:00.714838] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:47.025 [2024-10-15 14:00:00.714844] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:25:47.025 [2024-10-15 14:00:00.714850] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:25:47.025 [2024-10-15 14:00:00.714855] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:25:47.025 [2024-10-15 14:00:00.714860] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:25:47.025 [2024-10-15 14:00:00.714865] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:25:47.025 [2024-10-15 14:00:00.714870] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:25:47.025 [2024-10-15 14:00:00.714877] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:25:47.025 [2024-10-15 14:00:00.714886] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:47.025 [2024-10-15 14:00:00.714892] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:25:47.025 [2024-10-15 14:00:00.714898] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:25:47.025 [2024-10-15 14:00:00.714903] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:25:47.025 [2024-10-15 14:00:00.714909] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:25:47.025 [2024-10-15 14:00:00.714914] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:25:47.025 [2024-10-15 14:00:00.714920] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:25:47.025 [2024-10-15 14:00:00.714925] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:25:47.025 [2024-10-15 14:00:00.714931] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:25:47.025 [2024-10-15 14:00:00.714936] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:25:47.025 [2024-10-15 14:00:00.714942] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:25:47.025 [2024-10-15 14:00:00.714947] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:25:47.025 [2024-10-15 14:00:00.714953] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:25:47.025 [2024-10-15 14:00:00.714958] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:25:47.025 [2024-10-15 14:00:00.714964] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:25:47.025 [2024-10-15 14:00:00.714969] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:25:47.025 [2024-10-15 14:00:00.714976] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:47.025 [2024-10-15 14:00:00.714982] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:47.025 [2024-10-15 14:00:00.714988] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:25:47.025 [2024-10-15 14:00:00.714994] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:25:47.025 [2024-10-15 14:00:00.715000] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:25:47.025 [2024-10-15 14:00:00.715005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:47.025 [2024-10-15 14:00:00.715011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:25:47.025 [2024-10-15 14:00:00.715017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.482 ms 00:25:47.025 [2024-10-15 14:00:00.715025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:47.025 [2024-10-15 14:00:00.715057] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:25:47.025 [2024-10-15 14:00:00.715065] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:25:50.303 [2024-10-15 14:00:03.506840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:50.303 [2024-10-15 14:00:03.506891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:25:50.303 [2024-10-15 14:00:03.506904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2791.772 ms 00:25:50.303 [2024-10-15 14:00:03.506911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:50.303 [2024-10-15 14:00:03.528891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:50.303 [2024-10-15 14:00:03.528937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:25:50.303 [2024-10-15 14:00:03.528949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 21.785 ms 00:25:50.303 [2024-10-15 14:00:03.528955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:50.303 [2024-10-15 14:00:03.529048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:50.303 [2024-10-15 14:00:03.529057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:25:50.303 [2024-10-15 14:00:03.529069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:25:50.303 [2024-10-15 14:00:03.529075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:50.303 [2024-10-15 14:00:03.554146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:50.303 [2024-10-15 14:00:03.554191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:25:50.303 [2024-10-15 14:00:03.554200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.038 ms 00:25:50.303 [2024-10-15 14:00:03.554207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:50.303 [2024-10-15 14:00:03.554256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:50.304 [2024-10-15 14:00:03.554263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:25:50.304 [2024-10-15 14:00:03.554270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:25:50.304 [2024-10-15 14:00:03.554276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:50.304 [2024-10-15 14:00:03.554649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:50.304 [2024-10-15 14:00:03.554671] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:25:50.304 [2024-10-15 14:00:03.554678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.317 ms 00:25:50.304 [2024-10-15 14:00:03.554684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:50.304 [2024-10-15 14:00:03.554725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:50.304 [2024-10-15 14:00:03.554735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:25:50.304 [2024-10-15 14:00:03.554742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:25:50.304 [2024-10-15 14:00:03.554748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:50.304 [2024-10-15 14:00:03.567007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:50.304 [2024-10-15 14:00:03.567052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:25:50.304 [2024-10-15 14:00:03.567062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.236 ms 00:25:50.304 [2024-10-15 14:00:03.567069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:50.304 [2024-10-15 14:00:03.577416] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:25:50.304 [2024-10-15 14:00:03.577460] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:25:50.304 [2024-10-15 14:00:03.577471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:50.304 [2024-10-15 14:00:03.577478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:25:50.304 [2024-10-15 14:00:03.577487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.288 ms 00:25:50.304 [2024-10-15 14:00:03.577493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:50.304 [2024-10-15 14:00:03.588661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:50.304 [2024-10-15 14:00:03.588707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:25:50.304 [2024-10-15 14:00:03.588718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.117 ms 00:25:50.304 [2024-10-15 14:00:03.588725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:50.304 [2024-10-15 14:00:03.598086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:50.304 [2024-10-15 14:00:03.598125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:25:50.304 [2024-10-15 14:00:03.598134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.302 ms 00:25:50.304 [2024-10-15 14:00:03.598140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:50.304 [2024-10-15 14:00:03.607429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:50.304 [2024-10-15 14:00:03.607468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:25:50.304 [2024-10-15 14:00:03.607477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.244 ms 00:25:50.304 [2024-10-15 14:00:03.607483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:50.304 [2024-10-15 14:00:03.608021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:50.304 [2024-10-15 14:00:03.608041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:25:50.304 [2024-10-15 
14:00:03.608051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.447 ms 00:25:50.304 [2024-10-15 14:00:03.608057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:50.304 [2024-10-15 14:00:03.672206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:50.304 [2024-10-15 14:00:03.672263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:25:50.304 [2024-10-15 14:00:03.672281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 64.129 ms 00:25:50.304 [2024-10-15 14:00:03.672288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:50.304 [2024-10-15 14:00:03.681122] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:25:50.304 [2024-10-15 14:00:03.681946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:50.304 [2024-10-15 14:00:03.681972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:25:50.304 [2024-10-15 14:00:03.681982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.589 ms 00:25:50.304 [2024-10-15 14:00:03.681988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:50.304 [2024-10-15 14:00:03.682075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:50.304 [2024-10-15 14:00:03.682084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:25:50.304 [2024-10-15 14:00:03.682093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:25:50.304 [2024-10-15 14:00:03.682099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:50.304 [2024-10-15 14:00:03.682138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:50.304 [2024-10-15 14:00:03.682146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:25:50.304 [2024-10-15 14:00:03.682153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:25:50.304 [2024-10-15 14:00:03.682159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:50.304 [2024-10-15 14:00:03.682176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:50.304 [2024-10-15 14:00:03.682183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:25:50.304 [2024-10-15 14:00:03.682189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:25:50.304 [2024-10-15 14:00:03.682195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:50.304 [2024-10-15 14:00:03.682236] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:25:50.304 [2024-10-15 14:00:03.682245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:50.304 [2024-10-15 14:00:03.682250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:25:50.304 [2024-10-15 14:00:03.682257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:25:50.304 [2024-10-15 14:00:03.682263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:50.304 [2024-10-15 14:00:03.700812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:50.304 [2024-10-15 14:00:03.700849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:25:50.304 [2024-10-15 14:00:03.700865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.533 ms 00:25:50.304 [2024-10-15 14:00:03.700876] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:25:50.304 [2024-10-15 14:00:03.700948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:25:50.304 [2024-10-15 14:00:03.700956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization
00:25:50.304 [2024-10-15 14:00:03.700963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.030 ms
00:25:50.304 [2024-10-15 14:00:03.700969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:25:50.304 [2024-10-15 14:00:03.701826] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3007.014 ms, result 0
00:25:50.304 [2024-10-15 14:00:03.717262] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:25:50.304 [2024-10-15 14:00:03.733287] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000
00:25:50.304 [2024-10-15 14:00:03.742347] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 ***
00:25:50.304 14:00:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:25:50.304 14:00:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0
00:25:50.304 14:00:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]]
00:25:50.304 14:00:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0
00:25:50.304 14:00:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true
00:25:50.562 [2024-10-15 14:00:04.126625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:25:50.562 [2024-10-15 14:00:04.126673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property
00:25:50.562 [2024-10-15 14:00:04.126686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms
00:25:50.562 [2024-10-15 14:00:04.126694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:25:50.562 [2024-10-15 14:00:04.126720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:25:50.562 [2024-10-15 14:00:04.126728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property
00:25:50.562 [2024-10-15 14:00:04.126738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms
00:25:50.562 [2024-10-15 14:00:04.126745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:25:50.562 [2024-10-15 14:00:04.126764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:25:50.562 [2024-10-15 14:00:04.126773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup
00:25:50.562 [2024-10-15 14:00:04.126780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms
00:25:50.562 [2024-10-15 14:00:04.126788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:25:50.562 [2024-10-15 14:00:04.126846] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.216 ms, result 0
00:25:50.562 true
00:25:50.562 14:00:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl
{
  "name": "ftl",
  "properties": [
    { "name": "superblock_version", "value": 5, "read-only": true },
    {
      "name": "base_device",
      "bands": [
        { "id": 0, "state": "CLOSED", "validity": 1.0 },
        { "id": 1, "state": "CLOSED", "validity": 1.0 },
        { "id": 2, "state": "CLOSED", "validity": 0.007843137254901933 },
        { "id": 3, "state": "FREE", "validity": 0.0 },
        { "id": 4, "state": "FREE", "validity": 0.0 },
        { "id": 5, "state": "FREE", "validity": 0.0 },
        { "id": 6, "state": "FREE", "validity": 0.0 },
        { "id": 7, "state": "FREE", "validity": 0.0 },
        { "id": 8, "state": "FREE", "validity": 0.0 },
        { "id": 9, "state": "FREE", "validity": 0.0 },
        { "id": 10, "state": "FREE", "validity": 0.0 },
        { "id": 11, "state": "FREE", "validity": 0.0 },
        { "id": 12, "state": "FREE", "validity": 0.0 },
        { "id": 13, "state": "FREE", "validity": 0.0 },
        { "id": 14, "state": "FREE", "validity": 0.0 },
        { "id": 15, "state": "FREE", "validity": 0.0 },
        { "id": 16, "state": "FREE", "validity": 0.0 },
        { "id": 17, "state": "FREE", "validity": 0.0 }
      ],
      "read-only": true
    },
    {
      "name": "cache_device",
      "type": "bdev",
      "chunks": [
        { "id": 0, "state": "INACTIVE", "utilization": 0.0 },
        { "id": 1, "state": "OPEN", "utilization": 0.0 },
        { "id": 2, "state": "OPEN", "utilization": 0.0 },
        { "id": 3, "state": "FREE", "utilization": 0.0 },
        { "id": 4, "state": "FREE", "utilization": 0.0 }
      ],
      "read-only": true
    },
    { "name": "verbose_mode", "value": true, "unit": "", "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" },
    { "name": "prep_upgrade_on_shutdown", "value": false, "unit": "", "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" }
  ]
}
00:25:50.821 14:00:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties
00:25:50.821 14:00:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length'
00:25:50.821 14:00:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl
00:25:50.821 14:00:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0
00:25:50.821 14:00:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]]
00:25:50.821 14:00:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties
00:25:50.821 14:00:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length'
00:25:50.821 14:00:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl
00:25:51.080 14:00:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0
00:25:51.080 14:00:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]]
00:25:51.080 14:00:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum
00:25:51.080 14:00:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0
00:25:51.080 14:00:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 ))
00:25:51.080 14:00:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations ))
Validate MD5 checksum, iteration 1
00:25:51.080 14:00:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1'
00:25:51.080 14:00:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0
00:25:51.080 14:00:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup
00:25:51.080 14:00:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock'
00:25:51.080 14:00:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]]
00:25:51.080 14:00:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0
00:25:51.080 14:00:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0
00:25:51.080 [2024-10-15 14:00:04.803438] Starting SPDK v25.01-pre git sha1 5a8c76d99 / DPDK 24.03.0 initialization...
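The used/opened checks traced above gate the rest of the test: it proceeds only once no cache chunk has non-zero utilization and no band is in the OPENED state. A minimal stand-alone sketch of the first check, reusing the exact rpc.py and jq invocations from the trace (the `used` variable name is the harness's own; the exit handling here is only illustrative):

    # Count cache chunks that still hold data; 0 means the write buffer is clean.
    used=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl \
        | jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length')
    [[ $used -ne 0 ]] && exit 1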
00:25:51.080 [2024-10-15 14:00:04.803552] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78874 ]
00:25:51.338 [2024-10-15 14:00:04.951280] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:51.338 [2024-10-15 14:00:05.036538] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:25:52.712  [2024-10-15T14:00:07.067Z] Copying: 676/1024 [MB] (676 MBps) [2024-10-15T14:00:08.001Z] Copying: 1024/1024 [MB] (average 671 MBps)
00:25:54.213
00:25:54.213 14:00:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024
00:25:54.213 14:00:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file
00:25:56.770 14:00:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d '
00:25:56.770 14:00:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=dfa2aee09e5d74b663e6e69eecf50034
00:25:56.770 14:00:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ dfa2aee09e5d74b663e6e69eecf50034 != \d\f\a\2\a\e\e\0\9\e\5\d\7\4\b\6\6\3\e\6\e\6\9\e\e\c\f\5\0\0\3\4 ]]
00:25:56.770 14:00:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ ))
00:25:56.770 14:00:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations ))
Validate MD5 checksum, iteration 2
00:25:56.770 14:00:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2'
00:25:56.770 14:00:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024
00:25:56.770 14:00:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup
00:25:56.770 14:00:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock'
00:25:56.770 14:00:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]]
00:25:56.770 14:00:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0
00:25:56.770 14:00:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024
00:25:56.770 [2024-10-15 14:00:10.194942] Starting SPDK v25.01-pre git sha1 5a8c76d99 / DPDK 24.03.0 initialization...
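Each iteration reads the next 1024 MiB window of ftln1 over NVMe/TCP into a scratch file, hashes it, and compares the result against the sum recorded before shutdown; the backslash-escaped right-hand side in the `[[ ... != \d\f\a\2... ]]` trace is just bash xtrace's rendering of a quoted, glob-free comparison. A sketch of one iteration under those assumptions (`expected_md5` is a stand-in name, not a harness variable):

    sum=$(md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file | cut -f1 -d' ')
    [[ $sum != "$expected_md5" ]] && exit 1   # any mismatch fails the test
    skip=$((skip + 1024))                     # advance to the next 1 GiB window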
00:25:56.770 [2024-10-15 14:00:10.195042] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78936 ]
00:25:56.770 [2024-10-15 14:00:10.340651] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:56.770 [2024-10-15 14:00:10.443491] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:25:58.671  [2024-10-15T14:00:12.459Z] Copying: 682/1024 [MB] (682 MBps) [2024-10-15T14:00:16.641Z] Copying: 1024/1024 [MB] (average 682 MBps)
00:26:02.853
00:26:02.853 14:00:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048
00:26:02.853 14:00:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file
00:26:05.426 14:00:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d '
00:26:05.426 14:00:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=7b36a50b6e34b177a791aaee6d1cfac1
00:26:05.426 14:00:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 7b36a50b6e34b177a791aaee6d1cfac1 != \7\b\3\6\a\5\0\b\6\e\3\4\b\1\7\7\a\7\9\1\a\a\e\e\6\d\1\c\f\a\c\1 ]]
00:26:05.426 14:00:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ ))
00:26:05.426 14:00:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations ))
00:26:05.426 14:00:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty
00:26:05.426 14:00:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 78805 ]]
00:26:05.426 14:00:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 78805
00:26:05.426 14:00:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid
00:26:05.426 14:00:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup
00:26:05.426 14:00:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev=
00:26:05.426 14:00:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev=
00:26:05.426 14:00:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]]
00:26:05.426 14:00:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=79031
00:26:05.426 14:00:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid
00:26:05.426 14:00:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 79031
00:26:05.426 14:00:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 79031 ']'
00:26:05.426 14:00:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:26:05.426 14:00:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100
00:26:05.426 14:00:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
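tcp_target_shutdown_dirty is the crux of the test: the target is killed with SIGKILL, so none of FTL's clean-shutdown persistence runs, and the restart from the saved tgt.json forces the recovery path exercised in the startup traces that follow. A sketch of that sequence using only commands visible in the trace (`waitforlisten` is the harness helper that polls /var/tmp/spdk.sock; the backgrounding shown here is an assumption about how the launch is driven):

    kill -9 "$spdk_tgt_pid"    # dirty shutdown: no clean FTL state is written
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' \
        --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json &
    spdk_tgt_pid=$!
    waitforlisten "$spdk_tgt_pid"   # returns once the RPC socket is up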
00:26:05.426 14:00:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:05.426 14:00:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:05.426 14:00:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:05.426 [2024-10-15 14:00:18.703669] Starting SPDK v25.01-pre git sha1 5a8c76d99 / DPDK 24.03.0 initialization... 00:26:05.426 [2024-10-15 14:00:18.703815] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79031 ] 00:26:05.426 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 830: 78805 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:26:05.426 [2024-10-15 14:00:18.849777] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:05.426 [2024-10-15 14:00:18.934245] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:05.994 [2024-10-15 14:00:19.512655] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:26:05.994 [2024-10-15 14:00:19.512712] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:26:05.994 [2024-10-15 14:00:19.655896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:05.994 [2024-10-15 14:00:19.655949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:26:05.994 [2024-10-15 14:00:19.655960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:26:05.994 [2024-10-15 14:00:19.655967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:05.994 [2024-10-15 14:00:19.656010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:05.994 [2024-10-15 14:00:19.656018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:26:05.994 [2024-10-15 14:00:19.656024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.028 ms 00:26:05.994 [2024-10-15 14:00:19.656030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:05.994 [2024-10-15 14:00:19.656049] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:26:05.994 [2024-10-15 14:00:19.656599] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:26:05.994 [2024-10-15 14:00:19.656618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:05.994 [2024-10-15 14:00:19.656624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:26:05.994 [2024-10-15 14:00:19.656631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.576 ms 00:26:05.994 [2024-10-15 14:00:19.656637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:05.994 [2024-10-15 14:00:19.656911] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:26:05.994 [2024-10-15 14:00:19.669538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:05.994 [2024-10-15 14:00:19.669582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:26:05.994 [2024-10-15 14:00:19.669593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.626 ms 
00:26:05.994 [2024-10-15 14:00:19.669600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:05.994 [2024-10-15 14:00:19.676658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:05.994 [2024-10-15 14:00:19.676693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:26:05.994 [2024-10-15 14:00:19.676702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.028 ms 00:26:05.994 [2024-10-15 14:00:19.676710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:05.994 [2024-10-15 14:00:19.676975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:05.994 [2024-10-15 14:00:19.676990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:26:05.994 [2024-10-15 14:00:19.676998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.191 ms 00:26:05.994 [2024-10-15 14:00:19.677004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:05.994 [2024-10-15 14:00:19.677044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:05.994 [2024-10-15 14:00:19.677051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:26:05.994 [2024-10-15 14:00:19.677060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.028 ms 00:26:05.994 [2024-10-15 14:00:19.677066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:05.994 [2024-10-15 14:00:19.677089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:05.994 [2024-10-15 14:00:19.677096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:26:05.994 [2024-10-15 14:00:19.677102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:26:05.994 [2024-10-15 14:00:19.677108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:05.994 [2024-10-15 14:00:19.677126] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:26:05.994 [2024-10-15 14:00:19.679593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:05.994 [2024-10-15 14:00:19.679618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:26:05.994 [2024-10-15 14:00:19.679625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.472 ms 00:26:05.994 [2024-10-15 14:00:19.679631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:05.994 [2024-10-15 14:00:19.679657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:05.994 [2024-10-15 14:00:19.679665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:26:05.994 [2024-10-15 14:00:19.679671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:26:05.994 [2024-10-15 14:00:19.679677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:05.994 [2024-10-15 14:00:19.679709] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:26:05.994 [2024-10-15 14:00:19.679726] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:26:05.994 [2024-10-15 14:00:19.679754] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:26:05.994 [2024-10-15 14:00:19.679765] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:26:05.994 [2024-10-15 
14:00:19.679846] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:26:05.994 [2024-10-15 14:00:19.679853] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:26:05.994 [2024-10-15 14:00:19.679861] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:26:05.994 [2024-10-15 14:00:19.679869] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:26:05.994 [2024-10-15 14:00:19.679876] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:26:05.994 [2024-10-15 14:00:19.679882] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:26:05.994 [2024-10-15 14:00:19.679888] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:26:05.994 [2024-10-15 14:00:19.679893] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:26:05.994 [2024-10-15 14:00:19.679899] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:26:05.994 [2024-10-15 14:00:19.679905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:05.995 [2024-10-15 14:00:19.679910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:26:05.995 [2024-10-15 14:00:19.679918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.198 ms 00:26:05.995 [2024-10-15 14:00:19.679923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:05.995 [2024-10-15 14:00:19.679988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:05.995 [2024-10-15 14:00:19.679994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:26:05.995 [2024-10-15 14:00:19.680000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.052 ms 00:26:05.995 [2024-10-15 14:00:19.680005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:05.995 [2024-10-15 14:00:19.680081] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:26:05.995 [2024-10-15 14:00:19.680094] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:26:05.995 [2024-10-15 14:00:19.680102] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:26:05.995 [2024-10-15 14:00:19.680110] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:05.995 [2024-10-15 14:00:19.680116] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:26:05.995 [2024-10-15 14:00:19.680122] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:26:05.995 [2024-10-15 14:00:19.680127] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:26:05.995 [2024-10-15 14:00:19.680132] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:26:05.995 [2024-10-15 14:00:19.680137] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:26:05.995 [2024-10-15 14:00:19.680143] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:05.995 [2024-10-15 14:00:19.680150] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:26:05.995 [2024-10-15 14:00:19.680156] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:26:05.995 [2024-10-15 14:00:19.680161] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:05.995 [2024-10-15 
14:00:19.680166] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:26:05.995 [2024-10-15 14:00:19.680171] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:26:05.995 [2024-10-15 14:00:19.680177] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:05.995 [2024-10-15 14:00:19.680182] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:26:05.995 [2024-10-15 14:00:19.680187] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:26:05.995 [2024-10-15 14:00:19.680192] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:05.995 [2024-10-15 14:00:19.680197] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:26:05.995 [2024-10-15 14:00:19.680202] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:26:05.995 [2024-10-15 14:00:19.680207] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:26:05.995 [2024-10-15 14:00:19.680212] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:26:05.995 [2024-10-15 14:00:19.680231] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:26:05.995 [2024-10-15 14:00:19.680236] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:26:05.995 [2024-10-15 14:00:19.680241] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:26:05.995 [2024-10-15 14:00:19.680247] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:26:05.995 [2024-10-15 14:00:19.680253] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:26:05.995 [2024-10-15 14:00:19.680258] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:26:05.995 [2024-10-15 14:00:19.680263] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:26:05.995 [2024-10-15 14:00:19.680268] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:26:05.995 [2024-10-15 14:00:19.680273] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:26:05.995 [2024-10-15 14:00:19.680278] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:26:05.995 [2024-10-15 14:00:19.680284] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:05.995 [2024-10-15 14:00:19.680289] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:26:05.995 [2024-10-15 14:00:19.680294] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:26:05.995 [2024-10-15 14:00:19.680299] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:05.995 [2024-10-15 14:00:19.680304] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:26:05.995 [2024-10-15 14:00:19.680309] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:26:05.995 [2024-10-15 14:00:19.680314] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:05.995 [2024-10-15 14:00:19.680319] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:26:05.995 [2024-10-15 14:00:19.680324] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:26:05.995 [2024-10-15 14:00:19.680329] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:05.995 [2024-10-15 14:00:19.680334] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:26:05.995 [2024-10-15 14:00:19.680341] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:26:05.995 
[2024-10-15 14:00:19.680347] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:26:05.995 [2024-10-15 14:00:19.680353] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:05.995 [2024-10-15 14:00:19.680359] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:26:05.995 [2024-10-15 14:00:19.680364] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:26:05.995 [2024-10-15 14:00:19.680370] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:26:05.995 [2024-10-15 14:00:19.680375] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:26:05.995 [2024-10-15 14:00:19.680380] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:26:05.995 [2024-10-15 14:00:19.680385] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:26:05.995 [2024-10-15 14:00:19.680391] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:26:05.995 [2024-10-15 14:00:19.680398] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:05.995 [2024-10-15 14:00:19.680405] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:26:05.995 [2024-10-15 14:00:19.680410] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:26:05.995 [2024-10-15 14:00:19.680415] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:26:05.995 [2024-10-15 14:00:19.680421] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:26:05.995 [2024-10-15 14:00:19.680426] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:26:05.995 [2024-10-15 14:00:19.680432] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:26:05.995 [2024-10-15 14:00:19.680437] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:26:05.995 [2024-10-15 14:00:19.680442] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:26:05.995 [2024-10-15 14:00:19.680447] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:26:05.995 [2024-10-15 14:00:19.680453] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:26:05.995 [2024-10-15 14:00:19.680459] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:26:05.995 [2024-10-15 14:00:19.680464] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:26:05.995 [2024-10-15 14:00:19.680469] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:26:05.995 [2024-10-15 14:00:19.680475] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] 
Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:26:05.995 [2024-10-15 14:00:19.680480] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:26:05.995 [2024-10-15 14:00:19.680486] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:05.995 [2024-10-15 14:00:19.680492] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:05.995 [2024-10-15 14:00:19.680497] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:26:05.995 [2024-10-15 14:00:19.680503] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:26:05.995 [2024-10-15 14:00:19.680509] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:26:05.995 [2024-10-15 14:00:19.680515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:05.995 [2024-10-15 14:00:19.680520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:26:05.995 [2024-10-15 14:00:19.680528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.486 ms 00:26:05.995 [2024-10-15 14:00:19.680534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:05.995 [2024-10-15 14:00:19.700249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:05.995 [2024-10-15 14:00:19.700282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:26:05.995 [2024-10-15 14:00:19.700291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 19.674 ms 00:26:05.995 [2024-10-15 14:00:19.700297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:05.995 [2024-10-15 14:00:19.700335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:05.995 [2024-10-15 14:00:19.700341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:26:05.995 [2024-10-15 14:00:19.700348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:26:05.995 [2024-10-15 14:00:19.700354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:05.995 [2024-10-15 14:00:19.724331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:05.995 [2024-10-15 14:00:19.724374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:26:05.995 [2024-10-15 14:00:19.724383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 23.925 ms 00:26:05.995 [2024-10-15 14:00:19.724389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:05.995 [2024-10-15 14:00:19.724421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:05.995 [2024-10-15 14:00:19.724427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:26:05.995 [2024-10-15 14:00:19.724434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:26:05.995 [2024-10-15 14:00:19.724440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:05.995 [2024-10-15 14:00:19.724528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:05.995 [2024-10-15 14:00:19.724536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 
00:26:05.995 [2024-10-15 14:00:19.724543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.038 ms 00:26:05.996 [2024-10-15 14:00:19.724548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:05.996 [2024-10-15 14:00:19.724578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:05.996 [2024-10-15 14:00:19.724584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:26:05.996 [2024-10-15 14:00:19.724590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:26:05.996 [2024-10-15 14:00:19.724596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:05.996 [2024-10-15 14:00:19.736104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:05.996 [2024-10-15 14:00:19.736138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:26:05.996 [2024-10-15 14:00:19.736146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.491 ms 00:26:05.996 [2024-10-15 14:00:19.736154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:05.996 [2024-10-15 14:00:19.736258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:05.996 [2024-10-15 14:00:19.736268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:26:05.996 [2024-10-15 14:00:19.736275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:26:05.996 [2024-10-15 14:00:19.736280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:05.996 [2024-10-15 14:00:19.759826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:05.996 [2024-10-15 14:00:19.759884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:26:05.996 [2024-10-15 14:00:19.759902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 23.528 ms 00:26:05.996 [2024-10-15 14:00:19.759913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:05.996 [2024-10-15 14:00:19.769043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:05.996 [2024-10-15 14:00:19.769076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:26:05.996 [2024-10-15 14:00:19.769084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.401 ms 00:26:05.996 [2024-10-15 14:00:19.769096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:06.254 [2024-10-15 14:00:19.813530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:06.254 [2024-10-15 14:00:19.813582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:26:06.254 [2024-10-15 14:00:19.813594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 44.379 ms 00:26:06.254 [2024-10-15 14:00:19.813604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:06.254 [2024-10-15 14:00:19.813734] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:26:06.254 [2024-10-15 14:00:19.813814] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:26:06.254 [2024-10-15 14:00:19.813887] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:26:06.254 [2024-10-15 14:00:19.813961] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:26:06.254 [2024-10-15 14:00:19.813970] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:06.254 [2024-10-15 14:00:19.813976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:26:06.254 [2024-10-15 14:00:19.813983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.308 ms 00:26:06.254 [2024-10-15 14:00:19.813989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:06.254 [2024-10-15 14:00:19.814046] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:26:06.254 [2024-10-15 14:00:19.814055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:06.254 [2024-10-15 14:00:19.814061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:26:06.254 [2024-10-15 14:00:19.814067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:26:06.254 [2024-10-15 14:00:19.814076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:06.254 [2024-10-15 14:00:19.825659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:06.254 [2024-10-15 14:00:19.825696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:26:06.254 [2024-10-15 14:00:19.825708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.566 ms 00:26:06.254 [2024-10-15 14:00:19.825715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:06.254 [2024-10-15 14:00:19.832191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:06.254 [2024-10-15 14:00:19.832226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:26:06.254 [2024-10-15 14:00:19.832235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:26:06.254 [2024-10-15 14:00:19.832240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:06.254 [2024-10-15 14:00:19.832315] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:26:06.254 [2024-10-15 14:00:19.832432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:06.254 [2024-10-15 14:00:19.832447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:26:06.254 [2024-10-15 14:00:19.832457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.118 ms 00:26:06.254 [2024-10-15 14:00:19.832463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:06.513 [2024-10-15 14:00:20.218874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:06.513 [2024-10-15 14:00:20.218930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:26:06.513 [2024-10-15 14:00:20.218941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 385.788 ms 00:26:06.513 [2024-10-15 14:00:20.218948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:06.513 [2024-10-15 14:00:20.222380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:06.513 [2024-10-15 14:00:20.222413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:26:06.513 [2024-10-15 14:00:20.222422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.957 ms 00:26:06.513 [2024-10-15 14:00:20.222428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:06.513 [2024-10-15 14:00:20.222769] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered 
chunk, offset = 262144, seq id 14 00:26:06.513 [2024-10-15 14:00:20.222801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:06.513 [2024-10-15 14:00:20.222808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:26:06.513 [2024-10-15 14:00:20.222815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.353 ms 00:26:06.513 [2024-10-15 14:00:20.222821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:06.513 [2024-10-15 14:00:20.222847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:06.513 [2024-10-15 14:00:20.222855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:26:06.513 [2024-10-15 14:00:20.222861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:26:06.513 [2024-10-15 14:00:20.222867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:06.514 [2024-10-15 14:00:20.222896] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 390.580 ms, result 0 00:26:06.514 [2024-10-15 14:00:20.222926] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:26:06.514 [2024-10-15 14:00:20.223059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:06.514 [2024-10-15 14:00:20.223074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:26:06.514 [2024-10-15 14:00:20.223081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.134 ms 00:26:06.514 [2024-10-15 14:00:20.223087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:07.080 [2024-10-15 14:00:20.648410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:07.080 [2024-10-15 14:00:20.648477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:26:07.080 [2024-10-15 14:00:20.648491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 424.525 ms 00:26:07.080 [2024-10-15 14:00:20.648499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:07.080 [2024-10-15 14:00:20.652479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:07.080 [2024-10-15 14:00:20.652514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:26:07.080 [2024-10-15 14:00:20.652524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.048 ms 00:26:07.080 [2024-10-15 14:00:20.652531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:07.080 [2024-10-15 14:00:20.652842] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:26:07.080 [2024-10-15 14:00:20.652871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:07.080 [2024-10-15 14:00:20.652879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:26:07.080 [2024-10-15 14:00:20.652887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.313 ms 00:26:07.080 [2024-10-15 14:00:20.652894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:07.080 [2024-10-15 14:00:20.652921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:07.080 [2024-10-15 14:00:20.652929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:26:07.080 [2024-10-15 14:00:20.652937] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:26:07.080 [2024-10-15 14:00:20.652944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:07.080 [2024-10-15 14:00:20.652977] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 430.043 ms, result 0 00:26:07.080 [2024-10-15 14:00:20.653015] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:26:07.080 [2024-10-15 14:00:20.653025] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:26:07.081 [2024-10-15 14:00:20.653034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:07.081 [2024-10-15 14:00:20.653042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:26:07.081 [2024-10-15 14:00:20.653049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 820.736 ms 00:26:07.081 [2024-10-15 14:00:20.653056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:07.081 [2024-10-15 14:00:20.653085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:07.081 [2024-10-15 14:00:20.653093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:26:07.081 [2024-10-15 14:00:20.653101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:26:07.081 [2024-10-15 14:00:20.653111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:07.081 [2024-10-15 14:00:20.663615] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:26:07.081 [2024-10-15 14:00:20.663735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:07.081 [2024-10-15 14:00:20.663745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:26:07.081 [2024-10-15 14:00:20.663754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.608 ms 00:26:07.081 [2024-10-15 14:00:20.663761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:07.081 [2024-10-15 14:00:20.664433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:07.081 [2024-10-15 14:00:20.664457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:26:07.081 [2024-10-15 14:00:20.664467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.609 ms 00:26:07.081 [2024-10-15 14:00:20.664476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:07.081 [2024-10-15 14:00:20.666707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:07.081 [2024-10-15 14:00:20.666728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:26:07.081 [2024-10-15 14:00:20.666738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.213 ms 00:26:07.081 [2024-10-15 14:00:20.666746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:07.081 [2024-10-15 14:00:20.666784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:07.081 [2024-10-15 14:00:20.666793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:26:07.081 [2024-10-15 14:00:20.666801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:26:07.081 [2024-10-15 14:00:20.666808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:07.081 [2024-10-15 14:00:20.666913] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:07.081 [2024-10-15 14:00:20.666922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:26:07.081 [2024-10-15 14:00:20.666930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:26:07.081 [2024-10-15 14:00:20.666937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:07.081 [2024-10-15 14:00:20.666957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:07.081 [2024-10-15 14:00:20.666964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:26:07.081 [2024-10-15 14:00:20.666972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:26:07.081 [2024-10-15 14:00:20.666979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:07.081 [2024-10-15 14:00:20.667005] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:26:07.081 [2024-10-15 14:00:20.667014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:07.081 [2024-10-15 14:00:20.667023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:26:07.081 [2024-10-15 14:00:20.667030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:26:07.081 [2024-10-15 14:00:20.667037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:07.081 [2024-10-15 14:00:20.667088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:07.081 [2024-10-15 14:00:20.667096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:26:07.081 [2024-10-15 14:00:20.667104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.032 ms 00:26:07.081 [2024-10-15 14:00:20.667110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:07.081 [2024-10-15 14:00:20.668060] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1011.739 ms, result 0 00:26:07.081 [2024-10-15 14:00:20.680416] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:07.081 [2024-10-15 14:00:20.696421] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:26:07.081 [2024-10-15 14:00:20.704534] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:26:07.647 Validate MD5 checksum, iteration 1 00:26:07.647 14:00:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:07.647 14:00:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:26:07.647 14:00:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:26:07.647 14:00:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:26:07.647 14:00:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:26:07.647 14:00:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:26:07.647 14:00:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:26:07.647 14:00:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:26:07.647 14:00:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:26:07.647 14:00:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:26:07.647 14:00:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:26:07.647 14:00:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:26:07.647 14:00:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:26:07.647 14:00:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:26:07.647 14:00:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:26:07.647 [2024-10-15 14:00:21.306927] Starting SPDK v25.01-pre git sha1 5a8c76d99 / DPDK 24.03.0 initialization... 00:26:07.647 [2024-10-15 14:00:21.307044] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79065 ] 00:26:07.906 [2024-10-15 14:00:21.456956] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:07.906 [2024-10-15 14:00:21.557663] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:09.281  [2024-10-15T14:00:23.635Z] Copying: 671/1024 [MB] (671 MBps) [2024-10-15T14:00:24.569Z] Copying: 1024/1024 [MB] (average 674 MBps) 00:26:10.781 00:26:10.781 14:00:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:26:10.781 14:00:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:26:13.309 14:00:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:26:13.309 Validate MD5 checksum, iteration 2 00:26:13.309 14:00:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=dfa2aee09e5d74b663e6e69eecf50034 00:26:13.309 14:00:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ dfa2aee09e5d74b663e6e69eecf50034 != \d\f\a\2\a\e\e\0\9\e\5\d\7\4\b\6\6\3\e\6\e\6\9\e\e\c\f\5\0\0\3\4 ]] 00:26:13.309 14:00:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:26:13.309 14:00:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:26:13.309 14:00:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:26:13.309 14:00:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:26:13.309 14:00:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:26:13.309 14:00:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:26:13.309 14:00:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:26:13.309 14:00:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:26:13.309 14:00:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 
--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:26:13.309 [2024-10-15 14:00:26.729102] Starting SPDK v25.01-pre git sha1 5a8c76d99 / DPDK 24.03.0 initialization... 00:26:13.309 [2024-10-15 14:00:26.729251] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79128 ] 00:26:13.309 [2024-10-15 14:00:26.878559] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:13.309 [2024-10-15 14:00:26.964640] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:14.684  [2024-10-15T14:00:29.037Z] Copying: 691/1024 [MB] (691 MBps) [2024-10-15T14:00:29.603Z] Copying: 1024/1024 [MB] (average 684 MBps) 00:26:15.815 00:26:15.815 14:00:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:26:15.815 14:00:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:26:18.343 14:00:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:26:18.343 14:00:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=7b36a50b6e34b177a791aaee6d1cfac1 00:26:18.343 14:00:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 7b36a50b6e34b177a791aaee6d1cfac1 != \7\b\3\6\a\5\0\b\6\e\3\4\b\1\7\7\a\7\9\1\a\a\e\e\6\d\1\c\f\a\c\1 ]] 00:26:18.343 14:00:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:26:18.343 14:00:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:26:18.343 14:00:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:26:18.343 14:00:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:26:18.343 14:00:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:26:18.343 14:00:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:26:18.343 14:00:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:26:18.343 14:00:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:26:18.343 14:00:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:26:18.343 14:00:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:26:18.343 14:00:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 79031 ]] 00:26:18.343 14:00:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 79031 00:26:18.343 14:00:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@950 -- # '[' -z 79031 ']' 00:26:18.343 14:00:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # kill -0 79031 00:26:18.343 14:00:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # uname 00:26:18.343 14:00:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:18.343 14:00:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79031 00:26:18.343 killing process with pid 79031 00:26:18.343 14:00:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:18.343 14:00:31 ftl.ftl_upgrade_shutdown -- 
common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:18.343 14:00:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79031' 00:26:18.343 14:00:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@969 -- # kill 79031 00:26:18.343 14:00:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@974 -- # wait 79031 00:26:18.601 [2024-10-15 14:00:32.295261] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:26:18.601 [2024-10-15 14:00:32.305528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:18.601 [2024-10-15 14:00:32.305567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:26:18.601 [2024-10-15 14:00:32.305578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:26:18.601 [2024-10-15 14:00:32.305585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:18.601 [2024-10-15 14:00:32.305605] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:26:18.601 [2024-10-15 14:00:32.307754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:18.601 [2024-10-15 14:00:32.307778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:26:18.601 [2024-10-15 14:00:32.307786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.137 ms 00:26:18.601 [2024-10-15 14:00:32.307793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:18.601 [2024-10-15 14:00:32.308007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:18.601 [2024-10-15 14:00:32.308016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:26:18.601 [2024-10-15 14:00:32.308022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.190 ms 00:26:18.601 [2024-10-15 14:00:32.308028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:18.601 [2024-10-15 14:00:32.308985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:18.601 [2024-10-15 14:00:32.309083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:26:18.601 [2024-10-15 14:00:32.309095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.944 ms 00:26:18.601 [2024-10-15 14:00:32.309101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:18.601 [2024-10-15 14:00:32.310042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:18.601 [2024-10-15 14:00:32.310061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:26:18.601 [2024-10-15 14:00:32.310069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.915 ms 00:26:18.601 [2024-10-15 14:00:32.310076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:18.601 [2024-10-15 14:00:32.317687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:18.601 [2024-10-15 14:00:32.317725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:26:18.601 [2024-10-15 14:00:32.317734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.581 ms 00:26:18.601 [2024-10-15 14:00:32.317740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:18.601 [2024-10-15 14:00:32.322108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:18.601 [2024-10-15 14:00:32.322136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl] name: Persist valid map metadata 00:26:18.601 [2024-10-15 14:00:32.322145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.335 ms 00:26:18.601 [2024-10-15 14:00:32.322152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:18.601 [2024-10-15 14:00:32.322240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:18.601 [2024-10-15 14:00:32.322249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:26:18.601 [2024-10-15 14:00:32.322257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.059 ms 00:26:18.601 [2024-10-15 14:00:32.322264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:18.601 [2024-10-15 14:00:32.329753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:18.602 [2024-10-15 14:00:32.329779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:26:18.602 [2024-10-15 14:00:32.329786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.475 ms 00:26:18.602 [2024-10-15 14:00:32.329792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:18.602 [2024-10-15 14:00:32.336960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:18.602 [2024-10-15 14:00:32.336988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:26:18.602 [2024-10-15 14:00:32.336995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.141 ms 00:26:18.602 [2024-10-15 14:00:32.337001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:18.602 [2024-10-15 14:00:32.344024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:18.602 [2024-10-15 14:00:32.344135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:26:18.602 [2024-10-15 14:00:32.344147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.995 ms 00:26:18.602 [2024-10-15 14:00:32.344153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:18.602 [2024-10-15 14:00:32.351393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:18.602 [2024-10-15 14:00:32.351420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:26:18.602 [2024-10-15 14:00:32.351428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.190 ms 00:26:18.602 [2024-10-15 14:00:32.351433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:18.602 [2024-10-15 14:00:32.351460] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:26:18.602 [2024-10-15 14:00:32.351471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:26:18.602 [2024-10-15 14:00:32.351484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:26:18.602 [2024-10-15 14:00:32.351491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:26:18.602 [2024-10-15 14:00:32.351498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:18.602 [2024-10-15 14:00:32.351505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:18.602 [2024-10-15 14:00:32.351511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:18.602 [2024-10-15 14:00:32.351517] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:18.602 [2024-10-15 14:00:32.351523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:18.602 [2024-10-15 14:00:32.351529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:18.602 [2024-10-15 14:00:32.351535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:18.602 [2024-10-15 14:00:32.351542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:18.602 [2024-10-15 14:00:32.351548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:18.602 [2024-10-15 14:00:32.351554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:18.602 [2024-10-15 14:00:32.351560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:18.602 [2024-10-15 14:00:32.351566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:18.602 [2024-10-15 14:00:32.351572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:18.602 [2024-10-15 14:00:32.351578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:18.602 [2024-10-15 14:00:32.351584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:18.602 [2024-10-15 14:00:32.351592] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:26:18.602 [2024-10-15 14:00:32.351598] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: fc31afad-a5b8-407a-8be2-9affb9279993 00:26:18.602 [2024-10-15 14:00:32.351604] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:26:18.602 [2024-10-15 14:00:32.351610] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:26:18.602 [2024-10-15 14:00:32.351615] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:26:18.602 [2024-10-15 14:00:32.351629] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:26:18.602 [2024-10-15 14:00:32.351634] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:26:18.602 [2024-10-15 14:00:32.351640] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:26:18.602 [2024-10-15 14:00:32.351646] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:26:18.602 [2024-10-15 14:00:32.351651] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:26:18.602 [2024-10-15 14:00:32.351656] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:26:18.602 [2024-10-15 14:00:32.351664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:18.602 [2024-10-15 14:00:32.351671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:26:18.602 [2024-10-15 14:00:32.351678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.204 ms 00:26:18.602 [2024-10-15 14:00:32.351686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:18.602 [2024-10-15 14:00:32.361569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:18.602 [2024-10-15 14:00:32.361597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: 
Deinitialize L2P 00:26:18.602 [2024-10-15 14:00:32.361605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.859 ms 00:26:18.602 [2024-10-15 14:00:32.361613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:18.602 [2024-10-15 14:00:32.361893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:18.602 [2024-10-15 14:00:32.361909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:26:18.602 [2024-10-15 14:00:32.361916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.262 ms 00:26:18.602 [2024-10-15 14:00:32.361922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:18.860 [2024-10-15 14:00:32.395451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:18.860 [2024-10-15 14:00:32.395492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:26:18.860 [2024-10-15 14:00:32.395502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:18.861 [2024-10-15 14:00:32.395509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:18.861 [2024-10-15 14:00:32.395541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:18.861 [2024-10-15 14:00:32.395552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:26:18.861 [2024-10-15 14:00:32.395558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:18.861 [2024-10-15 14:00:32.395564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:18.861 [2024-10-15 14:00:32.395654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:18.861 [2024-10-15 14:00:32.395663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:26:18.861 [2024-10-15 14:00:32.395669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:18.861 [2024-10-15 14:00:32.395675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:18.861 [2024-10-15 14:00:32.395690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:18.861 [2024-10-15 14:00:32.395697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:26:18.861 [2024-10-15 14:00:32.395706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:18.861 [2024-10-15 14:00:32.395712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:18.861 [2024-10-15 14:00:32.457754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:18.861 [2024-10-15 14:00:32.457800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:26:18.861 [2024-10-15 14:00:32.457809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:18.861 [2024-10-15 14:00:32.457815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:18.861 [2024-10-15 14:00:32.508127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:18.861 [2024-10-15 14:00:32.508172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:26:18.861 [2024-10-15 14:00:32.508185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:18.861 [2024-10-15 14:00:32.508192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:18.861 [2024-10-15 14:00:32.509172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:18.861 [2024-10-15 14:00:32.509200] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:26:18.861 [2024-10-15 14:00:32.509208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:18.861 [2024-10-15 14:00:32.509214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:18.861 [2024-10-15 14:00:32.509280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:18.861 [2024-10-15 14:00:32.509289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:26:18.861 [2024-10-15 14:00:32.509295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:18.861 [2024-10-15 14:00:32.509310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:18.861 [2024-10-15 14:00:32.509391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:18.861 [2024-10-15 14:00:32.509401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:26:18.861 [2024-10-15 14:00:32.509407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:18.861 [2024-10-15 14:00:32.509412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:18.861 [2024-10-15 14:00:32.509437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:18.861 [2024-10-15 14:00:32.509444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:26:18.861 [2024-10-15 14:00:32.509450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:18.861 [2024-10-15 14:00:32.509456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:18.861 [2024-10-15 14:00:32.509485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:18.861 [2024-10-15 14:00:32.509492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:26:18.861 [2024-10-15 14:00:32.509498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:18.861 [2024-10-15 14:00:32.509504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:18.861 [2024-10-15 14:00:32.509537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:18.861 [2024-10-15 14:00:32.509545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:26:18.861 [2024-10-15 14:00:32.509551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:18.861 [2024-10-15 14:00:32.509558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:18.861 [2024-10-15 14:00:32.509652] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 204.101 ms, result 0 00:26:19.426 14:00:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:26:19.426 14:00:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:19.426 14:00:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:26:19.426 14:00:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:26:19.426 14:00:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]] 00:26:19.426 14:00:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:26:19.426 Remove shared memory files 00:26:19.426 14:00:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:26:19.426 14:00:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove 
shared memory files 00:26:19.426 14:00:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:26:19.426 14:00:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:26:19.426 14:00:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid78805 00:26:19.426 14:00:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:26:19.426 14:00:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:26:19.426 ************************************ 00:26:19.426 END TEST ftl_upgrade_shutdown 00:26:19.426 ************************************ 00:26:19.426 00:26:19.426 real 1m17.768s 00:26:19.426 user 1m48.987s 00:26:19.426 sys 0m18.105s 00:26:19.426 14:00:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:19.427 14:00:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:19.685 14:00:33 ftl -- ftl/ftl.sh@80 -- # [[ 1 -eq 1 ]] 00:26:19.685 14:00:33 ftl -- ftl/ftl.sh@81 -- # run_test ftl_restore_fast /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:26:19.685 14:00:33 ftl -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:26:19.685 14:00:33 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:19.685 14:00:33 ftl -- common/autotest_common.sh@10 -- # set +x 00:26:19.685 ************************************ 00:26:19.685 START TEST ftl_restore_fast 00:26:19.685 ************************************ 00:26:19.685 14:00:33 ftl.ftl_restore_fast -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:26:19.685 * Looking for test storage... 00:26:19.685 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:26:19.685 14:00:33 ftl.ftl_restore_fast -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:26:19.685 14:00:33 ftl.ftl_restore_fast -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:26:19.685 14:00:33 ftl.ftl_restore_fast -- common/autotest_common.sh@1691 -- # lcov --version 00:26:19.685 14:00:33 ftl.ftl_restore_fast -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:26:19.685 14:00:33 ftl.ftl_restore_fast -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:19.685 14:00:33 ftl.ftl_restore_fast -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:19.685 14:00:33 ftl.ftl_restore_fast -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:19.685 14:00:33 ftl.ftl_restore_fast -- scripts/common.sh@336 -- # IFS=.-: 00:26:19.685 14:00:33 ftl.ftl_restore_fast -- scripts/common.sh@336 -- # read -ra ver1 00:26:19.685 14:00:33 ftl.ftl_restore_fast -- scripts/common.sh@337 -- # IFS=.-: 00:26:19.685 14:00:33 ftl.ftl_restore_fast -- scripts/common.sh@337 -- # read -ra ver2 00:26:19.685 14:00:33 ftl.ftl_restore_fast -- scripts/common.sh@338 -- # local 'op=<' 00:26:19.685 14:00:33 ftl.ftl_restore_fast -- scripts/common.sh@340 -- # ver1_l=2 00:26:19.685 14:00:33 ftl.ftl_restore_fast -- scripts/common.sh@341 -- # ver2_l=1 00:26:19.685 14:00:33 ftl.ftl_restore_fast -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:19.685 14:00:33 ftl.ftl_restore_fast -- scripts/common.sh@344 -- # case "$op" in 00:26:19.685 14:00:33 ftl.ftl_restore_fast -- scripts/common.sh@345 -- # : 1 00:26:19.685 14:00:33 ftl.ftl_restore_fast -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:19.685 14:00:33 ftl.ftl_restore_fast -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:19.685 14:00:33 ftl.ftl_restore_fast -- scripts/common.sh@365 -- # decimal 1 00:26:19.685 14:00:33 ftl.ftl_restore_fast -- scripts/common.sh@353 -- # local d=1 00:26:19.685 14:00:33 ftl.ftl_restore_fast -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:19.685 14:00:33 ftl.ftl_restore_fast -- scripts/common.sh@355 -- # echo 1 00:26:19.685 14:00:33 ftl.ftl_restore_fast -- scripts/common.sh@365 -- # ver1[v]=1 00:26:19.685 14:00:33 ftl.ftl_restore_fast -- scripts/common.sh@366 -- # decimal 2 00:26:19.685 14:00:33 ftl.ftl_restore_fast -- scripts/common.sh@353 -- # local d=2 00:26:19.685 14:00:33 ftl.ftl_restore_fast -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:19.685 14:00:33 ftl.ftl_restore_fast -- scripts/common.sh@355 -- # echo 2 00:26:19.685 14:00:33 ftl.ftl_restore_fast -- scripts/common.sh@366 -- # ver2[v]=2 00:26:19.685 14:00:33 ftl.ftl_restore_fast -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:19.685 14:00:33 ftl.ftl_restore_fast -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:19.685 14:00:33 ftl.ftl_restore_fast -- scripts/common.sh@368 -- # return 0 00:26:19.685 14:00:33 ftl.ftl_restore_fast -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:19.685 14:00:33 ftl.ftl_restore_fast -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:26:19.685 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:19.685 --rc genhtml_branch_coverage=1 00:26:19.685 --rc genhtml_function_coverage=1 00:26:19.685 --rc genhtml_legend=1 00:26:19.685 --rc geninfo_all_blocks=1 00:26:19.685 --rc geninfo_unexecuted_blocks=1 00:26:19.685 00:26:19.685 ' 00:26:19.685 14:00:33 ftl.ftl_restore_fast -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:26:19.685 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:19.685 --rc genhtml_branch_coverage=1 00:26:19.685 --rc genhtml_function_coverage=1 00:26:19.685 --rc genhtml_legend=1 00:26:19.685 --rc geninfo_all_blocks=1 00:26:19.685 --rc geninfo_unexecuted_blocks=1 00:26:19.685 00:26:19.685 ' 00:26:19.685 14:00:33 ftl.ftl_restore_fast -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:26:19.685 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:19.685 --rc genhtml_branch_coverage=1 00:26:19.685 --rc genhtml_function_coverage=1 00:26:19.685 --rc genhtml_legend=1 00:26:19.685 --rc geninfo_all_blocks=1 00:26:19.685 --rc geninfo_unexecuted_blocks=1 00:26:19.685 00:26:19.685 ' 00:26:19.685 14:00:33 ftl.ftl_restore_fast -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:26:19.685 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:19.685 --rc genhtml_branch_coverage=1 00:26:19.685 --rc genhtml_function_coverage=1 00:26:19.685 --rc genhtml_legend=1 00:26:19.685 --rc geninfo_all_blocks=1 00:26:19.685 --rc geninfo_unexecuted_blocks=1 00:26:19.685 00:26:19.685 ' 00:26:19.685 14:00:33 ftl.ftl_restore_fast -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:26:19.685 14:00:33 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:26:19.685 14:00:33 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:26:19.685 14:00:33 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:26:19.685 14:00:33 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
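
The "lt 1.15 2" check traced just above is how the harness decides whether the installed lcov predates 2.x before exporting coverage options: cmp_versions splits each version string into components and compares them numerically, left to right. Below is a minimal standalone re-sketch of that logic in bash. It is illustrative only: the function name version_lt is invented for this sketch, and it splits on dots alone, whereas the cmp_versions seen in the xtrace also splits on dashes and colons (IFS=.-:).

  # Sketch, not SPDK source: numeric, component-wise "less than" for
  # dotted version strings, mirroring what scripts/common.sh's
  # cmp_versions does in the xtrace above.
  version_lt() {                       # illustrative name, not the real helper
      local -a v1 v2
      IFS='.' read -ra v1 <<< "$1"
      IFS='.' read -ra v2 <<< "$2"
      local i max=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
      for (( i = 0; i < max; i++ )); do
          local a=${v1[i]:-0} b=${v2[i]:-0}   # missing components count as 0
          (( a < b )) && return 0
          (( a > b )) && return 1
      done
      return 1                         # equal versions are not "less than"
  }

  version_lt 1.15 2 && echo "lcov predates 2.x"

Because 1.15 compares below 2 component-wise, the branch taken here is the one that exports the 1.x-style flags, which is why the LCOV_OPTS blocks printed above carry --rc lcov_branch_coverage=1 and --rc lcov_function_coverage=1.
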
00:26:19.685 14:00:33 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:26:19.685 14:00:33 ftl.ftl_restore_fast -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:19.685 14:00:33 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:26:19.685 14:00:33 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:26:19.685 14:00:33 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:19.685 14:00:33 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:19.685 14:00:33 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:26:19.685 14:00:33 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:26:19.685 14:00:33 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:19.685 14:00:33 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:19.685 14:00:33 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:26:19.685 14:00:33 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:26:19.685 14:00:33 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:19.685 14:00:33 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:19.685 14:00:33 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:26:19.685 14:00:33 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:26:19.685 14:00:33 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:26:19.685 14:00:33 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:26:19.685 14:00:33 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:26:19.685 14:00:33 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:26:19.685 14:00:33 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:26:19.685 14:00:33 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # spdk_ini_pid= 00:26:19.685 14:00:33 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:19.685 14:00:33 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:19.685 14:00:33 ftl.ftl_restore_fast -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:19.685 14:00:33 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mktemp -d 00:26:19.685 14:00:33 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.h6AYTohEnY 00:26:19.685 14:00:33 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:26:19.685 14:00:33 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:26:19.685 14:00:33 ftl.ftl_restore_fast -- ftl/restore.sh@19 -- # fast_shutdown=1 00:26:19.685 14:00:33 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:26:19.685 14:00:33 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:26:19.685 14:00:33 ftl.ftl_restore_fast -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:26:19.685 14:00:33 ftl.ftl_restore_fast 
-- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:26:19.685 14:00:33 ftl.ftl_restore_fast -- ftl/restore.sh@23 -- # shift 3 00:26:19.685 14:00:33 ftl.ftl_restore_fast -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:26:19.685 14:00:33 ftl.ftl_restore_fast -- ftl/restore.sh@25 -- # timeout=240 00:26:19.685 14:00:33 ftl.ftl_restore_fast -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:26:19.685 14:00:33 ftl.ftl_restore_fast -- ftl/restore.sh@39 -- # svcpid=79272 00:26:19.686 14:00:33 ftl.ftl_restore_fast -- ftl/restore.sh@41 -- # waitforlisten 79272 00:26:19.686 14:00:33 ftl.ftl_restore_fast -- common/autotest_common.sh@831 -- # '[' -z 79272 ']' 00:26:19.686 14:00:33 ftl.ftl_restore_fast -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:19.686 14:00:33 ftl.ftl_restore_fast -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:19.686 14:00:33 ftl.ftl_restore_fast -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:19.686 14:00:33 ftl.ftl_restore_fast -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:19.686 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:19.686 14:00:33 ftl.ftl_restore_fast -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:19.686 14:00:33 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x 00:26:19.686 [2024-10-15 14:00:33.451388] Starting SPDK v25.01-pre git sha1 5a8c76d99 / DPDK 24.03.0 initialization... 00:26:19.686 [2024-10-15 14:00:33.451630] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79272 ] 00:26:19.943 [2024-10-15 14:00:33.596437] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:19.943 [2024-10-15 14:00:33.697682] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:20.506 14:00:34 ftl.ftl_restore_fast -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:20.506 14:00:34 ftl.ftl_restore_fast -- common/autotest_common.sh@864 -- # return 0 00:26:20.506 14:00:34 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:26:20.506 14:00:34 ftl.ftl_restore_fast -- ftl/common.sh@54 -- # local name=nvme0 00:26:20.506 14:00:34 ftl.ftl_restore_fast -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:26:20.506 14:00:34 ftl.ftl_restore_fast -- ftl/common.sh@56 -- # local size=103424 00:26:20.506 14:00:34 ftl.ftl_restore_fast -- ftl/common.sh@59 -- # local base_bdev 00:26:20.506 14:00:34 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:26:20.765 14:00:34 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:26:20.765 14:00:34 ftl.ftl_restore_fast -- ftl/common.sh@62 -- # local base_size 00:26:21.023 14:00:34 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:26:21.023 14:00:34 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:26:21.023 14:00:34 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # local bdev_info 00:26:21.023 14:00:34 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # local bs 00:26:21.023 14:00:34 ftl.ftl_restore_fast -- 
common/autotest_common.sh@1381 -- # local nb 00:26:21.023 14:00:34 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:26:21.023 14:00:34 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:26:21.023 { 00:26:21.023 "name": "nvme0n1", 00:26:21.023 "aliases": [ 00:26:21.023 "4b14e39c-97a4-4dfc-a048-3bbdb88fd241" 00:26:21.023 ], 00:26:21.023 "product_name": "NVMe disk", 00:26:21.023 "block_size": 4096, 00:26:21.023 "num_blocks": 1310720, 00:26:21.023 "uuid": "4b14e39c-97a4-4dfc-a048-3bbdb88fd241", 00:26:21.023 "numa_id": -1, 00:26:21.023 "assigned_rate_limits": { 00:26:21.023 "rw_ios_per_sec": 0, 00:26:21.023 "rw_mbytes_per_sec": 0, 00:26:21.023 "r_mbytes_per_sec": 0, 00:26:21.023 "w_mbytes_per_sec": 0 00:26:21.023 }, 00:26:21.023 "claimed": true, 00:26:21.023 "claim_type": "read_many_write_one", 00:26:21.023 "zoned": false, 00:26:21.023 "supported_io_types": { 00:26:21.023 "read": true, 00:26:21.023 "write": true, 00:26:21.023 "unmap": true, 00:26:21.023 "flush": true, 00:26:21.023 "reset": true, 00:26:21.023 "nvme_admin": true, 00:26:21.023 "nvme_io": true, 00:26:21.023 "nvme_io_md": false, 00:26:21.023 "write_zeroes": true, 00:26:21.023 "zcopy": false, 00:26:21.023 "get_zone_info": false, 00:26:21.023 "zone_management": false, 00:26:21.023 "zone_append": false, 00:26:21.023 "compare": true, 00:26:21.023 "compare_and_write": false, 00:26:21.023 "abort": true, 00:26:21.023 "seek_hole": false, 00:26:21.023 "seek_data": false, 00:26:21.023 "copy": true, 00:26:21.023 "nvme_iov_md": false 00:26:21.023 }, 00:26:21.023 "driver_specific": { 00:26:21.023 "nvme": [ 00:26:21.023 { 00:26:21.023 "pci_address": "0000:00:11.0", 00:26:21.023 "trid": { 00:26:21.023 "trtype": "PCIe", 00:26:21.023 "traddr": "0000:00:11.0" 00:26:21.023 }, 00:26:21.023 "ctrlr_data": { 00:26:21.023 "cntlid": 0, 00:26:21.023 "vendor_id": "0x1b36", 00:26:21.023 "model_number": "QEMU NVMe Ctrl", 00:26:21.023 "serial_number": "12341", 00:26:21.023 "firmware_revision": "8.0.0", 00:26:21.023 "subnqn": "nqn.2019-08.org.qemu:12341", 00:26:21.023 "oacs": { 00:26:21.023 "security": 0, 00:26:21.023 "format": 1, 00:26:21.023 "firmware": 0, 00:26:21.023 "ns_manage": 1 00:26:21.023 }, 00:26:21.023 "multi_ctrlr": false, 00:26:21.023 "ana_reporting": false 00:26:21.023 }, 00:26:21.023 "vs": { 00:26:21.023 "nvme_version": "1.4" 00:26:21.023 }, 00:26:21.023 "ns_data": { 00:26:21.023 "id": 1, 00:26:21.023 "can_share": false 00:26:21.023 } 00:26:21.023 } 00:26:21.023 ], 00:26:21.023 "mp_policy": "active_passive" 00:26:21.023 } 00:26:21.023 } 00:26:21.023 ]' 00:26:21.023 14:00:34 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:26:21.023 14:00:34 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # bs=4096 00:26:21.023 14:00:34 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:26:21.281 14:00:34 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # nb=1310720 00:26:21.281 14:00:34 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:26:21.281 14:00:34 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # echo 5120 00:26:21.281 14:00:34 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # base_size=5120 00:26:21.281 14:00:34 ftl.ftl_restore_fast -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:26:21.281 14:00:34 ftl.ftl_restore_fast -- ftl/common.sh@67 -- # clear_lvols 00:26:21.281 14:00:34 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # jq -r 
'.[] | .uuid' 00:26:21.281 14:00:34 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:26:21.281 14:00:35 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # stores=1665019c-a740-4c79-a073-d4a025337feb 00:26:21.281 14:00:35 ftl.ftl_restore_fast -- ftl/common.sh@29 -- # for lvs in $stores 00:26:21.281 14:00:35 ftl.ftl_restore_fast -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 1665019c-a740-4c79-a073-d4a025337feb 00:26:21.539 14:00:35 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:26:21.797 14:00:35 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # lvs=f562e068-f494-4e5e-a957-5fd14d3eaf0f 00:26:21.797 14:00:35 ftl.ftl_restore_fast -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u f562e068-f494-4e5e-a957-5fd14d3eaf0f 00:26:22.056 14:00:35 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # split_bdev=85c2b93a-91d3-4f91-9e93-95e7df4b7bbc 00:26:22.056 14:00:35 ftl.ftl_restore_fast -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:26:22.056 14:00:35 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 85c2b93a-91d3-4f91-9e93-95e7df4b7bbc 00:26:22.056 14:00:35 ftl.ftl_restore_fast -- ftl/common.sh@35 -- # local name=nvc0 00:26:22.056 14:00:35 ftl.ftl_restore_fast -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:26:22.056 14:00:35 ftl.ftl_restore_fast -- ftl/common.sh@37 -- # local base_bdev=85c2b93a-91d3-4f91-9e93-95e7df4b7bbc 00:26:22.056 14:00:35 ftl.ftl_restore_fast -- ftl/common.sh@38 -- # local cache_size= 00:26:22.056 14:00:35 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # get_bdev_size 85c2b93a-91d3-4f91-9e93-95e7df4b7bbc 00:26:22.056 14:00:35 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # local bdev_name=85c2b93a-91d3-4f91-9e93-95e7df4b7bbc 00:26:22.056 14:00:35 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # local bdev_info 00:26:22.056 14:00:35 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # local bs 00:26:22.056 14:00:35 ftl.ftl_restore_fast -- common/autotest_common.sh@1381 -- # local nb 00:26:22.056 14:00:35 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 85c2b93a-91d3-4f91-9e93-95e7df4b7bbc 00:26:22.314 14:00:35 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:26:22.314 { 00:26:22.314 "name": "85c2b93a-91d3-4f91-9e93-95e7df4b7bbc", 00:26:22.314 "aliases": [ 00:26:22.314 "lvs/nvme0n1p0" 00:26:22.314 ], 00:26:22.314 "product_name": "Logical Volume", 00:26:22.314 "block_size": 4096, 00:26:22.314 "num_blocks": 26476544, 00:26:22.314 "uuid": "85c2b93a-91d3-4f91-9e93-95e7df4b7bbc", 00:26:22.314 "assigned_rate_limits": { 00:26:22.314 "rw_ios_per_sec": 0, 00:26:22.314 "rw_mbytes_per_sec": 0, 00:26:22.314 "r_mbytes_per_sec": 0, 00:26:22.314 "w_mbytes_per_sec": 0 00:26:22.314 }, 00:26:22.314 "claimed": false, 00:26:22.314 "zoned": false, 00:26:22.314 "supported_io_types": { 00:26:22.314 "read": true, 00:26:22.314 "write": true, 00:26:22.314 "unmap": true, 00:26:22.314 "flush": false, 00:26:22.314 "reset": true, 00:26:22.314 "nvme_admin": false, 00:26:22.314 "nvme_io": false, 00:26:22.314 "nvme_io_md": false, 00:26:22.314 "write_zeroes": true, 00:26:22.314 "zcopy": false, 00:26:22.314 "get_zone_info": false, 00:26:22.314 "zone_management": false, 00:26:22.314 "zone_append": 
false, 00:26:22.314 "compare": false, 00:26:22.314 "compare_and_write": false, 00:26:22.314 "abort": false, 00:26:22.314 "seek_hole": true, 00:26:22.314 "seek_data": true, 00:26:22.314 "copy": false, 00:26:22.314 "nvme_iov_md": false 00:26:22.314 }, 00:26:22.314 "driver_specific": { 00:26:22.314 "lvol": { 00:26:22.314 "lvol_store_uuid": "f562e068-f494-4e5e-a957-5fd14d3eaf0f", 00:26:22.314 "base_bdev": "nvme0n1", 00:26:22.314 "thin_provision": true, 00:26:22.314 "num_allocated_clusters": 0, 00:26:22.314 "snapshot": false, 00:26:22.314 "clone": false, 00:26:22.314 "esnap_clone": false 00:26:22.314 } 00:26:22.314 } 00:26:22.314 } 00:26:22.314 ]' 00:26:22.314 14:00:35 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:26:22.314 14:00:35 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # bs=4096 00:26:22.314 14:00:35 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:26:22.314 14:00:35 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # nb=26476544 00:26:22.314 14:00:35 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:26:22.314 14:00:35 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # echo 103424 00:26:22.314 14:00:35 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # local base_size=5171 00:26:22.314 14:00:35 ftl.ftl_restore_fast -- ftl/common.sh@44 -- # local nvc_bdev 00:26:22.314 14:00:35 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:26:22.572 14:00:36 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:26:22.572 14:00:36 ftl.ftl_restore_fast -- ftl/common.sh@47 -- # [[ -z '' ]] 00:26:22.572 14:00:36 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # get_bdev_size 85c2b93a-91d3-4f91-9e93-95e7df4b7bbc 00:26:22.572 14:00:36 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # local bdev_name=85c2b93a-91d3-4f91-9e93-95e7df4b7bbc 00:26:22.572 14:00:36 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # local bdev_info 00:26:22.572 14:00:36 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # local bs 00:26:22.572 14:00:36 ftl.ftl_restore_fast -- common/autotest_common.sh@1381 -- # local nb 00:26:22.572 14:00:36 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 85c2b93a-91d3-4f91-9e93-95e7df4b7bbc 00:26:22.830 14:00:36 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:26:22.830 { 00:26:22.830 "name": "85c2b93a-91d3-4f91-9e93-95e7df4b7bbc", 00:26:22.830 "aliases": [ 00:26:22.830 "lvs/nvme0n1p0" 00:26:22.830 ], 00:26:22.830 "product_name": "Logical Volume", 00:26:22.830 "block_size": 4096, 00:26:22.830 "num_blocks": 26476544, 00:26:22.830 "uuid": "85c2b93a-91d3-4f91-9e93-95e7df4b7bbc", 00:26:22.830 "assigned_rate_limits": { 00:26:22.830 "rw_ios_per_sec": 0, 00:26:22.830 "rw_mbytes_per_sec": 0, 00:26:22.830 "r_mbytes_per_sec": 0, 00:26:22.830 "w_mbytes_per_sec": 0 00:26:22.830 }, 00:26:22.830 "claimed": false, 00:26:22.830 "zoned": false, 00:26:22.830 "supported_io_types": { 00:26:22.830 "read": true, 00:26:22.830 "write": true, 00:26:22.830 "unmap": true, 00:26:22.830 "flush": false, 00:26:22.830 "reset": true, 00:26:22.830 "nvme_admin": false, 00:26:22.830 "nvme_io": false, 00:26:22.830 "nvme_io_md": false, 00:26:22.830 "write_zeroes": true, 00:26:22.830 "zcopy": false, 00:26:22.830 "get_zone_info": false, 00:26:22.830 "zone_management": false, 
00:26:22.830 "zone_append": false, 00:26:22.830 "compare": false, 00:26:22.830 "compare_and_write": false, 00:26:22.830 "abort": false, 00:26:22.830 "seek_hole": true, 00:26:22.830 "seek_data": true, 00:26:22.830 "copy": false, 00:26:22.830 "nvme_iov_md": false 00:26:22.830 }, 00:26:22.830 "driver_specific": { 00:26:22.830 "lvol": { 00:26:22.830 "lvol_store_uuid": "f562e068-f494-4e5e-a957-5fd14d3eaf0f", 00:26:22.830 "base_bdev": "nvme0n1", 00:26:22.830 "thin_provision": true, 00:26:22.830 "num_allocated_clusters": 0, 00:26:22.830 "snapshot": false, 00:26:22.830 "clone": false, 00:26:22.830 "esnap_clone": false 00:26:22.830 } 00:26:22.830 } 00:26:22.830 } 00:26:22.830 ]' 00:26:22.830 14:00:36 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:26:22.830 14:00:36 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # bs=4096 00:26:22.830 14:00:36 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:26:22.830 14:00:36 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # nb=26476544 00:26:22.830 14:00:36 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:26:22.830 14:00:36 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # echo 103424 00:26:22.830 14:00:36 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # cache_size=5171 00:26:22.830 14:00:36 ftl.ftl_restore_fast -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:26:23.089 14:00:36 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:26:23.089 14:00:36 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # get_bdev_size 85c2b93a-91d3-4f91-9e93-95e7df4b7bbc 00:26:23.089 14:00:36 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # local bdev_name=85c2b93a-91d3-4f91-9e93-95e7df4b7bbc 00:26:23.089 14:00:36 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # local bdev_info 00:26:23.089 14:00:36 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # local bs 00:26:23.089 14:00:36 ftl.ftl_restore_fast -- common/autotest_common.sh@1381 -- # local nb 00:26:23.089 14:00:36 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 85c2b93a-91d3-4f91-9e93-95e7df4b7bbc 00:26:23.089 14:00:36 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:26:23.089 { 00:26:23.089 "name": "85c2b93a-91d3-4f91-9e93-95e7df4b7bbc", 00:26:23.089 "aliases": [ 00:26:23.089 "lvs/nvme0n1p0" 00:26:23.089 ], 00:26:23.089 "product_name": "Logical Volume", 00:26:23.089 "block_size": 4096, 00:26:23.089 "num_blocks": 26476544, 00:26:23.089 "uuid": "85c2b93a-91d3-4f91-9e93-95e7df4b7bbc", 00:26:23.089 "assigned_rate_limits": { 00:26:23.089 "rw_ios_per_sec": 0, 00:26:23.089 "rw_mbytes_per_sec": 0, 00:26:23.089 "r_mbytes_per_sec": 0, 00:26:23.089 "w_mbytes_per_sec": 0 00:26:23.089 }, 00:26:23.089 "claimed": false, 00:26:23.089 "zoned": false, 00:26:23.089 "supported_io_types": { 00:26:23.089 "read": true, 00:26:23.089 "write": true, 00:26:23.089 "unmap": true, 00:26:23.089 "flush": false, 00:26:23.089 "reset": true, 00:26:23.089 "nvme_admin": false, 00:26:23.089 "nvme_io": false, 00:26:23.089 "nvme_io_md": false, 00:26:23.089 "write_zeroes": true, 00:26:23.089 "zcopy": false, 00:26:23.089 "get_zone_info": false, 00:26:23.089 "zone_management": false, 00:26:23.089 "zone_append": false, 00:26:23.089 "compare": false, 00:26:23.089 "compare_and_write": false, 00:26:23.089 "abort": false, 00:26:23.089 "seek_hole": 
true, 00:26:23.089 "seek_data": true, 00:26:23.089 "copy": false, 00:26:23.089 "nvme_iov_md": false 00:26:23.089 }, 00:26:23.089 "driver_specific": { 00:26:23.089 "lvol": { 00:26:23.089 "lvol_store_uuid": "f562e068-f494-4e5e-a957-5fd14d3eaf0f", 00:26:23.089 "base_bdev": "nvme0n1", 00:26:23.089 "thin_provision": true, 00:26:23.089 "num_allocated_clusters": 0, 00:26:23.089 "snapshot": false, 00:26:23.089 "clone": false, 00:26:23.089 "esnap_clone": false 00:26:23.089 } 00:26:23.089 } 00:26:23.089 } 00:26:23.089 ]' 00:26:23.089 14:00:36 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:26:23.347 14:00:36 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # bs=4096 00:26:23.347 14:00:36 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:26:23.347 14:00:36 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # nb=26476544 00:26:23.347 14:00:36 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:26:23.347 14:00:36 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # echo 103424 00:26:23.347 14:00:36 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:26:23.347 14:00:36 ftl.ftl_restore_fast -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 85c2b93a-91d3-4f91-9e93-95e7df4b7bbc --l2p_dram_limit 10' 00:26:23.347 14:00:36 ftl.ftl_restore_fast -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:26:23.347 14:00:36 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:26:23.347 14:00:36 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:26:23.347 14:00:36 ftl.ftl_restore_fast -- ftl/restore.sh@54 -- # '[' 1 -eq 1 ']' 00:26:23.347 14:00:36 ftl.ftl_restore_fast -- ftl/restore.sh@55 -- # ftl_construct_args+=' --fast-shutdown' 00:26:23.347 14:00:36 ftl.ftl_restore_fast -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 85c2b93a-91d3-4f91-9e93-95e7df4b7bbc --l2p_dram_limit 10 -c nvc0n1p0 --fast-shutdown 00:26:23.347 [2024-10-15 14:00:37.132772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:23.347 [2024-10-15 14:00:37.132822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:26:23.347 [2024-10-15 14:00:37.132836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:26:23.347 [2024-10-15 14:00:37.132842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:23.347 [2024-10-15 14:00:37.132892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:23.347 [2024-10-15 14:00:37.132902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:23.347 [2024-10-15 14:00:37.132910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:26:23.347 [2024-10-15 14:00:37.132916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:23.347 [2024-10-15 14:00:37.132936] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:26:23.347 [2024-10-15 14:00:37.133616] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:26:23.347 [2024-10-15 14:00:37.133638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:23.347 [2024-10-15 14:00:37.133645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:23.347 [2024-10-15 14:00:37.133653] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.707 ms 00:26:23.347 [2024-10-15 14:00:37.133659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:23.347 [2024-10-15 14:00:37.133765] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 9a01225a-bd24-4d03-b10f-ba453d044a70 00:26:23.607 [2024-10-15 14:00:37.134744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:23.607 [2024-10-15 14:00:37.134776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:26:23.607 [2024-10-15 14:00:37.134784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:26:23.607 [2024-10-15 14:00:37.134793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:23.607 [2024-10-15 14:00:37.139894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:23.607 [2024-10-15 14:00:37.140039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:23.607 [2024-10-15 14:00:37.140052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.067 ms 00:26:23.607 [2024-10-15 14:00:37.140059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:23.607 [2024-10-15 14:00:37.140134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:23.607 [2024-10-15 14:00:37.140143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:23.607 [2024-10-15 14:00:37.140150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:26:23.607 [2024-10-15 14:00:37.140159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:23.607 [2024-10-15 14:00:37.140204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:23.607 [2024-10-15 14:00:37.140214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:26:23.607 [2024-10-15 14:00:37.140236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:26:23.607 [2024-10-15 14:00:37.140245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:23.607 [2024-10-15 14:00:37.140264] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:23.607 [2024-10-15 14:00:37.143200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:23.607 [2024-10-15 14:00:37.143305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:23.607 [2024-10-15 14:00:37.143320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.939 ms 00:26:23.607 [2024-10-15 14:00:37.143329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:23.607 [2024-10-15 14:00:37.143360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:23.607 [2024-10-15 14:00:37.143366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:26:23.607 [2024-10-15 14:00:37.143374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:26:23.607 [2024-10-15 14:00:37.143379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:23.607 [2024-10-15 14:00:37.143402] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:26:23.607 [2024-10-15 14:00:37.143513] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:26:23.607 [2024-10-15 14:00:37.143525] upgrade/ftl_sb_v5.c: 
101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:26:23.607 [2024-10-15 14:00:37.143534] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:26:23.607 [2024-10-15 14:00:37.143543] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:26:23.607 [2024-10-15 14:00:37.143550] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:26:23.607 [2024-10-15 14:00:37.143558] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:26:23.607 [2024-10-15 14:00:37.143564] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:26:23.607 [2024-10-15 14:00:37.143570] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:26:23.607 [2024-10-15 14:00:37.143577] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:26:23.607 [2024-10-15 14:00:37.143584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:23.607 [2024-10-15 14:00:37.143591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:26:23.607 [2024-10-15 14:00:37.143607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.182 ms 00:26:23.607 [2024-10-15 14:00:37.143617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:23.607 [2024-10-15 14:00:37.143686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:23.607 [2024-10-15 14:00:37.143692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:26:23.607 [2024-10-15 14:00:37.143700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:26:23.607 [2024-10-15 14:00:37.143705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:23.607 [2024-10-15 14:00:37.143783] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:26:23.607 [2024-10-15 14:00:37.143790] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:26:23.607 [2024-10-15 14:00:37.143800] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:23.607 [2024-10-15 14:00:37.143806] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:23.607 [2024-10-15 14:00:37.143813] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:26:23.607 [2024-10-15 14:00:37.143818] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:26:23.607 [2024-10-15 14:00:37.143826] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:26:23.607 [2024-10-15 14:00:37.143831] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:26:23.607 [2024-10-15 14:00:37.143838] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:26:23.607 [2024-10-15 14:00:37.143843] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:23.607 [2024-10-15 14:00:37.143849] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:26:23.607 [2024-10-15 14:00:37.143854] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:26:23.607 [2024-10-15 14:00:37.143861] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:23.607 [2024-10-15 14:00:37.143867] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:26:23.607 [2024-10-15 14:00:37.143876] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl0] offset: 113.88 MiB 00:26:23.607 [2024-10-15 14:00:37.143881] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:23.607 [2024-10-15 14:00:37.143889] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:26:23.607 [2024-10-15 14:00:37.143894] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:26:23.607 [2024-10-15 14:00:37.143900] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:23.607 [2024-10-15 14:00:37.143906] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:26:23.607 [2024-10-15 14:00:37.143913] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:26:23.607 [2024-10-15 14:00:37.143918] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:23.607 [2024-10-15 14:00:37.143925] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:26:23.607 [2024-10-15 14:00:37.143930] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:26:23.607 [2024-10-15 14:00:37.143937] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:23.607 [2024-10-15 14:00:37.143942] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:26:23.607 [2024-10-15 14:00:37.143948] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:26:23.607 [2024-10-15 14:00:37.143953] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:23.607 [2024-10-15 14:00:37.143959] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:26:23.607 [2024-10-15 14:00:37.143964] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:26:23.607 [2024-10-15 14:00:37.143970] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:23.607 [2024-10-15 14:00:37.143975] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:26:23.607 [2024-10-15 14:00:37.143983] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:26:23.607 [2024-10-15 14:00:37.143988] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:23.607 [2024-10-15 14:00:37.143995] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:26:23.607 [2024-10-15 14:00:37.144000] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:26:23.607 [2024-10-15 14:00:37.144006] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:23.607 [2024-10-15 14:00:37.144011] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:26:23.607 [2024-10-15 14:00:37.144017] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:26:23.607 [2024-10-15 14:00:37.144022] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:23.607 [2024-10-15 14:00:37.144028] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:26:23.607 [2024-10-15 14:00:37.144033] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:26:23.607 [2024-10-15 14:00:37.144039] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:23.607 [2024-10-15 14:00:37.144043] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:26:23.607 [2024-10-15 14:00:37.144051] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:26:23.607 [2024-10-15 14:00:37.144056] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:23.607 [2024-10-15 
14:00:37.144063] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:23.607 [2024-10-15 14:00:37.144070] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:26:23.608 [2024-10-15 14:00:37.144079] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:26:23.608 [2024-10-15 14:00:37.144084] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:26:23.608 [2024-10-15 14:00:37.144091] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:26:23.608 [2024-10-15 14:00:37.144096] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:26:23.608 [2024-10-15 14:00:37.144102] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:26:23.608 [2024-10-15 14:00:37.144110] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:26:23.608 [2024-10-15 14:00:37.144119] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:23.608 [2024-10-15 14:00:37.144126] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:26:23.608 [2024-10-15 14:00:37.144133] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:26:23.608 [2024-10-15 14:00:37.144139] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:26:23.608 [2024-10-15 14:00:37.144145] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:26:23.608 [2024-10-15 14:00:37.144151] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:26:23.608 [2024-10-15 14:00:37.144158] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:26:23.608 [2024-10-15 14:00:37.144164] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:26:23.608 [2024-10-15 14:00:37.144171] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:26:23.608 [2024-10-15 14:00:37.144176] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:26:23.608 [2024-10-15 14:00:37.144184] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:26:23.608 [2024-10-15 14:00:37.144189] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:26:23.608 [2024-10-15 14:00:37.144196] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:26:23.608 [2024-10-15 14:00:37.144202] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:26:23.608 [2024-10-15 14:00:37.144209] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:26:23.608 [2024-10-15 
14:00:37.144214] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:26:23.608 [2024-10-15 14:00:37.144234] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:23.608 [2024-10-15 14:00:37.144243] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:23.608 [2024-10-15 14:00:37.144250] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:26:23.608 [2024-10-15 14:00:37.144256] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:26:23.608 [2024-10-15 14:00:37.144264] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:26:23.608 [2024-10-15 14:00:37.144270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:23.608 [2024-10-15 14:00:37.144277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:26:23.608 [2024-10-15 14:00:37.144283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.542 ms 00:26:23.608 [2024-10-15 14:00:37.144292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:23.608 [2024-10-15 14:00:37.144336] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:26:23.608 [2024-10-15 14:00:37.144347] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:26:26.136 [2024-10-15 14:00:39.466274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:26.136 [2024-10-15 14:00:39.466496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:26:26.136 [2024-10-15 14:00:39.466517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2321.927 ms 00:26:26.136 [2024-10-15 14:00:39.466528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:26.136 [2024-10-15 14:00:39.492207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:26.136 [2024-10-15 14:00:39.492267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:26.136 [2024-10-15 14:00:39.492291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.456 ms 00:26:26.136 [2024-10-15 14:00:39.492301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:26.136 [2024-10-15 14:00:39.492441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:26.136 [2024-10-15 14:00:39.492454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:26:26.136 [2024-10-15 14:00:39.492462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:26:26.136 [2024-10-15 14:00:39.492473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:26.136 [2024-10-15 14:00:39.522963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:26.136 [2024-10-15 14:00:39.523007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:26.136 [2024-10-15 14:00:39.523019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.456 ms 00:26:26.136 [2024-10-15 14:00:39.523029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:26:26.136 [2024-10-15 14:00:39.523063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:26.136 [2024-10-15 14:00:39.523075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:26.136 [2024-10-15 14:00:39.523083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:26:26.136 [2024-10-15 14:00:39.523093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:26.136 [2024-10-15 14:00:39.523476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:26.136 [2024-10-15 14:00:39.523494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:26.136 [2024-10-15 14:00:39.523503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.329 ms 00:26:26.136 [2024-10-15 14:00:39.523512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:26.136 [2024-10-15 14:00:39.523635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:26.136 [2024-10-15 14:00:39.523646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:26.136 [2024-10-15 14:00:39.523654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.103 ms 00:26:26.136 [2024-10-15 14:00:39.523664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:26.136 [2024-10-15 14:00:39.537588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:26.136 [2024-10-15 14:00:39.537739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:26.136 [2024-10-15 14:00:39.537755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.904 ms 00:26:26.136 [2024-10-15 14:00:39.537767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:26.136 [2024-10-15 14:00:39.549306] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:26:26.136 [2024-10-15 14:00:39.552012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:26.136 [2024-10-15 14:00:39.552042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:26:26.136 [2024-10-15 14:00:39.552055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.152 ms 00:26:26.136 [2024-10-15 14:00:39.552064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:26.137 [2024-10-15 14:00:39.628401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:26.137 [2024-10-15 14:00:39.628460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:26:26.137 [2024-10-15 14:00:39.628477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 76.305 ms 00:26:26.137 [2024-10-15 14:00:39.628486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:26.137 [2024-10-15 14:00:39.628685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:26.137 [2024-10-15 14:00:39.628696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:26:26.137 [2024-10-15 14:00:39.628707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.142 ms 00:26:26.137 [2024-10-15 14:00:39.628717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:26.137 [2024-10-15 14:00:39.651965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:26.137 [2024-10-15 14:00:39.652007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 
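[annotation] Each management step in the trace above is logged as the same four-entry pattern from mngt/ftl_mngt.c (427: Action, 428: name, 430: duration, 431: status), so the startup can be profiled straight from the console output. A minimal sketch, assuming the log has been captured one entry per line into ftl.log (a hypothetical filename):

# Print each step's duration and name, slowest first. Assumes every 428 (name)
# entry is immediately followed by its 430 (duration) entry, as in the trace above.
awk -F'name: ' '/428:trace_step/ { step = $2 }
                /430:trace_step/ { split($0, d, "duration: "); print d[2] "\t" step }' ftl.log |
  sort -nr | head

On the entries above this would rank Scrub NV cache (2321.927 ms) far ahead of everything else, consistent with the 2681.216 ms total that finish_msg reports for 'FTL startup' a little further down.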
00:26:26.137 [2024-10-15 14:00:39.652022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.189 ms 00:26:26.137 [2024-10-15 14:00:39.652031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:26.137 [2024-10-15 14:00:39.674343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:26.137 [2024-10-15 14:00:39.674381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:26:26.137 [2024-10-15 14:00:39.674395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.268 ms 00:26:26.137 [2024-10-15 14:00:39.674403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:26.137 [2024-10-15 14:00:39.674965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:26.137 [2024-10-15 14:00:39.674986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:26:26.137 [2024-10-15 14:00:39.674997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.525 ms 00:26:26.137 [2024-10-15 14:00:39.675004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:26.137 [2024-10-15 14:00:39.741696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:26.137 [2024-10-15 14:00:39.741892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:26:26.137 [2024-10-15 14:00:39.741918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 66.652 ms 00:26:26.137 [2024-10-15 14:00:39.741926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:26.137 [2024-10-15 14:00:39.765997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:26.137 [2024-10-15 14:00:39.766038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:26:26.137 [2024-10-15 14:00:39.766055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.997 ms 00:26:26.137 [2024-10-15 14:00:39.766063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:26.137 [2024-10-15 14:00:39.790328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:26.137 [2024-10-15 14:00:39.790366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:26:26.137 [2024-10-15 14:00:39.790379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.222 ms 00:26:26.137 [2024-10-15 14:00:39.790388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:26.137 [2024-10-15 14:00:39.813172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:26.137 [2024-10-15 14:00:39.813339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:26:26.137 [2024-10-15 14:00:39.813360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.744 ms 00:26:26.137 [2024-10-15 14:00:39.813369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:26.137 [2024-10-15 14:00:39.813409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:26.137 [2024-10-15 14:00:39.813419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:26:26.137 [2024-10-15 14:00:39.813431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:26:26.137 [2024-10-15 14:00:39.813439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:26.137 [2024-10-15 14:00:39.813518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:26.137 [2024-10-15 14:00:39.813527] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:26:26.137 [2024-10-15 14:00:39.813537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:26:26.137 [2024-10-15 14:00:39.813545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:26.137 [2024-10-15 14:00:39.814416] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2681.216 ms, result 0 00:26:26.137 { 00:26:26.137 "name": "ftl0", 00:26:26.137 "uuid": "9a01225a-bd24-4d03-b10f-ba453d044a70" 00:26:26.137 } 00:26:26.137 14:00:39 ftl.ftl_restore_fast -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:26:26.137 14:00:39 ftl.ftl_restore_fast -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:26:26.396 14:00:40 ftl.ftl_restore_fast -- ftl/restore.sh@63 -- # echo ']}' 00:26:26.396 14:00:40 ftl.ftl_restore_fast -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:26:26.655 [2024-10-15 14:00:40.218027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:26.655 [2024-10-15 14:00:40.218084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:26:26.655 [2024-10-15 14:00:40.218097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:26:26.655 [2024-10-15 14:00:40.218114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:26.655 [2024-10-15 14:00:40.218138] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:26.655 [2024-10-15 14:00:40.220759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:26.655 [2024-10-15 14:00:40.220804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:26:26.655 [2024-10-15 14:00:40.220819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.602 ms 00:26:26.655 [2024-10-15 14:00:40.220828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:26.655 [2024-10-15 14:00:40.221088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:26.655 [2024-10-15 14:00:40.221102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:26:26.655 [2024-10-15 14:00:40.221112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.232 ms 00:26:26.655 [2024-10-15 14:00:40.221119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:26.655 [2024-10-15 14:00:40.224381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:26.655 [2024-10-15 14:00:40.224403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:26:26.655 [2024-10-15 14:00:40.224413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.242 ms 00:26:26.655 [2024-10-15 14:00:40.224421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:26.655 [2024-10-15 14:00:40.230629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:26.655 [2024-10-15 14:00:40.230656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:26:26.655 [2024-10-15 14:00:40.230667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.189 ms 00:26:26.655 [2024-10-15 14:00:40.230675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:26.655 [2024-10-15 14:00:40.253959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:26.655 
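[annotation] The restore.sh steps traced above (@61 through @63) sandwich the save_subsystem_config RPC output between two echo calls to produce a complete {"subsystems": [...]} JSON document, and @65 then unloads ftl0; spdk_dd is later pointed at the same JSON (the --json=.../test/ftl/config/ftl.json invocation below). A hedged reconstruction of what those three traced commands amount to; the redirection target is an assumption inferred from that later --json path:

# Capture the bdev subsystem config of the running target inside a JSON
# envelope, so spdk_dd can re-create ftl0 from file after the target exits.
{
  echo '{"subsystems": ['
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev
  echo ']}'
} > /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json   # assumed destination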
[2024-10-15 14:00:40.254113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:26:26.655 [2024-10-15 14:00:40.254134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.211 ms 00:26:26.655 [2024-10-15 14:00:40.254142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:26.655 [2024-10-15 14:00:40.268872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:26.655 [2024-10-15 14:00:40.269007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:26:26.655 [2024-10-15 14:00:40.269030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.687 ms 00:26:26.655 [2024-10-15 14:00:40.269038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:26.655 [2024-10-15 14:00:40.269186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:26.655 [2024-10-15 14:00:40.269197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:26:26.655 [2024-10-15 14:00:40.269208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.111 ms 00:26:26.655 [2024-10-15 14:00:40.269215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:26.655 [2024-10-15 14:00:40.292160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:26.655 [2024-10-15 14:00:40.292300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:26:26.655 [2024-10-15 14:00:40.292320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.905 ms 00:26:26.655 [2024-10-15 14:00:40.292327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:26.655 [2024-10-15 14:00:40.314621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:26.655 [2024-10-15 14:00:40.314658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:26:26.655 [2024-10-15 14:00:40.314672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.254 ms 00:26:26.655 [2024-10-15 14:00:40.314680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:26.655 [2024-10-15 14:00:40.337113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:26.655 [2024-10-15 14:00:40.337155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:26:26.655 [2024-10-15 14:00:40.337169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.386 ms 00:26:26.655 [2024-10-15 14:00:40.337177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:26.655 [2024-10-15 14:00:40.359361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:26.655 [2024-10-15 14:00:40.359518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:26:26.655 [2024-10-15 14:00:40.359537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.080 ms 00:26:26.655 [2024-10-15 14:00:40.359544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:26.655 [2024-10-15 14:00:40.359589] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:26:26.655 [2024-10-15 14:00:40.359604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:26:26.655 [2024-10-15 14:00:40.359615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:26:26.655 [2024-10-15 14:00:40.359623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:26:26.655 [2024-10-15 14:00:40.359633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:26.655 [2024-10-15 14:00:40.359641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:26.655 [2024-10-15 14:00:40.359650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:26.655 [2024-10-15 14:00:40.359657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:26.655 [2024-10-15 14:00:40.359669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:26.655 [2024-10-15 14:00:40.359677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:26.655 [2024-10-15 14:00:40.359686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:26.655 [2024-10-15 14:00:40.359694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:26.655 [2024-10-15 14:00:40.359703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:26.655 [2024-10-15 14:00:40.359710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:26.655 [2024-10-15 14:00:40.359720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:26.655 [2024-10-15 14:00:40.359727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:26.655 [2024-10-15 14:00:40.359737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:26.655 [2024-10-15 14:00:40.359744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:26.655 [2024-10-15 14:00:40.359753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:26.655 [2024-10-15 14:00:40.359760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:26:26.655 [2024-10-15 14:00:40.359769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:26:26.655 [2024-10-15 14:00:40.359777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:26:26.655 [2024-10-15 14:00:40.359787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:26:26.655 [2024-10-15 14:00:40.359795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:26:26.655 [2024-10-15 14:00:40.359805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:26:26.655 [2024-10-15 14:00:40.359812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:26:26.655 [2024-10-15 14:00:40.359822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:26:26.655 [2024-10-15 14:00:40.359829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:26:26.655 [2024-10-15 14:00:40.359838] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:26:26.655 [2024-10-15 14:00:40.359846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:26:26.655 [2024-10-15 14:00:40.359855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:26:26.655 [2024-10-15 14:00:40.359863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:26:26.655 [2024-10-15 14:00:40.359872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:26:26.655 [2024-10-15 14:00:40.359880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:26:26.655 [2024-10-15 14:00:40.359888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:26:26.655 [2024-10-15 14:00:40.359896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:26:26.655 [2024-10-15 14:00:40.359905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:26:26.655 [2024-10-15 14:00:40.359912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:26:26.655 [2024-10-15 14:00:40.359921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:26:26.655 [2024-10-15 14:00:40.359928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:26:26.655 [2024-10-15 14:00:40.359938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:26:26.655 [2024-10-15 14:00:40.359945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:26:26.655 [2024-10-15 14:00:40.359954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:26:26.655 [2024-10-15 14:00:40.359961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:26:26.655 [2024-10-15 14:00:40.359970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:26:26.655 [2024-10-15 14:00:40.359977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:26:26.655 [2024-10-15 14:00:40.359987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:26:26.655 [2024-10-15 14:00:40.359994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:26:26.655 [2024-10-15 14:00:40.360004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:26:26.655 [2024-10-15 14:00:40.360011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:26:26.655 [2024-10-15 14:00:40.360020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:26:26.655 [2024-10-15 14:00:40.360027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:26:26.655 [2024-10-15 14:00:40.360036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:26:26.655 [2024-10-15 
14:00:40.360044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:26:26.655 [2024-10-15 14:00:40.360053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:26:26.655 [2024-10-15 14:00:40.360060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:26:26.655 [2024-10-15 14:00:40.360070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:26:26.655 [2024-10-15 14:00:40.360078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:26:26.655 [2024-10-15 14:00:40.360094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:26:26.655 [2024-10-15 14:00:40.360101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:26:26.655 [2024-10-15 14:00:40.360110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:26:26.655 [2024-10-15 14:00:40.360119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:26:26.655 [2024-10-15 14:00:40.360130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:26:26.655 [2024-10-15 14:00:40.360137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:26:26.655 [2024-10-15 14:00:40.360146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:26:26.655 [2024-10-15 14:00:40.360153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:26:26.655 [2024-10-15 14:00:40.360162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:26:26.655 [2024-10-15 14:00:40.360169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:26:26.655 [2024-10-15 14:00:40.360179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:26:26.655 [2024-10-15 14:00:40.360186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:26:26.656 [2024-10-15 14:00:40.360195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:26:26.656 [2024-10-15 14:00:40.360202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:26:26.656 [2024-10-15 14:00:40.360214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:26:26.656 [2024-10-15 14:00:40.360239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:26:26.656 [2024-10-15 14:00:40.360249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:26:26.656 [2024-10-15 14:00:40.360256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:26:26.656 [2024-10-15 14:00:40.360266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:26:26.656 [2024-10-15 14:00:40.360273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 
00:26:26.656 [2024-10-15 14:00:40.360283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:26:26.656 [2024-10-15 14:00:40.360290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:26:26.656 [2024-10-15 14:00:40.360298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:26:26.656 [2024-10-15 14:00:40.360306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:26:26.656 [2024-10-15 14:00:40.360315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:26:26.656 [2024-10-15 14:00:40.360323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:26:26.656 [2024-10-15 14:00:40.360332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:26:26.656 [2024-10-15 14:00:40.360345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:26:26.656 [2024-10-15 14:00:40.360354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:26:26.656 [2024-10-15 14:00:40.360362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:26:26.656 [2024-10-15 14:00:40.360372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:26:26.656 [2024-10-15 14:00:40.360379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:26:26.656 [2024-10-15 14:00:40.360387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:26:26.656 [2024-10-15 14:00:40.360394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:26:26.656 [2024-10-15 14:00:40.360403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:26:26.656 [2024-10-15 14:00:40.360410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:26:26.656 [2024-10-15 14:00:40.360420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:26:26.656 [2024-10-15 14:00:40.360427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:26:26.656 [2024-10-15 14:00:40.360436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:26:26.656 [2024-10-15 14:00:40.360443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:26:26.656 [2024-10-15 14:00:40.360452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:26:26.656 [2024-10-15 14:00:40.360459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:26:26.656 [2024-10-15 14:00:40.360470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:26:26.656 [2024-10-15 14:00:40.360486] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:26:26.656 [2024-10-15 14:00:40.360496] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 9a01225a-bd24-4d03-b10f-ba453d044a70 00:26:26.656 
[2024-10-15 14:00:40.360503] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:26:26.656 [2024-10-15 14:00:40.360514] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:26:26.656 [2024-10-15 14:00:40.360523] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:26:26.656 [2024-10-15 14:00:40.360532] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:26:26.656 [2024-10-15 14:00:40.360539] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:26:26.656 [2024-10-15 14:00:40.360550] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:26:26.656 [2024-10-15 14:00:40.360557] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:26.656 [2024-10-15 14:00:40.360564] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:26.656 [2024-10-15 14:00:40.360571] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:26:26.656 [2024-10-15 14:00:40.360579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:26.656 [2024-10-15 14:00:40.360587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:26:26.656 [2024-10-15 14:00:40.360596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.999 ms 00:26:26.656 [2024-10-15 14:00:40.360603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:26.656 [2024-10-15 14:00:40.373019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:26.656 [2024-10-15 14:00:40.373056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:26:26.656 [2024-10-15 14:00:40.373069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.365 ms 00:26:26.656 [2024-10-15 14:00:40.373077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:26.656 [2024-10-15 14:00:40.373461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:26.656 [2024-10-15 14:00:40.373471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:26.656 [2024-10-15 14:00:40.373481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.342 ms 00:26:26.656 [2024-10-15 14:00:40.373489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:26.656 [2024-10-15 14:00:40.414747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:26.656 [2024-10-15 14:00:40.414959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:26.656 [2024-10-15 14:00:40.414980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:26.656 [2024-10-15 14:00:40.414988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:26.656 [2024-10-15 14:00:40.415059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:26.656 [2024-10-15 14:00:40.415067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:26.656 [2024-10-15 14:00:40.415077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:26.656 [2024-10-15 14:00:40.415085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:26.656 [2024-10-15 14:00:40.415171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:26.656 [2024-10-15 14:00:40.415180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:26.656 [2024-10-15 14:00:40.415189] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:26.656 [2024-10-15 14:00:40.415197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:26.656 [2024-10-15 14:00:40.415217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:26.656 [2024-10-15 14:00:40.415245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:26.656 [2024-10-15 14:00:40.415254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:26.656 [2024-10-15 14:00:40.415262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:26.916 [2024-10-15 14:00:40.491197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:26.916 [2024-10-15 14:00:40.491270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:26.916 [2024-10-15 14:00:40.491285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:26.916 [2024-10-15 14:00:40.491293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:26.916 [2024-10-15 14:00:40.553644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:26.916 [2024-10-15 14:00:40.553846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:26.916 [2024-10-15 14:00:40.553866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:26.916 [2024-10-15 14:00:40.553874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:26.916 [2024-10-15 14:00:40.553965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:26.916 [2024-10-15 14:00:40.553977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:26.916 [2024-10-15 14:00:40.553987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:26.916 [2024-10-15 14:00:40.553994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:26.916 [2024-10-15 14:00:40.554040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:26.916 [2024-10-15 14:00:40.554049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:26.916 [2024-10-15 14:00:40.554059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:26.916 [2024-10-15 14:00:40.554066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:26.916 [2024-10-15 14:00:40.554156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:26.916 [2024-10-15 14:00:40.554166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:26.916 [2024-10-15 14:00:40.554178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:26.916 [2024-10-15 14:00:40.554185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:26.916 [2024-10-15 14:00:40.554216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:26.916 [2024-10-15 14:00:40.554250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:26:26.916 [2024-10-15 14:00:40.554261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:26.916 [2024-10-15 14:00:40.554268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:26.916 [2024-10-15 14:00:40.554303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:26.916 [2024-10-15 14:00:40.554312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open 
cache bdev 00:26:26.916 [2024-10-15 14:00:40.554321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:26.916 [2024-10-15 14:00:40.554330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:26.916 [2024-10-15 14:00:40.554372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:26.916 [2024-10-15 14:00:40.554381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:26.916 [2024-10-15 14:00:40.554390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:26.916 [2024-10-15 14:00:40.554397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:26.916 [2024-10-15 14:00:40.554520] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 336.464 ms, result 0 00:26:26.916 true 00:26:26.916 14:00:40 ftl.ftl_restore_fast -- ftl/restore.sh@66 -- # killprocess 79272 00:26:26.917 14:00:40 ftl.ftl_restore_fast -- common/autotest_common.sh@950 -- # '[' -z 79272 ']' 00:26:26.917 14:00:40 ftl.ftl_restore_fast -- common/autotest_common.sh@954 -- # kill -0 79272 00:26:26.917 14:00:40 ftl.ftl_restore_fast -- common/autotest_common.sh@955 -- # uname 00:26:26.917 14:00:40 ftl.ftl_restore_fast -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:26.917 14:00:40 ftl.ftl_restore_fast -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79272 00:26:26.917 killing process with pid 79272 00:26:26.917 14:00:40 ftl.ftl_restore_fast -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:26.917 14:00:40 ftl.ftl_restore_fast -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:26.917 14:00:40 ftl.ftl_restore_fast -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79272' 00:26:26.917 14:00:40 ftl.ftl_restore_fast -- common/autotest_common.sh@969 -- # kill 79272 00:26:26.917 14:00:40 ftl.ftl_restore_fast -- common/autotest_common.sh@974 -- # wait 79272 00:26:35.067 14:00:48 ftl.ftl_restore_fast -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:26:38.349 262144+0 records in 00:26:38.349 262144+0 records out 00:26:38.349 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 3.62509 s, 296 MB/s 00:26:38.349 14:00:51 ftl.ftl_restore_fast -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:26:40.246 14:00:53 ftl.ftl_restore_fast -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:26:40.246 [2024-10-15 14:00:53.723989] Starting SPDK v25.01-pre git sha1 5a8c76d99 / DPDK 24.03.0 initialization... 
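[annotation] Before following spdk_dd's startup below, the dd transfer above is worth a quick sanity check: bs=4K count=256K is 2^12 * 2^18 = 2^30 bytes, and dd reports decimal megabytes per second, so both printed figures can be reproduced exactly:

# 262144 blocks of 4096 bytes = 1073741824 bytes (1.0 GiB), as dd printed.
echo $(( 4096 * 262144 ))
# 1073741824 B / 3.62509 s in decimal MB/s rounds to dd's 296 MB/s.
awk 'BEGIN { printf "%.0f MB/s\n", 1073741824 / 3.62509 / 1e6 }'

Note that the 296 MB/s measures /dev/urandom generation plus host filesystem writes, not FTL throughput; the write to ftl0 happens in the spdk_dd step whose startup begins here.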
00:26:40.246 [2024-10-15 14:00:53.724088] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79486 ] 00:26:40.246 [2024-10-15 14:00:53.868953] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:40.246 [2024-10-15 14:00:53.983508] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:40.505 [2024-10-15 14:00:54.256865] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:40.505 [2024-10-15 14:00:54.256935] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:40.764 [2024-10-15 14:00:54.410958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:40.764 [2024-10-15 14:00:54.411009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:26:40.764 [2024-10-15 14:00:54.411024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:26:40.764 [2024-10-15 14:00:54.411039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:40.764 [2024-10-15 14:00:54.411087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:40.764 [2024-10-15 14:00:54.411098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:40.764 [2024-10-15 14:00:54.411107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:26:40.764 [2024-10-15 14:00:54.411117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:40.764 [2024-10-15 14:00:54.411136] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:26:40.764 [2024-10-15 14:00:54.411846] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:26:40.764 [2024-10-15 14:00:54.411870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:40.764 [2024-10-15 14:00:54.411880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:40.764 [2024-10-15 14:00:54.411889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.738 ms 00:26:40.764 [2024-10-15 14:00:54.411897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:40.764 [2024-10-15 14:00:54.413355] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:26:40.764 [2024-10-15 14:00:54.425972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:40.764 [2024-10-15 14:00:54.426007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:26:40.765 [2024-10-15 14:00:54.426021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.619 ms 00:26:40.765 [2024-10-15 14:00:54.426030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:40.765 [2024-10-15 14:00:54.426091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:40.765 [2024-10-15 14:00:54.426101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:26:40.765 [2024-10-15 14:00:54.426113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:26:40.765 [2024-10-15 14:00:54.426121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:40.765 [2024-10-15 14:00:54.432715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
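[annotation] The layout dump that follows (a repeat of the one printed during the first startup) describes every region twice: in MiB in the human-readable part, and as hex blk_offs/blk_sz pairs in the SB metadata part. The two agree if one block is 4 KiB, which the dump itself implies (the 0x20-block sb region is reported as 0.12 MiB). Two quick cross-checks in shell arithmetic, assuming region type 0x2 is l2p:

# blk_sz 0x5000 = 20480 blocks of 4 KiB -> 80 MiB,
# matching "Region l2p ... blocks: 80.00 MiB" in the dump.
echo $(( 0x5000 * 4096 / 1048576 ))
# Independently: 20971520 L2P entries * 4-byte addresses -> 80 MiB as well,
# i.e. the l2p region holds exactly one 4-byte entry per mapped block.
echo $(( 20971520 * 4 / 1048576 ))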
00:26:40.765 [2024-10-15 14:00:54.432888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:40.765 [2024-10-15 14:00:54.432905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.530 ms 00:26:40.765 [2024-10-15 14:00:54.432913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:40.765 [2024-10-15 14:00:54.432992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:40.765 [2024-10-15 14:00:54.433002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:40.765 [2024-10-15 14:00:54.433012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:26:40.765 [2024-10-15 14:00:54.433020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:40.765 [2024-10-15 14:00:54.433068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:40.765 [2024-10-15 14:00:54.433079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:26:40.765 [2024-10-15 14:00:54.433087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:26:40.765 [2024-10-15 14:00:54.433095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:40.765 [2024-10-15 14:00:54.433121] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:40.765 [2024-10-15 14:00:54.436882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:40.765 [2024-10-15 14:00:54.436910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:40.765 [2024-10-15 14:00:54.436920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.768 ms 00:26:40.765 [2024-10-15 14:00:54.436928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:40.765 [2024-10-15 14:00:54.436961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:40.765 [2024-10-15 14:00:54.436970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:26:40.765 [2024-10-15 14:00:54.436979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:26:40.765 [2024-10-15 14:00:54.436987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:40.765 [2024-10-15 14:00:54.437025] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:26:40.765 [2024-10-15 14:00:54.437046] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:26:40.765 [2024-10-15 14:00:54.437082] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:26:40.765 [2024-10-15 14:00:54.437099] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:26:40.765 [2024-10-15 14:00:54.437205] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:26:40.765 [2024-10-15 14:00:54.437216] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:26:40.765 [2024-10-15 14:00:54.437243] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:26:40.765 [2024-10-15 14:00:54.437253] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:26:40.765 [2024-10-15 14:00:54.437263] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:26:40.765 [2024-10-15 14:00:54.437271] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:26:40.765 [2024-10-15 14:00:54.437279] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:26:40.765 [2024-10-15 14:00:54.437287] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:26:40.765 [2024-10-15 14:00:54.437294] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:26:40.765 [2024-10-15 14:00:54.437302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:40.765 [2024-10-15 14:00:54.437313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:26:40.765 [2024-10-15 14:00:54.437321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.281 ms 00:26:40.765 [2024-10-15 14:00:54.437328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:40.765 [2024-10-15 14:00:54.437414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:40.765 [2024-10-15 14:00:54.437423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:26:40.765 [2024-10-15 14:00:54.437430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:26:40.765 [2024-10-15 14:00:54.437437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:40.765 [2024-10-15 14:00:54.437541] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:26:40.765 [2024-10-15 14:00:54.437553] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:26:40.765 [2024-10-15 14:00:54.437564] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:40.765 [2024-10-15 14:00:54.437571] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:40.765 [2024-10-15 14:00:54.437579] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:26:40.765 [2024-10-15 14:00:54.437586] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:26:40.765 [2024-10-15 14:00:54.437593] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:26:40.765 [2024-10-15 14:00:54.437600] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:26:40.765 [2024-10-15 14:00:54.437607] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:26:40.765 [2024-10-15 14:00:54.437615] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:40.765 [2024-10-15 14:00:54.437621] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:26:40.765 [2024-10-15 14:00:54.437628] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:26:40.765 [2024-10-15 14:00:54.437636] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:40.765 [2024-10-15 14:00:54.437643] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:26:40.765 [2024-10-15 14:00:54.437651] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:26:40.765 [2024-10-15 14:00:54.437663] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:40.765 [2024-10-15 14:00:54.437670] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:26:40.765 [2024-10-15 14:00:54.437677] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:26:40.765 [2024-10-15 14:00:54.437683] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:40.765 [2024-10-15 14:00:54.437689] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:26:40.765 [2024-10-15 14:00:54.437696] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:26:40.765 [2024-10-15 14:00:54.437703] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:40.765 [2024-10-15 14:00:54.437710] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:26:40.765 [2024-10-15 14:00:54.437717] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:26:40.765 [2024-10-15 14:00:54.437723] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:40.765 [2024-10-15 14:00:54.437729] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:26:40.765 [2024-10-15 14:00:54.437736] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:26:40.765 [2024-10-15 14:00:54.437742] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:40.765 [2024-10-15 14:00:54.437748] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:26:40.765 [2024-10-15 14:00:54.437755] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:26:40.765 [2024-10-15 14:00:54.437762] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:40.765 [2024-10-15 14:00:54.437769] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:26:40.765 [2024-10-15 14:00:54.437775] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:26:40.765 [2024-10-15 14:00:54.437781] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:40.765 [2024-10-15 14:00:54.437787] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:26:40.765 [2024-10-15 14:00:54.437793] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:26:40.765 [2024-10-15 14:00:54.437799] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:40.765 [2024-10-15 14:00:54.437806] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:26:40.765 [2024-10-15 14:00:54.437812] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:26:40.765 [2024-10-15 14:00:54.437818] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:40.765 [2024-10-15 14:00:54.437825] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:26:40.765 [2024-10-15 14:00:54.437831] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:26:40.765 [2024-10-15 14:00:54.437838] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:40.765 [2024-10-15 14:00:54.437844] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:26:40.765 [2024-10-15 14:00:54.437852] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:26:40.765 [2024-10-15 14:00:54.437859] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:40.765 [2024-10-15 14:00:54.437866] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:40.765 [2024-10-15 14:00:54.437874] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:26:40.765 [2024-10-15 14:00:54.437881] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:26:40.765 [2024-10-15 14:00:54.437888] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:26:40.765 
[2024-10-15 14:00:54.437895] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:26:40.765 [2024-10-15 14:00:54.437901] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:26:40.765 [2024-10-15 14:00:54.437908] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:26:40.765 [2024-10-15 14:00:54.437917] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:26:40.765 [2024-10-15 14:00:54.437925] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:40.765 [2024-10-15 14:00:54.437934] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:26:40.765 [2024-10-15 14:00:54.437940] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:26:40.765 [2024-10-15 14:00:54.437947] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:26:40.765 [2024-10-15 14:00:54.437954] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:26:40.765 [2024-10-15 14:00:54.437961] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:26:40.766 [2024-10-15 14:00:54.437969] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:26:40.766 [2024-10-15 14:00:54.437977] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:26:40.766 [2024-10-15 14:00:54.437985] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:26:40.766 [2024-10-15 14:00:54.437992] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:26:40.766 [2024-10-15 14:00:54.437999] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:26:40.766 [2024-10-15 14:00:54.438006] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:26:40.766 [2024-10-15 14:00:54.438012] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:26:40.766 [2024-10-15 14:00:54.438019] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:26:40.766 [2024-10-15 14:00:54.438026] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:26:40.766 [2024-10-15 14:00:54.438034] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:26:40.766 [2024-10-15 14:00:54.438042] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:40.766 [2024-10-15 14:00:54.438052] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:26:40.766 [2024-10-15 14:00:54.438058] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:26:40.766 [2024-10-15 14:00:54.438065] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:26:40.766 [2024-10-15 14:00:54.438072] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:26:40.766 [2024-10-15 14:00:54.438079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:40.766 [2024-10-15 14:00:54.438088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:26:40.766 [2024-10-15 14:00:54.438095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.608 ms 00:26:40.766 [2024-10-15 14:00:54.438104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:40.766 [2024-10-15 14:00:54.466833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:40.766 [2024-10-15 14:00:54.466871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:40.766 [2024-10-15 14:00:54.466883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.686 ms 00:26:40.766 [2024-10-15 14:00:54.466892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:40.766 [2024-10-15 14:00:54.466987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:40.766 [2024-10-15 14:00:54.467000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:26:40.766 [2024-10-15 14:00:54.467009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:26:40.766 [2024-10-15 14:00:54.467017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:40.766 [2024-10-15 14:00:54.516922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:40.766 [2024-10-15 14:00:54.516976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:40.766 [2024-10-15 14:00:54.516990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.844 ms 00:26:40.766 [2024-10-15 14:00:54.516999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:40.766 [2024-10-15 14:00:54.517055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:40.766 [2024-10-15 14:00:54.517065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:40.766 [2024-10-15 14:00:54.517074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:26:40.766 [2024-10-15 14:00:54.517082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:40.766 [2024-10-15 14:00:54.517482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:40.766 [2024-10-15 14:00:54.517504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:40.766 [2024-10-15 14:00:54.517513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.325 ms 00:26:40.766 [2024-10-15 14:00:54.517520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:40.766 [2024-10-15 14:00:54.517647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:40.766 [2024-10-15 14:00:54.517656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:40.766 [2024-10-15 14:00:54.517665] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.104 ms 00:26:40.766 [2024-10-15 14:00:54.517672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:40.766 [2024-10-15 14:00:54.530571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:40.766 [2024-10-15 14:00:54.530742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:40.766 [2024-10-15 14:00:54.530761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.877 ms 00:26:40.766 [2024-10-15 14:00:54.530774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:40.766 [2024-10-15 14:00:54.543193] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:26:40.766 [2024-10-15 14:00:54.543246] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:26:40.766 [2024-10-15 14:00:54.543259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:40.766 [2024-10-15 14:00:54.543267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:26:40.766 [2024-10-15 14:00:54.543277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.378 ms 00:26:40.766 [2024-10-15 14:00:54.543284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.025 [2024-10-15 14:00:54.568010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.025 [2024-10-15 14:00:54.568065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:26:41.025 [2024-10-15 14:00:54.568078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.682 ms 00:26:41.025 [2024-10-15 14:00:54.568094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.025 [2024-10-15 14:00:54.579895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.025 [2024-10-15 14:00:54.579945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:26:41.025 [2024-10-15 14:00:54.579957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.733 ms 00:26:41.025 [2024-10-15 14:00:54.579964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.025 [2024-10-15 14:00:54.591379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.025 [2024-10-15 14:00:54.591418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:26:41.025 [2024-10-15 14:00:54.591429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.373 ms 00:26:41.025 [2024-10-15 14:00:54.591437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.025 [2024-10-15 14:00:54.592073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.025 [2024-10-15 14:00:54.592098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:26:41.025 [2024-10-15 14:00:54.592108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.530 ms 00:26:41.025 [2024-10-15 14:00:54.592115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.025 [2024-10-15 14:00:54.646718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.025 [2024-10-15 14:00:54.646777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:26:41.025 [2024-10-15 14:00:54.646789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 54.584 ms 00:26:41.025 [2024-10-15 14:00:54.646798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.025 [2024-10-15 14:00:54.657426] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:26:41.025 [2024-10-15 14:00:54.660094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.025 [2024-10-15 14:00:54.660128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:26:41.025 [2024-10-15 14:00:54.660141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.241 ms 00:26:41.025 [2024-10-15 14:00:54.660150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.025 [2024-10-15 14:00:54.660263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.025 [2024-10-15 14:00:54.660274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:26:41.025 [2024-10-15 14:00:54.660283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:26:41.025 [2024-10-15 14:00:54.660290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.025 [2024-10-15 14:00:54.660354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.025 [2024-10-15 14:00:54.660367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:26:41.025 [2024-10-15 14:00:54.660375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:26:41.025 [2024-10-15 14:00:54.660383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.025 [2024-10-15 14:00:54.660400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.025 [2024-10-15 14:00:54.660408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:26:41.025 [2024-10-15 14:00:54.660416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:41.025 [2024-10-15 14:00:54.660423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.025 [2024-10-15 14:00:54.660453] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:26:41.025 [2024-10-15 14:00:54.660463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.025 [2024-10-15 14:00:54.660470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:26:41.025 [2024-10-15 14:00:54.660480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:26:41.025 [2024-10-15 14:00:54.660487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.025 [2024-10-15 14:00:54.683324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.025 [2024-10-15 14:00:54.683369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:26:41.025 [2024-10-15 14:00:54.683381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.820 ms 00:26:41.025 [2024-10-15 14:00:54.683389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.025 [2024-10-15 14:00:54.683465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.025 [2024-10-15 14:00:54.683476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:26:41.025 [2024-10-15 14:00:54.683485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:26:41.025 [2024-10-15 14:00:54.683492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
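The trace_step records above follow a fixed Action / name / duration / status quadruple per management step, so a per-step timing breakdown of the FTL startup can be recovered mechanically from the console log. A minimal sketch of such a parser, assuming the raw log keeps one record per line and taking the log path as a command-line argument (the script and its names are illustrative, not part of the test suite):

#!/usr/bin/env python3
# Summarize FTL management-step timings from an SPDK autotest console log.
# Assumes one record per line and that each "name:" trace_step record is
# followed by the matching "duration:" record for the same device, as in
# the output above.
import re
import sys
from collections import defaultdict

NAME_RE = re.compile(r"trace_step: \*NOTICE\*: \[FTL\]\[(\w+)\] name: (.+)")
DUR_RE = re.compile(r"trace_step: \*NOTICE\*: \[FTL\]\[(\w+)\] duration: ([0-9.]+) ms")

def summarize(path):
    pending = {}                 # device -> step name awaiting its duration
    totals = defaultdict(float)  # (device, step name) -> accumulated ms
    with open(path, encoding="utf-8", errors="replace") as log:
        for line in log:
            m = NAME_RE.search(line)
            if m:
                pending[m.group(1)] = m.group(2).strip()
                continue
            m = DUR_RE.search(line)
            if m and m.group(1) in pending:
                totals[(m.group(1), pending.pop(m.group(1)))] += float(m.group(2))
    return totals

if __name__ == "__main__":
    for (dev, step), ms in sorted(summarize(sys.argv[1]).items(),
                                  key=lambda kv: -kv[1]):
        print(f"{dev}  {ms:9.3f} ms  {step}")

On this run it would rank 'Restore P2L checkpoints' (54.584 ms) and 'Initialize NV cache' (49.844 ms) as the largest contributors to the 273.051 ms 'FTL startup' total reported just below.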
00:26:41.025 [2024-10-15 14:00:54.684435] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 273.051 ms, result 0 00:26:41.958  [2024-10-15T14:00:57.119Z] Copying: 45/1024 [MB] (45 MBps) [2024-10-15T14:00:58.053Z] Copying: 90/1024 [MB] (45 MBps) [2024-10-15T14:00:58.987Z] Copying: 135/1024 [MB] (45 MBps) [2024-10-15T14:00:59.921Z] Copying: 182/1024 [MB] (46 MBps) [2024-10-15T14:01:00.855Z] Copying: 226/1024 [MB] (44 MBps) [2024-10-15T14:01:01.789Z] Copying: 271/1024 [MB] (44 MBps) [2024-10-15T14:01:02.723Z] Copying: 317/1024 [MB] (45 MBps) [2024-10-15T14:01:04.095Z] Copying: 364/1024 [MB] (46 MBps) [2024-10-15T14:01:05.029Z] Copying: 415/1024 [MB] (51 MBps) [2024-10-15T14:01:05.963Z] Copying: 460/1024 [MB] (45 MBps) [2024-10-15T14:01:06.898Z] Copying: 506/1024 [MB] (45 MBps) [2024-10-15T14:01:07.850Z] Copying: 550/1024 [MB] (43 MBps) [2024-10-15T14:01:08.783Z] Copying: 592/1024 [MB] (42 MBps) [2024-10-15T14:01:09.717Z] Copying: 636/1024 [MB] (43 MBps) [2024-10-15T14:01:11.090Z] Copying: 681/1024 [MB] (45 MBps) [2024-10-15T14:01:12.024Z] Copying: 726/1024 [MB] (44 MBps) [2024-10-15T14:01:12.957Z] Copying: 771/1024 [MB] (45 MBps) [2024-10-15T14:01:13.891Z] Copying: 814/1024 [MB] (43 MBps) [2024-10-15T14:01:14.827Z] Copying: 859/1024 [MB] (45 MBps) [2024-10-15T14:01:15.760Z] Copying: 905/1024 [MB] (45 MBps) [2024-10-15T14:01:17.134Z] Copying: 951/1024 [MB] (45 MBps) [2024-10-15T14:01:17.392Z] Copying: 995/1024 [MB] (43 MBps) [2024-10-15T14:01:17.392Z] Copying: 1024/1024 [MB] (average 45 MBps)[2024-10-15 14:01:17.366894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:03.604 [2024-10-15 14:01:17.366938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:27:03.604 [2024-10-15 14:01:17.366952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:27:03.604 [2024-10-15 14:01:17.366960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:03.604 [2024-10-15 14:01:17.366980] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:03.604 [2024-10-15 14:01:17.369579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:03.604 [2024-10-15 14:01:17.369607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:27:03.604 [2024-10-15 14:01:17.369618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.584 ms 00:27:03.604 [2024-10-15 14:01:17.369627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:03.604 [2024-10-15 14:01:17.370814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:03.604 [2024-10-15 14:01:17.370845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:03.604 [2024-10-15 14:01:17.370854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.168 ms 00:27:03.604 [2024-10-15 14:01:17.370861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:03.604 [2024-10-15 14:01:17.370886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:03.604 [2024-10-15 14:01:17.370894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:27:03.604 [2024-10-15 14:01:17.370902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:27:03.604 [2024-10-15 14:01:17.370909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:03.604 [2024-10-15 
14:01:17.370951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:03.604 [2024-10-15 14:01:17.370959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:27:03.604 [2024-10-15 14:01:17.370969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:27:03.604 [2024-10-15 14:01:17.370976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:03.604 [2024-10-15 14:01:17.370988] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:27:03.604 [2024-10-15 14:01:17.370999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:27:03.604 [2024-10-15 14:01:17.371008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:27:03.604 [2024-10-15 14:01:17.371016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:27:03.604 [2024-10-15 14:01:17.371023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:03.604 [2024-10-15 14:01:17.371030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:03.604 [2024-10-15 14:01:17.371038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:03.604 [2024-10-15 14:01:17.371045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:03.604 [2024-10-15 14:01:17.371052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:03.604 [2024-10-15 14:01:17.371060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:03.604 [2024-10-15 14:01:17.371067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:03.604 [2024-10-15 14:01:17.371074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:03.604 [2024-10-15 14:01:17.371082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:03.604 [2024-10-15 14:01:17.371089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:03.604 [2024-10-15 14:01:17.371096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:03.604 [2024-10-15 14:01:17.371103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:03.604 [2024-10-15 14:01:17.371110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:03.604 [2024-10-15 14:01:17.371118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:03.604 [2024-10-15 14:01:17.371125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:03.604 [2024-10-15 14:01:17.371132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:27:03.604 [2024-10-15 14:01:17.371139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:27:03.604 [2024-10-15 14:01:17.371146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:27:03.604 
[2024-10-15 14:01:17.371153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:27:03.604 [2024-10-15 14:01:17.371160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:27:03.604 [2024-10-15 14:01:17.371167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:27:03.604 [2024-10-15 14:01:17.371175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:27:03.604 [2024-10-15 14:01:17.371182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:27:03.604 [2024-10-15 14:01:17.371189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:27:03.604 [2024-10-15 14:01:17.371196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:27:03.604 [2024-10-15 14:01:17.371203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:27:03.604 [2024-10-15 14:01:17.371210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:27:03.604 [2024-10-15 14:01:17.371217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:27:03.604 [2024-10-15 14:01:17.371242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:27:03.604 [2024-10-15 14:01:17.371250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:27:03.604 [2024-10-15 14:01:17.371257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:27:03.604 [2024-10-15 14:01:17.371264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:27:03.604 [2024-10-15 14:01:17.371271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:27:03.604 [2024-10-15 14:01:17.371278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:27:03.604 [2024-10-15 14:01:17.371286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:27:03.605 [2024-10-15 14:01:17.371293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:27:03.605 [2024-10-15 14:01:17.371300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:27:03.605 [2024-10-15 14:01:17.371307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:27:03.605 [2024-10-15 14:01:17.371314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:27:03.605 [2024-10-15 14:01:17.371321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:27:03.605 [2024-10-15 14:01:17.371328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:27:03.605 [2024-10-15 14:01:17.371335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:27:03.605 [2024-10-15 14:01:17.371342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 
state: free 00:27:03.605 [2024-10-15 14:01:17.371350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:27:03.605 [2024-10-15 14:01:17.371358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:27:03.605 [2024-10-15 14:01:17.371371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:27:03.605 [2024-10-15 14:01:17.371378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:27:03.605 [2024-10-15 14:01:17.371385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:27:03.605 [2024-10-15 14:01:17.371392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:27:03.605 [2024-10-15 14:01:17.371417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:27:03.605 [2024-10-15 14:01:17.371425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:27:03.605 [2024-10-15 14:01:17.371432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:27:03.605 [2024-10-15 14:01:17.371439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:27:03.605 [2024-10-15 14:01:17.371447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:27:03.605 [2024-10-15 14:01:17.371454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:27:03.605 [2024-10-15 14:01:17.371461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:27:03.605 [2024-10-15 14:01:17.371468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:27:03.605 [2024-10-15 14:01:17.371476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:27:03.605 [2024-10-15 14:01:17.371483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:27:03.605 [2024-10-15 14:01:17.371490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:27:03.605 [2024-10-15 14:01:17.371497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:27:03.605 [2024-10-15 14:01:17.371504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:27:03.605 [2024-10-15 14:01:17.371511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:27:03.605 [2024-10-15 14:01:17.371518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:27:03.605 [2024-10-15 14:01:17.371525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:27:03.605 [2024-10-15 14:01:17.371532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:27:03.605 [2024-10-15 14:01:17.371540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:27:03.605 [2024-10-15 14:01:17.371547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 
0 / 261120 wr_cnt: 0 state: free 00:27:03.605 [2024-10-15 14:01:17.371554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:27:03.605 [2024-10-15 14:01:17.371562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:27:03.605 [2024-10-15 14:01:17.371569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:27:03.605 [2024-10-15 14:01:17.371576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:27:03.605 [2024-10-15 14:01:17.371583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:27:03.605 [2024-10-15 14:01:17.371591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:27:03.605 [2024-10-15 14:01:17.371598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:27:03.605 [2024-10-15 14:01:17.371605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:27:03.605 [2024-10-15 14:01:17.371612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:27:03.605 [2024-10-15 14:01:17.371619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:27:03.605 [2024-10-15 14:01:17.371626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:27:03.605 [2024-10-15 14:01:17.371634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:27:03.605 [2024-10-15 14:01:17.371641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:27:03.605 [2024-10-15 14:01:17.371648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:27:03.605 [2024-10-15 14:01:17.371656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:27:03.605 [2024-10-15 14:01:17.371663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:27:03.605 [2024-10-15 14:01:17.371670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:27:03.605 [2024-10-15 14:01:17.371677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:27:03.605 [2024-10-15 14:01:17.371684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:27:03.605 [2024-10-15 14:01:17.371692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:27:03.605 [2024-10-15 14:01:17.371699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:27:03.605 [2024-10-15 14:01:17.371706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:27:03.605 [2024-10-15 14:01:17.371713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:27:03.605 [2024-10-15 14:01:17.371720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:27:03.605 [2024-10-15 14:01:17.371727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:27:03.605 [2024-10-15 14:01:17.371734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:27:03.605 [2024-10-15 14:01:17.371741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:27:03.605 [2024-10-15 14:01:17.371748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:27:03.605 [2024-10-15 14:01:17.371755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:27:03.605 [2024-10-15 14:01:17.371770] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:27:03.605 [2024-10-15 14:01:17.371781] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 9a01225a-bd24-4d03-b10f-ba453d044a70 00:27:03.605 [2024-10-15 14:01:17.371789] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:27:03.605 [2024-10-15 14:01:17.371796] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:27:03.605 [2024-10-15 14:01:17.371802] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:27:03.605 [2024-10-15 14:01:17.371809] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:27:03.605 [2024-10-15 14:01:17.371816] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:27:03.605 [2024-10-15 14:01:17.371825] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:27:03.605 [2024-10-15 14:01:17.371833] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:27:03.605 [2024-10-15 14:01:17.371838] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:27:03.605 [2024-10-15 14:01:17.371845] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:27:03.605 [2024-10-15 14:01:17.371851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:03.605 [2024-10-15 14:01:17.371858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:27:03.605 [2024-10-15 14:01:17.371866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.864 ms 00:27:03.605 [2024-10-15 14:01:17.371872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:03.605 [2024-10-15 14:01:17.384090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:03.605 [2024-10-15 14:01:17.384116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:27:03.605 [2024-10-15 14:01:17.384127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.204 ms 00:27:03.605 [2024-10-15 14:01:17.384140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:03.605 [2024-10-15 14:01:17.384488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:03.605 [2024-10-15 14:01:17.384501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:27:03.605 [2024-10-15 14:01:17.384510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.330 ms 00:27:03.605 [2024-10-15 14:01:17.384517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:03.864 [2024-10-15 14:01:17.416952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:03.864 [2024-10-15 14:01:17.416986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:03.864 [2024-10-15 14:01:17.417000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms 00:27:03.864 [2024-10-15 14:01:17.417008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:03.864 [2024-10-15 14:01:17.417066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:03.864 [2024-10-15 14:01:17.417074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:03.864 [2024-10-15 14:01:17.417081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:03.864 [2024-10-15 14:01:17.417088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:03.864 [2024-10-15 14:01:17.417135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:03.864 [2024-10-15 14:01:17.417144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:03.864 [2024-10-15 14:01:17.417152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:03.864 [2024-10-15 14:01:17.417162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:03.864 [2024-10-15 14:01:17.417176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:03.864 [2024-10-15 14:01:17.417183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:03.864 [2024-10-15 14:01:17.417190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:03.864 [2024-10-15 14:01:17.417197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:03.864 [2024-10-15 14:01:17.494064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:03.864 [2024-10-15 14:01:17.494108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:03.864 [2024-10-15 14:01:17.494120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:03.864 [2024-10-15 14:01:17.494132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:03.864 [2024-10-15 14:01:17.557431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:03.864 [2024-10-15 14:01:17.557476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:03.864 [2024-10-15 14:01:17.557488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:03.864 [2024-10-15 14:01:17.557496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:03.864 [2024-10-15 14:01:17.557566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:03.864 [2024-10-15 14:01:17.557576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:03.864 [2024-10-15 14:01:17.557584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:03.864 [2024-10-15 14:01:17.557591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:03.864 [2024-10-15 14:01:17.557626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:03.864 [2024-10-15 14:01:17.557634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:03.864 [2024-10-15 14:01:17.557642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:03.864 [2024-10-15 14:01:17.557650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:03.864 [2024-10-15 14:01:17.557717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:03.864 [2024-10-15 14:01:17.557726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 
00:27:03.864 [2024-10-15 14:01:17.557734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:03.864 [2024-10-15 14:01:17.557742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:03.864 [2024-10-15 14:01:17.557773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:03.864 [2024-10-15 14:01:17.557784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:27:03.864 [2024-10-15 14:01:17.557792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:03.864 [2024-10-15 14:01:17.557799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:03.864 [2024-10-15 14:01:17.557831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:03.864 [2024-10-15 14:01:17.557839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:03.864 [2024-10-15 14:01:17.557846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:03.864 [2024-10-15 14:01:17.557853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:03.864 [2024-10-15 14:01:17.557894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:03.864 [2024-10-15 14:01:17.557904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:03.864 [2024-10-15 14:01:17.557911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:03.864 [2024-10-15 14:01:17.557918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:03.864 [2024-10-15 14:01:17.558022] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 191.104 ms, result 0 00:27:05.762 00:27:05.762 00:27:05.762 14:01:19 ftl.ftl_restore_fast -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:27:05.763 [2024-10-15 14:01:19.135522] Starting SPDK v25.01-pre git sha1 5a8c76d99 / DPDK 24.03.0 initialization... 
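For scale, the --count=262144 passed to spdk_dd above is a logical-block count; assuming the usual 4 KiB FTL block size (the log does not print it), that is exactly the 1024/1024 [MB] total the earlier copy-progress records show. A quick sanity check in plain Python:

# Sanity-check the spdk_dd transfer size. The 4 KiB block size is an
# assumption (the standard FTL logical block), not stated in this log.
blocks = 262144               # from --count=262144 above
block_size = 4096             # bytes per FTL logical block (assumed)
print(blocks * block_size // (1024 * 1024))   # -> 1024 MiB
# At the reported average of ~45 MBps, 1024 MiB takes ~23 s, which matches
# the span from the startup-finished message (14:00:54.684) to the last
# progress record (14:01:17) in the earlier pass.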
00:27:05.763 [2024-10-15 14:01:19.135645] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79738 ] 00:27:05.763 [2024-10-15 14:01:19.287375] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:05.763 [2024-10-15 14:01:19.387473] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:06.022 [2024-10-15 14:01:19.643846] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:06.022 [2024-10-15 14:01:19.643906] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:06.022 [2024-10-15 14:01:19.796718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:06.022 [2024-10-15 14:01:19.796773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:27:06.022 [2024-10-15 14:01:19.796787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:27:06.022 [2024-10-15 14:01:19.796800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:06.022 [2024-10-15 14:01:19.796844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:06.022 [2024-10-15 14:01:19.796855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:06.022 [2024-10-15 14:01:19.796863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:27:06.022 [2024-10-15 14:01:19.796873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:06.022 [2024-10-15 14:01:19.796892] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:27:06.023 [2024-10-15 14:01:19.797538] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:27:06.023 [2024-10-15 14:01:19.797560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:06.023 [2024-10-15 14:01:19.797571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:06.023 [2024-10-15 14:01:19.797579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.673 ms 00:27:06.023 [2024-10-15 14:01:19.797586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:06.023 [2024-10-15 14:01:19.797852] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:27:06.023 [2024-10-15 14:01:19.797880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:06.023 [2024-10-15 14:01:19.797889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:27:06.023 [2024-10-15 14:01:19.797897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:27:06.023 [2024-10-15 14:01:19.797907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:06.023 [2024-10-15 14:01:19.797946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:06.023 [2024-10-15 14:01:19.797954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:27:06.023 [2024-10-15 14:01:19.797962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:27:06.023 [2024-10-15 14:01:19.797969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:06.023 [2024-10-15 14:01:19.798274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:27:06.023 [2024-10-15 14:01:19.798291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:06.023 [2024-10-15 14:01:19.798301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.273 ms 00:27:06.023 [2024-10-15 14:01:19.798309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:06.023 [2024-10-15 14:01:19.798369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:06.023 [2024-10-15 14:01:19.798378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:06.023 [2024-10-15 14:01:19.798385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:27:06.023 [2024-10-15 14:01:19.798392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:06.023 [2024-10-15 14:01:19.798412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:06.023 [2024-10-15 14:01:19.798421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:27:06.023 [2024-10-15 14:01:19.798429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:27:06.023 [2024-10-15 14:01:19.798438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:06.023 [2024-10-15 14:01:19.798454] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:27:06.023 [2024-10-15 14:01:19.801967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:06.023 [2024-10-15 14:01:19.801999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:06.023 [2024-10-15 14:01:19.802008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.516 ms 00:27:06.023 [2024-10-15 14:01:19.802016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:06.023 [2024-10-15 14:01:19.802045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:06.023 [2024-10-15 14:01:19.802053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:27:06.023 [2024-10-15 14:01:19.802061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:27:06.023 [2024-10-15 14:01:19.802068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:06.023 [2024-10-15 14:01:19.802106] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:27:06.023 [2024-10-15 14:01:19.802125] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:27:06.023 [2024-10-15 14:01:19.802161] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:27:06.023 [2024-10-15 14:01:19.802176] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:27:06.023 [2024-10-15 14:01:19.802286] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:27:06.023 [2024-10-15 14:01:19.802298] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:27:06.023 [2024-10-15 14:01:19.802307] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:27:06.023 [2024-10-15 14:01:19.802317] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:27:06.023 [2024-10-15 14:01:19.802326] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:27:06.023 [2024-10-15 14:01:19.802334] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:27:06.023 [2024-10-15 14:01:19.802344] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:27:06.023 [2024-10-15 14:01:19.802351] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:27:06.023 [2024-10-15 14:01:19.802358] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:27:06.023 [2024-10-15 14:01:19.802366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:06.023 [2024-10-15 14:01:19.802373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:27:06.023 [2024-10-15 14:01:19.802380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.261 ms 00:27:06.023 [2024-10-15 14:01:19.802387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:06.023 [2024-10-15 14:01:19.802468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:06.023 [2024-10-15 14:01:19.802482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:27:06.023 [2024-10-15 14:01:19.802489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:27:06.023 [2024-10-15 14:01:19.802496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:06.023 [2024-10-15 14:01:19.802597] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:27:06.023 [2024-10-15 14:01:19.802613] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:27:06.023 [2024-10-15 14:01:19.802622] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:06.023 [2024-10-15 14:01:19.802629] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:06.023 [2024-10-15 14:01:19.802637] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:27:06.023 [2024-10-15 14:01:19.802644] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:27:06.023 [2024-10-15 14:01:19.802651] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:27:06.023 [2024-10-15 14:01:19.802658] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:27:06.023 [2024-10-15 14:01:19.802665] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:27:06.023 [2024-10-15 14:01:19.802672] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:06.023 [2024-10-15 14:01:19.802678] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:27:06.023 [2024-10-15 14:01:19.802684] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:27:06.023 [2024-10-15 14:01:19.802691] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:06.023 [2024-10-15 14:01:19.802699] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:27:06.023 [2024-10-15 14:01:19.802705] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:27:06.023 [2024-10-15 14:01:19.802711] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:06.023 [2024-10-15 14:01:19.802718] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:27:06.023 [2024-10-15 14:01:19.802730] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:27:06.023 [2024-10-15 14:01:19.802736] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:06.023 [2024-10-15 14:01:19.802743] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:27:06.023 [2024-10-15 14:01:19.802749] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:27:06.023 [2024-10-15 14:01:19.802755] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:06.023 [2024-10-15 14:01:19.802761] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:27:06.023 [2024-10-15 14:01:19.802767] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:27:06.023 [2024-10-15 14:01:19.802774] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:06.023 [2024-10-15 14:01:19.802780] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:27:06.023 [2024-10-15 14:01:19.802786] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:27:06.023 [2024-10-15 14:01:19.802793] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:06.023 [2024-10-15 14:01:19.802800] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:27:06.023 [2024-10-15 14:01:19.802807] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:27:06.023 [2024-10-15 14:01:19.802813] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:06.023 [2024-10-15 14:01:19.802819] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:27:06.023 [2024-10-15 14:01:19.802825] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:27:06.023 [2024-10-15 14:01:19.802831] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:06.023 [2024-10-15 14:01:19.802838] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:27:06.023 [2024-10-15 14:01:19.802845] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:27:06.023 [2024-10-15 14:01:19.802851] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:06.023 [2024-10-15 14:01:19.802857] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:27:06.023 [2024-10-15 14:01:19.802864] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:27:06.023 [2024-10-15 14:01:19.802870] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:06.023 [2024-10-15 14:01:19.802877] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:27:06.023 [2024-10-15 14:01:19.802883] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:27:06.023 [2024-10-15 14:01:19.802889] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:06.023 [2024-10-15 14:01:19.802895] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:27:06.023 [2024-10-15 14:01:19.802903] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:27:06.023 [2024-10-15 14:01:19.802910] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:06.023 [2024-10-15 14:01:19.802917] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:06.023 [2024-10-15 14:01:19.802925] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:27:06.023 [2024-10-15 14:01:19.802931] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:27:06.024 [2024-10-15 14:01:19.802938] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:27:06.024 
[2024-10-15 14:01:19.802945] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:27:06.024 [2024-10-15 14:01:19.802951] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:27:06.024 [2024-10-15 14:01:19.802958] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:27:06.024 [2024-10-15 14:01:19.802966] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:27:06.024 [2024-10-15 14:01:19.802977] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:06.024 [2024-10-15 14:01:19.802985] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:27:06.024 [2024-10-15 14:01:19.802992] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:27:06.024 [2024-10-15 14:01:19.802999] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:27:06.024 [2024-10-15 14:01:19.803005] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:27:06.024 [2024-10-15 14:01:19.803012] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:27:06.024 [2024-10-15 14:01:19.803019] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:27:06.024 [2024-10-15 14:01:19.803026] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:27:06.024 [2024-10-15 14:01:19.803033] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:27:06.024 [2024-10-15 14:01:19.803039] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:27:06.024 [2024-10-15 14:01:19.803046] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:27:06.024 [2024-10-15 14:01:19.803053] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:27:06.024 [2024-10-15 14:01:19.803060] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:27:06.024 [2024-10-15 14:01:19.803067] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:27:06.024 [2024-10-15 14:01:19.803074] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:27:06.024 [2024-10-15 14:01:19.803081] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:27:06.024 [2024-10-15 14:01:19.803088] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:06.024 [2024-10-15 14:01:19.803096] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:27:06.024 [2024-10-15 14:01:19.803104] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:27:06.024 [2024-10-15 14:01:19.803111] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:27:06.024 [2024-10-15 14:01:19.803118] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:27:06.024 [2024-10-15 14:01:19.803125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:06.024 [2024-10-15 14:01:19.803132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:27:06.024 [2024-10-15 14:01:19.803140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.596 ms 00:27:06.024 [2024-10-15 14:01:19.803147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:06.283 [2024-10-15 14:01:19.826338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:06.283 [2024-10-15 14:01:19.826374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:06.283 [2024-10-15 14:01:19.826385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.153 ms 00:27:06.283 [2024-10-15 14:01:19.826392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:06.283 [2024-10-15 14:01:19.826470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:06.283 [2024-10-15 14:01:19.826478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:27:06.283 [2024-10-15 14:01:19.826486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:27:06.283 [2024-10-15 14:01:19.826497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:06.283 [2024-10-15 14:01:19.867252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:06.283 [2024-10-15 14:01:19.867298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:06.283 [2024-10-15 14:01:19.867311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.705 ms 00:27:06.283 [2024-10-15 14:01:19.867319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:06.283 [2024-10-15 14:01:19.867367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:06.283 [2024-10-15 14:01:19.867380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:06.283 [2024-10-15 14:01:19.867405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:27:06.283 [2024-10-15 14:01:19.867413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:06.283 [2024-10-15 14:01:19.867511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:06.283 [2024-10-15 14:01:19.867522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:06.283 [2024-10-15 14:01:19.867530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:27:06.283 [2024-10-15 14:01:19.867537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:06.283 [2024-10-15 14:01:19.867653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:06.283 [2024-10-15 14:01:19.867662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:06.283 [2024-10-15 14:01:19.867672] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms 00:27:06.283 [2024-10-15 14:01:19.867679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:06.283 [2024-10-15 14:01:19.880590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:06.283 [2024-10-15 14:01:19.880626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:06.283 [2024-10-15 14:01:19.880636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.895 ms 00:27:06.283 [2024-10-15 14:01:19.880643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:06.283 [2024-10-15 14:01:19.880759] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:27:06.283 [2024-10-15 14:01:19.880772] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:27:06.283 [2024-10-15 14:01:19.880782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:06.283 [2024-10-15 14:01:19.880790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:27:06.283 [2024-10-15 14:01:19.880800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:27:06.283 [2024-10-15 14:01:19.880807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:06.283 [2024-10-15 14:01:19.893035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:06.283 [2024-10-15 14:01:19.893067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:27:06.283 [2024-10-15 14:01:19.893077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.213 ms 00:27:06.283 [2024-10-15 14:01:19.893085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:06.283 [2024-10-15 14:01:19.893194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:06.283 [2024-10-15 14:01:19.893203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:27:06.283 [2024-10-15 14:01:19.893212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:27:06.283 [2024-10-15 14:01:19.893227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:06.283 [2024-10-15 14:01:19.893291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:06.283 [2024-10-15 14:01:19.893301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:27:06.283 [2024-10-15 14:01:19.893310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.001 ms 00:27:06.283 [2024-10-15 14:01:19.893317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:06.283 [2024-10-15 14:01:19.893888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:06.283 [2024-10-15 14:01:19.893912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:27:06.283 [2024-10-15 14:01:19.893921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.528 ms 00:27:06.283 [2024-10-15 14:01:19.893928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:06.283 [2024-10-15 14:01:19.893944] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:27:06.283 [2024-10-15 14:01:19.893955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:06.283 [2024-10-15 14:01:19.893962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Restore P2L checkpoints 00:27:06.283 [2024-10-15 14:01:19.893970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:27:06.283 [2024-10-15 14:01:19.893977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:06.283 [2024-10-15 14:01:19.904835] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:27:06.283 [2024-10-15 14:01:19.904968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:06.283 [2024-10-15 14:01:19.904978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:27:06.283 [2024-10-15 14:01:19.904987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.974 ms 00:27:06.283 [2024-10-15 14:01:19.904995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:06.283 [2024-10-15 14:01:19.907035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:06.283 [2024-10-15 14:01:19.907059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:27:06.283 [2024-10-15 14:01:19.907070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.020 ms 00:27:06.283 [2024-10-15 14:01:19.907077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:06.283 [2024-10-15 14:01:19.907158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:06.283 [2024-10-15 14:01:19.907168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:27:06.283 [2024-10-15 14:01:19.907176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:27:06.283 [2024-10-15 14:01:19.907183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:06.283 [2024-10-15 14:01:19.907204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:06.283 [2024-10-15 14:01:19.907212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:27:06.283 [2024-10-15 14:01:19.907234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:27:06.283 [2024-10-15 14:01:19.907242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:06.283 [2024-10-15 14:01:19.907268] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:27:06.283 [2024-10-15 14:01:19.907278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:06.284 [2024-10-15 14:01:19.907285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:27:06.284 [2024-10-15 14:01:19.907292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:27:06.284 [2024-10-15 14:01:19.907299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:06.284 [2024-10-15 14:01:19.930699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:06.284 [2024-10-15 14:01:19.930740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:27:06.284 [2024-10-15 14:01:19.930752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.380 ms 00:27:06.284 [2024-10-15 14:01:19.930760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:06.284 [2024-10-15 14:01:19.930828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:06.284 [2024-10-15 14:01:19.930837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:27:06.284 [2024-10-15 14:01:19.930845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.036 ms 00:27:06.284 [2024-10-15 14:01:19.930853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:06.284 [2024-10-15 14:01:19.932154] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 135.032 ms, result 0 00:27:07.658 [2024-10-15T14:01:22.381Z] Copying: 48/1024 [MB] (48 MBps) [2024-10-15T14:01:23.328Z] Copying: 94/1024 [MB] (45 MBps) [2024-10-15T14:01:24.261Z] Copying: 141/1024 [MB] (46 MBps) [2024-10-15T14:01:25.196Z] Copying: 190/1024 [MB] (49 MBps) [2024-10-15T14:01:26.130Z] Copying: 236/1024 [MB] (45 MBps) [2024-10-15T14:01:27.504Z] Copying: 285/1024 [MB] (49 MBps) [2024-10-15T14:01:28.438Z] Copying: 333/1024 [MB] (48 MBps) [2024-10-15T14:01:29.375Z] Copying: 380/1024 [MB] (46 MBps) [2024-10-15T14:01:30.312Z] Copying: 425/1024 [MB] (45 MBps) [2024-10-15T14:01:31.284Z] Copying: 468/1024 [MB] (43 MBps) [2024-10-15T14:01:32.223Z] Copying: 512/1024 [MB] (43 MBps) [2024-10-15T14:01:33.157Z] Copying: 542/1024 [MB] (30 MBps) [2024-10-15T14:01:34.531Z] Copying: 595/1024 [MB] (52 MBps) [2024-10-15T14:01:35.463Z] Copying: 643/1024 [MB] (47 MBps) [2024-10-15T14:01:36.396Z] Copying: 692/1024 [MB] (49 MBps) [2024-10-15T14:01:37.329Z] Copying: 743/1024 [MB] (50 MBps) [2024-10-15T14:01:38.262Z] Copying: 797/1024 [MB] (53 MBps) [2024-10-15T14:01:39.195Z] Copying: 848/1024 [MB] (51 MBps) [2024-10-15T14:01:40.206Z] Copying: 895/1024 [MB] (47 MBps) [2024-10-15T14:01:41.146Z] Copying: 939/1024 [MB] (43 MBps) [2024-10-15T14:01:42.524Z] Copying: 971/1024 [MB] (32 MBps) [2024-10-15T14:01:42.782Z] Copying: 998/1024 [MB] (26 MBps) [2024-10-15T14:01:43.042Z] Copying: 1024/1024 [MB] (average 45 MBps) [2024-10-15 14:01:42.828203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:29.254 [2024-10-15 14:01:42.828369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:27:29.254 [2024-10-15 14:01:42.828406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:27:29.254 [2024-10-15 14:01:42.828430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:29.254 [2024-10-15 14:01:42.828491] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:29.254 [2024-10-15 14:01:42.840147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:29.254 [2024-10-15 14:01:42.840184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:27:29.254 [2024-10-15 14:01:42.840195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.618 ms 00:27:29.254 [2024-10-15 14:01:42.840203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:29.254 [2024-10-15 14:01:42.840442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:29.254 [2024-10-15 14:01:42.840453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:29.254 [2024-10-15 14:01:42.840462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.207 ms 00:27:29.254 [2024-10-15 14:01:42.840469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:29.254 [2024-10-15 14:01:42.840495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:29.254 [2024-10-15 14:01:42.840506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:27:29.254 [2024-10-15 14:01:42.840515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration:
0.004 ms 00:27:29.254 [2024-10-15 14:01:42.840521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:29.254 [2024-10-15 14:01:42.840567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:29.254 [2024-10-15 14:01:42.840576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:27:29.254 [2024-10-15 14:01:42.840585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:27:29.254 [2024-10-15 14:01:42.840592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:29.254 [2024-10-15 14:01:42.840605] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:27:29.254 [2024-10-15 14:01:42.840616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:27:29.254 [2024-10-15 14:01:42.840625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:27:29.254 [2024-10-15 14:01:42.840632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:27:29.254 [2024-10-15 14:01:42.840639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:29.254 [2024-10-15 14:01:42.840647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:29.254 [2024-10-15 14:01:42.840655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:29.254 [2024-10-15 14:01:42.840662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:29.254 [2024-10-15 14:01:42.840669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:29.254 [2024-10-15 14:01:42.840676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:29.254 [2024-10-15 14:01:42.840683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:29.255 [2024-10-15 14:01:42.840690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:29.255 [2024-10-15 14:01:42.840697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:29.255 [2024-10-15 14:01:42.840704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:29.255 [2024-10-15 14:01:42.840711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:29.255 [2024-10-15 14:01:42.840718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:29.255 [2024-10-15 14:01:42.840725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:29.255 [2024-10-15 14:01:42.840732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:29.255 [2024-10-15 14:01:42.840740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:29.255 [2024-10-15 14:01:42.840747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:27:29.255 [2024-10-15 14:01:42.840754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:27:29.255 
[2024-10-15 14:01:42.840763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:27:29.255 [2024-10-15 14:01:42.840771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:27:29.255 [2024-10-15 14:01:42.840778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:27:29.255 [2024-10-15 14:01:42.840785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:27:29.255 [2024-10-15 14:01:42.840792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:27:29.255 [2024-10-15 14:01:42.840799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:27:29.255 [2024-10-15 14:01:42.840806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:27:29.255 [2024-10-15 14:01:42.840813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:27:29.255 [2024-10-15 14:01:42.840820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:27:29.255 [2024-10-15 14:01:42.840827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:27:29.255 [2024-10-15 14:01:42.840834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:27:29.255 [2024-10-15 14:01:42.840842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:27:29.255 [2024-10-15 14:01:42.840849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:27:29.255 [2024-10-15 14:01:42.840856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:27:29.255 [2024-10-15 14:01:42.840863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:27:29.255 [2024-10-15 14:01:42.840870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:27:29.255 [2024-10-15 14:01:42.840877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:27:29.255 [2024-10-15 14:01:42.840884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:27:29.255 [2024-10-15 14:01:42.840891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:27:29.255 [2024-10-15 14:01:42.840898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:27:29.255 [2024-10-15 14:01:42.840905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:27:29.255 [2024-10-15 14:01:42.840912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:27:29.255 [2024-10-15 14:01:42.840920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:27:29.255 [2024-10-15 14:01:42.840927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:27:29.255 [2024-10-15 14:01:42.840934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 
state: free 00:27:29.255 [2024-10-15 14:01:42.840941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:27:29.255 [2024-10-15 14:01:42.840948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:27:29.255 [2024-10-15 14:01:42.840962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:27:29.255 [2024-10-15 14:01:42.840969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:27:29.255 [2024-10-15 14:01:42.840977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:27:29.255 [2024-10-15 14:01:42.840984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:27:29.255 [2024-10-15 14:01:42.840991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:27:29.255 [2024-10-15 14:01:42.841000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:27:29.255 [2024-10-15 14:01:42.841008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:27:29.255 [2024-10-15 14:01:42.841015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:27:29.255 [2024-10-15 14:01:42.841022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:27:29.255 [2024-10-15 14:01:42.841029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:27:29.255 [2024-10-15 14:01:42.841036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:27:29.255 [2024-10-15 14:01:42.841043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:27:29.255 [2024-10-15 14:01:42.841051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:27:29.255 [2024-10-15 14:01:42.841058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:27:29.255 [2024-10-15 14:01:42.841065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:27:29.255 [2024-10-15 14:01:42.841072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:27:29.255 [2024-10-15 14:01:42.841080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:27:29.255 [2024-10-15 14:01:42.841087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:27:29.255 [2024-10-15 14:01:42.841094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:27:29.255 [2024-10-15 14:01:42.841101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:27:29.255 [2024-10-15 14:01:42.841108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:27:29.255 [2024-10-15 14:01:42.841116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:27:29.255 [2024-10-15 14:01:42.841123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 
0 / 261120 wr_cnt: 0 state: free 00:27:29.255 [2024-10-15 14:01:42.841129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:27:29.255 [2024-10-15 14:01:42.841136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:27:29.255 [2024-10-15 14:01:42.841144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:27:29.255 [2024-10-15 14:01:42.841151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:27:29.255 [2024-10-15 14:01:42.841158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:27:29.255 [2024-10-15 14:01:42.841165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:27:29.255 [2024-10-15 14:01:42.841172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:27:29.255 [2024-10-15 14:01:42.841179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:27:29.255 [2024-10-15 14:01:42.841188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:27:29.255 [2024-10-15 14:01:42.841195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:27:29.255 [2024-10-15 14:01:42.841202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:27:29.255 [2024-10-15 14:01:42.841210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:27:29.255 [2024-10-15 14:01:42.841228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:27:29.255 [2024-10-15 14:01:42.841236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:27:29.255 [2024-10-15 14:01:42.841243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:27:29.255 [2024-10-15 14:01:42.841251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:27:29.255 [2024-10-15 14:01:42.841258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:27:29.255 [2024-10-15 14:01:42.841266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:27:29.255 [2024-10-15 14:01:42.841273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:27:29.255 [2024-10-15 14:01:42.841281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:27:29.255 [2024-10-15 14:01:42.841288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:27:29.255 [2024-10-15 14:01:42.841296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:27:29.255 [2024-10-15 14:01:42.841314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:27:29.255 [2024-10-15 14:01:42.841322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:27:29.255 [2024-10-15 14:01:42.841329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:27:29.255 [2024-10-15 14:01:42.841337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:27:29.255 [2024-10-15 14:01:42.841344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:27:29.255 [2024-10-15 14:01:42.841351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:27:29.255 [2024-10-15 14:01:42.841359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:27:29.255 [2024-10-15 14:01:42.841366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:27:29.255 [2024-10-15 14:01:42.841382] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:27:29.256 [2024-10-15 14:01:42.841389] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 9a01225a-bd24-4d03-b10f-ba453d044a70 00:27:29.256 [2024-10-15 14:01:42.841399] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:27:29.256 [2024-10-15 14:01:42.841406] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:27:29.256 [2024-10-15 14:01:42.841413] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:27:29.256 [2024-10-15 14:01:42.841421] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:27:29.256 [2024-10-15 14:01:42.841427] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:27:29.256 [2024-10-15 14:01:42.841435] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:27:29.256 [2024-10-15 14:01:42.841442] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:27:29.256 [2024-10-15 14:01:42.841448] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:27:29.256 [2024-10-15 14:01:42.841454] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:27:29.256 [2024-10-15 14:01:42.841461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:29.256 [2024-10-15 14:01:42.841468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:27:29.256 [2024-10-15 14:01:42.841475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.857 ms 00:27:29.256 [2024-10-15 14:01:42.841482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:29.256 [2024-10-15 14:01:42.853929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:29.256 [2024-10-15 14:01:42.853965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:27:29.256 [2024-10-15 14:01:42.853975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.432 ms 00:27:29.256 [2024-10-15 14:01:42.853983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:29.256 [2024-10-15 14:01:42.854341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:29.256 [2024-10-15 14:01:42.854351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:27:29.256 [2024-10-15 14:01:42.854360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.339 ms 00:27:29.256 [2024-10-15 14:01:42.854370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:29.256 [2024-10-15 14:01:42.887126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:29.256 [2024-10-15 14:01:42.887162] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:29.256 [2024-10-15 14:01:42.887171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:29.256 [2024-10-15 14:01:42.887179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:29.256 [2024-10-15 14:01:42.887245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:29.256 [2024-10-15 14:01:42.887254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:29.256 [2024-10-15 14:01:42.887262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:29.256 [2024-10-15 14:01:42.887271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:29.256 [2024-10-15 14:01:42.887330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:29.256 [2024-10-15 14:01:42.887340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:29.256 [2024-10-15 14:01:42.887348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:29.256 [2024-10-15 14:01:42.887355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:29.256 [2024-10-15 14:01:42.887369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:29.256 [2024-10-15 14:01:42.887377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:29.256 [2024-10-15 14:01:42.887384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:29.256 [2024-10-15 14:01:42.887391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:29.256 [2024-10-15 14:01:42.964806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:29.256 [2024-10-15 14:01:42.964849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:29.256 [2024-10-15 14:01:42.964859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:29.256 [2024-10-15 14:01:42.964867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:29.256 [2024-10-15 14:01:43.028245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:29.256 [2024-10-15 14:01:43.028286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:29.256 [2024-10-15 14:01:43.028297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:29.256 [2024-10-15 14:01:43.028304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:29.256 [2024-10-15 14:01:43.028368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:29.256 [2024-10-15 14:01:43.028377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:29.256 [2024-10-15 14:01:43.028385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:29.256 [2024-10-15 14:01:43.028392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:29.256 [2024-10-15 14:01:43.028427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:29.256 [2024-10-15 14:01:43.028435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:29.256 [2024-10-15 14:01:43.028443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:29.256 [2024-10-15 14:01:43.028450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:29.256 [2024-10-15 14:01:43.028520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Rollback 00:27:29.256 [2024-10-15 14:01:43.028537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:29.256 [2024-10-15 14:01:43.028545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:29.256 [2024-10-15 14:01:43.028551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:29.256 [2024-10-15 14:01:43.028579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:29.256 [2024-10-15 14:01:43.028587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:27:29.256 [2024-10-15 14:01:43.028595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:29.256 [2024-10-15 14:01:43.028601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:29.256 [2024-10-15 14:01:43.028633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:29.256 [2024-10-15 14:01:43.028644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:29.256 [2024-10-15 14:01:43.028651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:29.256 [2024-10-15 14:01:43.028658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:29.256 [2024-10-15 14:01:43.028693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:29.256 [2024-10-15 14:01:43.028702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:29.256 [2024-10-15 14:01:43.028709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:29.256 [2024-10-15 14:01:43.028716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:29.256 [2024-10-15 14:01:43.028818] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 200.627 ms, result 0 00:27:30.199 00:27:30.199 00:27:30.199 14:01:43 ftl.ftl_restore_fast -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:27:31.582 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:27:31.582 14:01:45 ftl.ftl_restore_fast -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:27:31.841 [2024-10-15 14:01:45.433528] Starting SPDK v25.01-pre git sha1 5a8c76d99 / DPDK 24.03.0 initialization... 
00:27:31.841 [2024-10-15 14:01:45.434341] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80016 ] 00:27:31.841 [2024-10-15 14:01:45.593534] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:32.127 [2024-10-15 14:01:45.701339] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:32.386 [2024-10-15 14:01:45.957068] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:32.386 [2024-10-15 14:01:45.957126] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:32.386 [2024-10-15 14:01:46.114790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.386 [2024-10-15 14:01:46.114833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:27:32.386 [2024-10-15 14:01:46.114846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:27:32.386 [2024-10-15 14:01:46.114858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.386 [2024-10-15 14:01:46.114905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.386 [2024-10-15 14:01:46.114915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:32.386 [2024-10-15 14:01:46.114923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:27:32.386 [2024-10-15 14:01:46.114933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.386 [2024-10-15 14:01:46.114952] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:27:32.386 [2024-10-15 14:01:46.115672] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:27:32.386 [2024-10-15 14:01:46.115690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.386 [2024-10-15 14:01:46.115701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:32.386 [2024-10-15 14:01:46.115710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.743 ms 00:27:32.386 [2024-10-15 14:01:46.115718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.386 [2024-10-15 14:01:46.115974] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:27:32.386 [2024-10-15 14:01:46.115996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.386 [2024-10-15 14:01:46.116004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:27:32.386 [2024-10-15 14:01:46.116013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:27:32.386 [2024-10-15 14:01:46.116022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.386 [2024-10-15 14:01:46.116060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.386 [2024-10-15 14:01:46.116069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:27:32.386 [2024-10-15 14:01:46.116077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:27:32.386 [2024-10-15 14:01:46.116084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.386 [2024-10-15 14:01:46.116379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:27:32.386 [2024-10-15 14:01:46.116391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:32.386 [2024-10-15 14:01:46.116402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.234 ms 00:27:32.386 [2024-10-15 14:01:46.116409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.386 [2024-10-15 14:01:46.116471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.386 [2024-10-15 14:01:46.116479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:32.386 [2024-10-15 14:01:46.116486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:27:32.386 [2024-10-15 14:01:46.116493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.386 [2024-10-15 14:01:46.116513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.386 [2024-10-15 14:01:46.116521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:27:32.386 [2024-10-15 14:01:46.116529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:27:32.386 [2024-10-15 14:01:46.116535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.386 [2024-10-15 14:01:46.116554] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:27:32.386 [2024-10-15 14:01:46.120177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.386 [2024-10-15 14:01:46.120203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:32.386 [2024-10-15 14:01:46.120215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.626 ms 00:27:32.386 [2024-10-15 14:01:46.120233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.387 [2024-10-15 14:01:46.120264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.387 [2024-10-15 14:01:46.120273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:27:32.387 [2024-10-15 14:01:46.120280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:27:32.387 [2024-10-15 14:01:46.120287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.387 [2024-10-15 14:01:46.120326] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:27:32.387 [2024-10-15 14:01:46.120347] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:27:32.387 [2024-10-15 14:01:46.120381] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:27:32.387 [2024-10-15 14:01:46.120398] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:27:32.387 [2024-10-15 14:01:46.120499] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:27:32.387 [2024-10-15 14:01:46.120509] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:27:32.387 [2024-10-15 14:01:46.120519] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:27:32.387 [2024-10-15 14:01:46.120529] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:27:32.387 [2024-10-15 14:01:46.120538] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:27:32.387 [2024-10-15 14:01:46.120545] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:27:32.387 [2024-10-15 14:01:46.120552] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:27:32.387 [2024-10-15 14:01:46.120561] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:27:32.387 [2024-10-15 14:01:46.120568] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:27:32.387 [2024-10-15 14:01:46.120575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.387 [2024-10-15 14:01:46.120583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:27:32.387 [2024-10-15 14:01:46.120590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.251 ms 00:27:32.387 [2024-10-15 14:01:46.120597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.387 [2024-10-15 14:01:46.120678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.387 [2024-10-15 14:01:46.120686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:27:32.387 [2024-10-15 14:01:46.120693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:27:32.387 [2024-10-15 14:01:46.120700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.387 [2024-10-15 14:01:46.120813] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:27:32.387 [2024-10-15 14:01:46.120825] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:27:32.387 [2024-10-15 14:01:46.120833] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:32.387 [2024-10-15 14:01:46.120841] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:32.387 [2024-10-15 14:01:46.120849] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:27:32.387 [2024-10-15 14:01:46.120856] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:27:32.387 [2024-10-15 14:01:46.120862] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:27:32.387 [2024-10-15 14:01:46.120870] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:27:32.387 [2024-10-15 14:01:46.120877] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:27:32.387 [2024-10-15 14:01:46.120883] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:32.387 [2024-10-15 14:01:46.120890] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:27:32.387 [2024-10-15 14:01:46.120897] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:27:32.387 [2024-10-15 14:01:46.120903] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:32.387 [2024-10-15 14:01:46.120909] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:27:32.387 [2024-10-15 14:01:46.120916] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:27:32.387 [2024-10-15 14:01:46.120922] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:32.387 [2024-10-15 14:01:46.120929] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:27:32.387 [2024-10-15 14:01:46.120940] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:27:32.387 [2024-10-15 14:01:46.120946] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:32.387 [2024-10-15 14:01:46.120952] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:27:32.387 [2024-10-15 14:01:46.120959] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:27:32.387 [2024-10-15 14:01:46.120965] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:32.387 [2024-10-15 14:01:46.120971] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:27:32.387 [2024-10-15 14:01:46.120978] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:27:32.387 [2024-10-15 14:01:46.120984] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:32.387 [2024-10-15 14:01:46.120990] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:27:32.387 [2024-10-15 14:01:46.120997] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:27:32.387 [2024-10-15 14:01:46.121003] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:32.387 [2024-10-15 14:01:46.121009] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:27:32.387 [2024-10-15 14:01:46.121015] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:27:32.387 [2024-10-15 14:01:46.121021] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:32.387 [2024-10-15 14:01:46.121027] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:27:32.387 [2024-10-15 14:01:46.121034] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:27:32.387 [2024-10-15 14:01:46.121040] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:32.387 [2024-10-15 14:01:46.121046] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:27:32.387 [2024-10-15 14:01:46.121052] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:27:32.387 [2024-10-15 14:01:46.121058] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:32.387 [2024-10-15 14:01:46.121065] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:27:32.387 [2024-10-15 14:01:46.121071] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:27:32.387 [2024-10-15 14:01:46.121077] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:32.387 [2024-10-15 14:01:46.121083] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:27:32.387 [2024-10-15 14:01:46.121089] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:27:32.387 [2024-10-15 14:01:46.121096] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:32.387 [2024-10-15 14:01:46.121103] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:27:32.387 [2024-10-15 14:01:46.121110] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:27:32.387 [2024-10-15 14:01:46.121117] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:32.387 [2024-10-15 14:01:46.121124] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:32.387 [2024-10-15 14:01:46.121131] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:27:32.387 [2024-10-15 14:01:46.121138] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:27:32.387 [2024-10-15 14:01:46.121144] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:27:32.387 
[2024-10-15 14:01:46.121151] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:27:32.387 [2024-10-15 14:01:46.121157] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:27:32.387 [2024-10-15 14:01:46.121163] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:27:32.387 [2024-10-15 14:01:46.121171] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:27:32.387 [2024-10-15 14:01:46.121180] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:32.387 [2024-10-15 14:01:46.121190] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:27:32.387 [2024-10-15 14:01:46.121197] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:27:32.387 [2024-10-15 14:01:46.121203] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:27:32.387 [2024-10-15 14:01:46.121210] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:27:32.387 [2024-10-15 14:01:46.121217] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:27:32.387 [2024-10-15 14:01:46.121238] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:27:32.387 [2024-10-15 14:01:46.121245] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:27:32.387 [2024-10-15 14:01:46.121252] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:27:32.387 [2024-10-15 14:01:46.121259] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:27:32.387 [2024-10-15 14:01:46.121267] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:27:32.387 [2024-10-15 14:01:46.121273] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:27:32.387 [2024-10-15 14:01:46.121280] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:27:32.387 [2024-10-15 14:01:46.121287] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:27:32.387 [2024-10-15 14:01:46.121294] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:27:32.387 [2024-10-15 14:01:46.121301] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:27:32.387 [2024-10-15 14:01:46.121309] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:32.387 [2024-10-15 14:01:46.121318] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:27:32.387 [2024-10-15 14:01:46.121325] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:27:32.387 [2024-10-15 14:01:46.121333] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:27:32.387 [2024-10-15 14:01:46.121340] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:27:32.387 [2024-10-15 14:01:46.121349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.387 [2024-10-15 14:01:46.121356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:27:32.387 [2024-10-15 14:01:46.121364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.606 ms 00:27:32.388 [2024-10-15 14:01:46.121371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.388 [2024-10-15 14:01:46.144686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.388 [2024-10-15 14:01:46.144714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:32.388 [2024-10-15 14:01:46.144724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.275 ms 00:27:32.388 [2024-10-15 14:01:46.144731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.388 [2024-10-15 14:01:46.144809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.388 [2024-10-15 14:01:46.144817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:27:32.388 [2024-10-15 14:01:46.144825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:27:32.388 [2024-10-15 14:01:46.144832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.646 [2024-10-15 14:01:46.189952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.646 [2024-10-15 14:01:46.189987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:32.646 [2024-10-15 14:01:46.189999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.069 ms 00:27:32.646 [2024-10-15 14:01:46.190007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.646 [2024-10-15 14:01:46.190044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.646 [2024-10-15 14:01:46.190054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:32.646 [2024-10-15 14:01:46.190065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:27:32.646 [2024-10-15 14:01:46.190072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.646 [2024-10-15 14:01:46.190164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.646 [2024-10-15 14:01:46.190175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:32.646 [2024-10-15 14:01:46.190184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:27:32.646 [2024-10-15 14:01:46.190191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.646 [2024-10-15 14:01:46.190322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.646 [2024-10-15 14:01:46.190332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:32.646 [2024-10-15 14:01:46.190340] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.118 ms 00:27:32.646 [2024-10-15 14:01:46.190349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.646 [2024-10-15 14:01:46.203607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.646 [2024-10-15 14:01:46.203637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:32.646 [2024-10-15 14:01:46.203649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.240 ms 00:27:32.646 [2024-10-15 14:01:46.203657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.646 [2024-10-15 14:01:46.203778] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:27:32.646 [2024-10-15 14:01:46.203791] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:27:32.646 [2024-10-15 14:01:46.203800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.646 [2024-10-15 14:01:46.203809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:27:32.646 [2024-10-15 14:01:46.203817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:27:32.646 [2024-10-15 14:01:46.203826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.646 [2024-10-15 14:01:46.216090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.646 [2024-10-15 14:01:46.216116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:27:32.646 [2024-10-15 14:01:46.216127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.249 ms 00:27:32.646 [2024-10-15 14:01:46.216134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.646 [2024-10-15 14:01:46.216253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.646 [2024-10-15 14:01:46.216262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:27:32.646 [2024-10-15 14:01:46.216270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.099 ms 00:27:32.646 [2024-10-15 14:01:46.216278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.646 [2024-10-15 14:01:46.216341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.646 [2024-10-15 14:01:46.216353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:27:32.646 [2024-10-15 14:01:46.216361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.001 ms 00:27:32.646 [2024-10-15 14:01:46.216369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.646 [2024-10-15 14:01:46.216923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.646 [2024-10-15 14:01:46.216940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:27:32.646 [2024-10-15 14:01:46.216948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.515 ms 00:27:32.646 [2024-10-15 14:01:46.216956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.646 [2024-10-15 14:01:46.216971] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:27:32.646 [2024-10-15 14:01:46.216980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.646 [2024-10-15 14:01:46.216990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Restore P2L checkpoints 00:27:32.646 [2024-10-15 14:01:46.216997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:27:32.646 [2024-10-15 14:01:46.217004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.646 [2024-10-15 14:01:46.228906] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:27:32.646 [2024-10-15 14:01:46.229040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.646 [2024-10-15 14:01:46.229050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:27:32.646 [2024-10-15 14:01:46.229059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.019 ms 00:27:32.646 [2024-10-15 14:01:46.229066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.646 [2024-10-15 14:01:46.231176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.646 [2024-10-15 14:01:46.231199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:27:32.646 [2024-10-15 14:01:46.231211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.090 ms 00:27:32.646 [2024-10-15 14:01:46.231229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.647 [2024-10-15 14:01:46.231325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.647 [2024-10-15 14:01:46.231337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:27:32.647 [2024-10-15 14:01:46.231347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:27:32.647 [2024-10-15 14:01:46.231355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.647 [2024-10-15 14:01:46.231378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.647 [2024-10-15 14:01:46.231387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:27:32.647 [2024-10-15 14:01:46.231396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:27:32.647 [2024-10-15 14:01:46.231408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.647 [2024-10-15 14:01:46.231435] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:27:32.647 [2024-10-15 14:01:46.231444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.647 [2024-10-15 14:01:46.231453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:27:32.647 [2024-10-15 14:01:46.231462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:27:32.647 [2024-10-15 14:01:46.231470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.647 [2024-10-15 14:01:46.257591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.647 [2024-10-15 14:01:46.257629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:27:32.647 [2024-10-15 14:01:46.257646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.100 ms 00:27:32.647 [2024-10-15 14:01:46.257654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.647 [2024-10-15 14:01:46.257732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.647 [2024-10-15 14:01:46.257747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:27:32.647 [2024-10-15 14:01:46.257759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.037 ms 00:27:32.647 [2024-10-15 14:01:46.257766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.647 [2024-10-15 14:01:46.259001] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 143.799 ms, result 0 00:27:33.586  [2024-10-15T14:02:20.862Z] Copying: 1024/1024 [MB] (average 29 MBps)[2024-10-15 14:02:20.853275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:07.074 [2024-10-15 14:02:20.853319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:28:07.074 [2024-10-15 14:02:20.853338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:28:07.074 [2024-10-15 14:02:20.853345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.074 [2024-10-15 14:02:20.854356] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:28:07.074 [2024-10-15 14:02:20.858183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:07.074 [2024-10-15 14:02:20.858215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:28:07.074 [2024-10-15 14:02:20.858232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.809 ms 00:28:07.074 [2024-10-15 14:02:20.858239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.333 [2024-10-15 14:02:20.866460] mngt/ftl_mngt.c: 427:trace_step:
*NOTICE*: [FTL][ftl0] Action 00:28:07.333 [2024-10-15 14:02:20.866492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:28:07.333 [2024-10-15 14:02:20.866504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.417 ms 00:28:07.333 [2024-10-15 14:02:20.866511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.333 [2024-10-15 14:02:20.866533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:07.333 [2024-10-15 14:02:20.866540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:28:07.333 [2024-10-15 14:02:20.866547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:28:07.333 [2024-10-15 14:02:20.866553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.333 [2024-10-15 14:02:20.866590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:07.333 [2024-10-15 14:02:20.866598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:28:07.333 [2024-10-15 14:02:20.866604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:28:07.333 [2024-10-15 14:02:20.866610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.333 [2024-10-15 14:02:20.866622] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:28:07.333 [2024-10-15 14:02:20.866630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 129792 / 261120 wr_cnt: 1 state: open 00:28:07.333 [2024-10-15 14:02:20.866638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:28:07.333 [2024-10-15 14:02:20.866644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:28:07.333 [2024-10-15 14:02:20.866649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:07.333 [2024-10-15 14:02:20.866655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:07.333 [2024-10-15 14:02:20.866661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:07.333 [2024-10-15 14:02:20.866667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:07.333 [2024-10-15 14:02:20.866673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:07.333 [2024-10-15 14:02:20.866678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:07.333 [2024-10-15 14:02:20.866684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:07.333 [2024-10-15 14:02:20.866690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:07.333 [2024-10-15 14:02:20.866697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:07.333 [2024-10-15 14:02:20.866703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:07.333 [2024-10-15 14:02:20.866709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:07.333 [2024-10-15 14:02:20.866714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 
00:28:07.333 [2024-10-15 14:02:20.866720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:07.333 [2024-10-15 14:02:20.866726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:07.334 [2024-10-15 14:02:20.866732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:07.334 [2024-10-15 14:02:20.866737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:28:07.334 [2024-10-15 14:02:20.866743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:28:07.334 [2024-10-15 14:02:20.866748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:28:07.334 [2024-10-15 14:02:20.866754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:28:07.334 [2024-10-15 14:02:20.866760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:28:07.334 [2024-10-15 14:02:20.866766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:28:07.334 [2024-10-15 14:02:20.866772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:28:07.334 [2024-10-15 14:02:20.866779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:28:07.334 [2024-10-15 14:02:20.866784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:28:07.334 [2024-10-15 14:02:20.866790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:28:07.334 [2024-10-15 14:02:20.866796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:28:07.334 [2024-10-15 14:02:20.866803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:28:07.334 [2024-10-15 14:02:20.866809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:28:07.334 [2024-10-15 14:02:20.866815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:28:07.334 [2024-10-15 14:02:20.866821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:28:07.334 [2024-10-15 14:02:20.866826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:28:07.334 [2024-10-15 14:02:20.866832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:28:07.334 [2024-10-15 14:02:20.866837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:28:07.334 [2024-10-15 14:02:20.866843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:28:07.334 [2024-10-15 14:02:20.866849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:28:07.334 [2024-10-15 14:02:20.866855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:28:07.334 [2024-10-15 14:02:20.866860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 
wr_cnt: 0 state: free 00:28:07.334 [2024-10-15 14:02:20.866866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:28:07.334 [2024-10-15 14:02:20.866871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:28:07.334 [2024-10-15 14:02:20.866877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:28:07.334 [2024-10-15 14:02:20.866883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:28:07.334 [2024-10-15 14:02:20.866888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:28:07.334 [2024-10-15 14:02:20.866894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:28:07.334 [2024-10-15 14:02:20.866901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:28:07.334 [2024-10-15 14:02:20.866915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:28:07.334 [2024-10-15 14:02:20.866921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:28:07.334 [2024-10-15 14:02:20.866927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:28:07.334 [2024-10-15 14:02:20.866933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:28:07.334 [2024-10-15 14:02:20.866939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:28:07.334 [2024-10-15 14:02:20.866944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:28:07.334 [2024-10-15 14:02:20.866950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:28:07.334 [2024-10-15 14:02:20.866955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:28:07.334 [2024-10-15 14:02:20.866961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:28:07.334 [2024-10-15 14:02:20.866966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:28:07.334 [2024-10-15 14:02:20.866973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:28:07.334 [2024-10-15 14:02:20.866979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:28:07.334 [2024-10-15 14:02:20.866985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:28:07.334 [2024-10-15 14:02:20.866991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:28:07.334 [2024-10-15 14:02:20.866997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:28:07.334 [2024-10-15 14:02:20.867003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:28:07.334 [2024-10-15 14:02:20.867009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:28:07.334 [2024-10-15 14:02:20.867014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 65: 0 / 261120 wr_cnt: 0 state: free 00:28:07.334 [2024-10-15 14:02:20.867021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:28:07.334 [2024-10-15 14:02:20.867026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:28:07.334 [2024-10-15 14:02:20.867032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:28:07.334 [2024-10-15 14:02:20.867038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:28:07.334 [2024-10-15 14:02:20.867044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:28:07.334 [2024-10-15 14:02:20.867050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:28:07.334 [2024-10-15 14:02:20.867055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:28:07.334 [2024-10-15 14:02:20.867061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:28:07.334 [2024-10-15 14:02:20.867067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:28:07.334 [2024-10-15 14:02:20.867073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:28:07.334 [2024-10-15 14:02:20.867079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:28:07.334 [2024-10-15 14:02:20.867084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:28:07.334 [2024-10-15 14:02:20.867090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:28:07.334 [2024-10-15 14:02:20.867095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:28:07.334 [2024-10-15 14:02:20.867101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:28:07.334 [2024-10-15 14:02:20.867124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:28:07.334 [2024-10-15 14:02:20.867131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:28:07.334 [2024-10-15 14:02:20.867137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:28:07.334 [2024-10-15 14:02:20.867143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:28:07.334 [2024-10-15 14:02:20.867148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:28:07.334 [2024-10-15 14:02:20.867154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:28:07.334 [2024-10-15 14:02:20.867161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:28:07.334 [2024-10-15 14:02:20.867166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:28:07.334 [2024-10-15 14:02:20.867172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:28:07.334 [2024-10-15 14:02:20.867179] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:28:07.334 [2024-10-15 14:02:20.867185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:28:07.334 [2024-10-15 14:02:20.867191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:28:07.334 [2024-10-15 14:02:20.867197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:28:07.334 [2024-10-15 14:02:20.867202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:28:07.334 [2024-10-15 14:02:20.867208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:28:07.334 [2024-10-15 14:02:20.867214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:28:07.334 [2024-10-15 14:02:20.867229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:28:07.334 [2024-10-15 14:02:20.867235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:28:07.334 [2024-10-15 14:02:20.867241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:28:07.334 [2024-10-15 14:02:20.867247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:28:07.334 [2024-10-15 14:02:20.867259] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:28:07.334 [2024-10-15 14:02:20.867266] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 9a01225a-bd24-4d03-b10f-ba453d044a70 00:28:07.334 [2024-10-15 14:02:20.867272] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 129792 00:28:07.334 [2024-10-15 14:02:20.867278] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 129824 00:28:07.334 [2024-10-15 14:02:20.867284] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 129792 00:28:07.334 [2024-10-15 14:02:20.867290] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0002 00:28:07.334 [2024-10-15 14:02:20.867296] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:28:07.334 [2024-10-15 14:02:20.867302] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:28:07.334 [2024-10-15 14:02:20.867308] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:28:07.334 [2024-10-15 14:02:20.867313] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:28:07.334 [2024-10-15 14:02:20.867318] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:28:07.334 [2024-10-15 14:02:20.867323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:07.334 [2024-10-15 14:02:20.867331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:28:07.335 [2024-10-15 14:02:20.867337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.702 ms 00:28:07.335 [2024-10-15 14:02:20.867342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.335 [2024-10-15 14:02:20.876996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:07.335 [2024-10-15 14:02:20.877023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:28:07.335 [2024-10-15 14:02:20.877031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
9.643 ms 00:28:07.335 [2024-10-15 14:02:20.877038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.335 [2024-10-15 14:02:20.877323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:07.335 [2024-10-15 14:02:20.877337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:28:07.335 [2024-10-15 14:02:20.877345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.272 ms 00:28:07.335 [2024-10-15 14:02:20.877350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.335 [2024-10-15 14:02:20.903911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:07.335 [2024-10-15 14:02:20.903941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:07.335 [2024-10-15 14:02:20.903949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:07.335 [2024-10-15 14:02:20.903958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.335 [2024-10-15 14:02:20.904005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:07.335 [2024-10-15 14:02:20.904012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:07.335 [2024-10-15 14:02:20.904018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:07.335 [2024-10-15 14:02:20.904024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.335 [2024-10-15 14:02:20.904057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:07.335 [2024-10-15 14:02:20.904064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:07.335 [2024-10-15 14:02:20.904070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:07.335 [2024-10-15 14:02:20.904076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.335 [2024-10-15 14:02:20.904090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:07.335 [2024-10-15 14:02:20.904097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:07.335 [2024-10-15 14:02:20.904103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:07.335 [2024-10-15 14:02:20.904111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.335 [2024-10-15 14:02:20.964924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:07.335 [2024-10-15 14:02:20.964959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:07.335 [2024-10-15 14:02:20.964968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:07.335 [2024-10-15 14:02:20.964978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.335 [2024-10-15 14:02:21.014120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:07.335 [2024-10-15 14:02:21.014158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:07.335 [2024-10-15 14:02:21.014167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:07.335 [2024-10-15 14:02:21.014177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.335 [2024-10-15 14:02:21.014248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:07.335 [2024-10-15 14:02:21.014257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:07.335 [2024-10-15 
14:02:21.014264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:07.335 [2024-10-15 14:02:21.014270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.335 [2024-10-15 14:02:21.014297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:07.335 [2024-10-15 14:02:21.014306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:07.335 [2024-10-15 14:02:21.014312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:07.335 [2024-10-15 14:02:21.014317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.335 [2024-10-15 14:02:21.014375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:07.335 [2024-10-15 14:02:21.014382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:07.335 [2024-10-15 14:02:21.014388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:07.335 [2024-10-15 14:02:21.014394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.335 [2024-10-15 14:02:21.014412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:07.335 [2024-10-15 14:02:21.014420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:28:07.335 [2024-10-15 14:02:21.014426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:07.335 [2024-10-15 14:02:21.014431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.335 [2024-10-15 14:02:21.014458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:07.335 [2024-10-15 14:02:21.014465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:07.335 [2024-10-15 14:02:21.014471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:07.335 [2024-10-15 14:02:21.014476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.335 [2024-10-15 14:02:21.014508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:07.335 [2024-10-15 14:02:21.014519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:07.335 [2024-10-15 14:02:21.014525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:07.335 [2024-10-15 14:02:21.014530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.335 [2024-10-15 14:02:21.014619] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 163.586 ms, result 0 00:28:09.236 00:28:09.236 00:28:09.236 14:02:22 ftl.ftl_restore_fast -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:28:09.236 [2024-10-15 14:02:22.712125] Starting SPDK v25.01-pre git sha1 5a8c76d99 / DPDK 24.03.0 initialization... 
00:28:09.236 [2024-10-15 14:02:22.712235] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80391 ] 00:28:09.236 [2024-10-15 14:02:22.845452] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:09.236 [2024-10-15 14:02:22.926647] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:09.495 [2024-10-15 14:02:23.137028] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:09.495 [2024-10-15 14:02:23.137078] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:09.755 [2024-10-15 14:02:23.283843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:09.755 [2024-10-15 14:02:23.283884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:28:09.755 [2024-10-15 14:02:23.283895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:28:09.755 [2024-10-15 14:02:23.283905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:09.755 [2024-10-15 14:02:23.283939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:09.755 [2024-10-15 14:02:23.283947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:09.755 [2024-10-15 14:02:23.283954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:28:09.755 [2024-10-15 14:02:23.283961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:09.755 [2024-10-15 14:02:23.283975] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:28:09.755 [2024-10-15 14:02:23.284510] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:28:09.755 [2024-10-15 14:02:23.284522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:09.755 [2024-10-15 14:02:23.284531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:09.755 [2024-10-15 14:02:23.284537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.551 ms 00:28:09.755 [2024-10-15 14:02:23.284543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:09.755 [2024-10-15 14:02:23.284740] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:28:09.755 [2024-10-15 14:02:23.284757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:09.755 [2024-10-15 14:02:23.284764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:28:09.755 [2024-10-15 14:02:23.284771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:28:09.755 [2024-10-15 14:02:23.284780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:09.755 [2024-10-15 14:02:23.284833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:09.755 [2024-10-15 14:02:23.284842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:28:09.755 [2024-10-15 14:02:23.284848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:28:09.755 [2024-10-15 14:02:23.284854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:09.755 [2024-10-15 14:02:23.285050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:28:09.755 [2024-10-15 14:02:23.285059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:09.755 [2024-10-15 14:02:23.285067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.172 ms 00:28:09.755 [2024-10-15 14:02:23.285073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:09.755 [2024-10-15 14:02:23.285123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:09.755 [2024-10-15 14:02:23.285130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:09.755 [2024-10-15 14:02:23.285136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:28:09.755 [2024-10-15 14:02:23.285141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:09.755 [2024-10-15 14:02:23.285158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:09.755 [2024-10-15 14:02:23.285165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:28:09.755 [2024-10-15 14:02:23.285171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:28:09.755 [2024-10-15 14:02:23.285177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:09.755 [2024-10-15 14:02:23.285191] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:28:09.755 [2024-10-15 14:02:23.288092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:09.755 [2024-10-15 14:02:23.288119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:09.755 [2024-10-15 14:02:23.288128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.903 ms 00:28:09.755 [2024-10-15 14:02:23.288134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:09.755 [2024-10-15 14:02:23.288159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:09.755 [2024-10-15 14:02:23.288167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:28:09.755 [2024-10-15 14:02:23.288173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:28:09.755 [2024-10-15 14:02:23.288178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:09.755 [2024-10-15 14:02:23.288210] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:28:09.755 [2024-10-15 14:02:23.288237] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:28:09.755 [2024-10-15 14:02:23.288265] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:28:09.755 [2024-10-15 14:02:23.288279] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:28:09.755 [2024-10-15 14:02:23.288360] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:28:09.755 [2024-10-15 14:02:23.288368] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:28:09.755 [2024-10-15 14:02:23.288376] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:28:09.755 [2024-10-15 14:02:23.288384] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:28:09.755 [2024-10-15 14:02:23.288391] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:28:09.755 [2024-10-15 14:02:23.288397] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:28:09.755 [2024-10-15 14:02:23.288403] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:28:09.755 [2024-10-15 14:02:23.288411] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:28:09.755 [2024-10-15 14:02:23.288416] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:28:09.755 [2024-10-15 14:02:23.288422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:09.755 [2024-10-15 14:02:23.288428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:28:09.755 [2024-10-15 14:02:23.288435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.213 ms 00:28:09.755 [2024-10-15 14:02:23.288440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:09.755 [2024-10-15 14:02:23.288504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:09.755 [2024-10-15 14:02:23.288512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:28:09.755 [2024-10-15 14:02:23.288517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:28:09.755 [2024-10-15 14:02:23.288523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:09.755 [2024-10-15 14:02:23.288599] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:28:09.755 [2024-10-15 14:02:23.288607] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:28:09.755 [2024-10-15 14:02:23.288613] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:09.755 [2024-10-15 14:02:23.288619] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:09.755 [2024-10-15 14:02:23.288625] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:28:09.755 [2024-10-15 14:02:23.288630] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:28:09.756 [2024-10-15 14:02:23.288635] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:28:09.756 [2024-10-15 14:02:23.288640] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:28:09.756 [2024-10-15 14:02:23.288646] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:28:09.756 [2024-10-15 14:02:23.288651] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:09.756 [2024-10-15 14:02:23.288656] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:28:09.756 [2024-10-15 14:02:23.288661] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:28:09.756 [2024-10-15 14:02:23.288666] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:09.756 [2024-10-15 14:02:23.288673] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:28:09.756 [2024-10-15 14:02:23.288679] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:28:09.756 [2024-10-15 14:02:23.288684] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:09.756 [2024-10-15 14:02:23.288689] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:28:09.756 [2024-10-15 14:02:23.288699] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:28:09.756 [2024-10-15 14:02:23.288704] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:09.756 [2024-10-15 14:02:23.288709] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:28:09.756 [2024-10-15 14:02:23.288715] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:28:09.756 [2024-10-15 14:02:23.288720] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:09.756 [2024-10-15 14:02:23.288725] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:28:09.756 [2024-10-15 14:02:23.288730] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:28:09.756 [2024-10-15 14:02:23.288735] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:09.756 [2024-10-15 14:02:23.288740] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:28:09.756 [2024-10-15 14:02:23.288746] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:28:09.756 [2024-10-15 14:02:23.288751] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:09.756 [2024-10-15 14:02:23.288755] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:28:09.756 [2024-10-15 14:02:23.288760] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:28:09.756 [2024-10-15 14:02:23.288765] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:09.756 [2024-10-15 14:02:23.288770] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:28:09.756 [2024-10-15 14:02:23.288775] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:28:09.756 [2024-10-15 14:02:23.288781] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:09.756 [2024-10-15 14:02:23.288786] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:28:09.756 [2024-10-15 14:02:23.288791] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:28:09.756 [2024-10-15 14:02:23.288796] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:09.756 [2024-10-15 14:02:23.288800] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:28:09.756 [2024-10-15 14:02:23.288805] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:28:09.756 [2024-10-15 14:02:23.288810] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:09.756 [2024-10-15 14:02:23.288815] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:28:09.756 [2024-10-15 14:02:23.288820] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:28:09.756 [2024-10-15 14:02:23.288825] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:09.756 [2024-10-15 14:02:23.288830] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:28:09.756 [2024-10-15 14:02:23.288836] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:28:09.756 [2024-10-15 14:02:23.288842] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:09.756 [2024-10-15 14:02:23.288848] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:09.756 [2024-10-15 14:02:23.288853] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:28:09.756 [2024-10-15 14:02:23.288859] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:28:09.756 [2024-10-15 14:02:23.288864] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:28:09.756 
[2024-10-15 14:02:23.288869] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:28:09.756 [2024-10-15 14:02:23.288874] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:28:09.756 [2024-10-15 14:02:23.288880] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:28:09.756 [2024-10-15 14:02:23.288886] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:28:09.756 [2024-10-15 14:02:23.288893] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:09.756 [2024-10-15 14:02:23.288901] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:28:09.756 [2024-10-15 14:02:23.288906] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:28:09.756 [2024-10-15 14:02:23.288912] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:28:09.756 [2024-10-15 14:02:23.288917] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:28:09.756 [2024-10-15 14:02:23.288923] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:28:09.756 [2024-10-15 14:02:23.288928] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:28:09.756 [2024-10-15 14:02:23.288934] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:28:09.756 [2024-10-15 14:02:23.288939] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:28:09.756 [2024-10-15 14:02:23.288944] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:28:09.756 [2024-10-15 14:02:23.288950] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:28:09.756 [2024-10-15 14:02:23.288956] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:28:09.756 [2024-10-15 14:02:23.288961] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:28:09.756 [2024-10-15 14:02:23.288967] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:28:09.756 [2024-10-15 14:02:23.288972] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:28:09.756 [2024-10-15 14:02:23.288978] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:28:09.756 [2024-10-15 14:02:23.288984] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:09.756 [2024-10-15 14:02:23.288990] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:28:09.756 [2024-10-15 14:02:23.288995] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:28:09.756 [2024-10-15 14:02:23.289000] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:28:09.756 [2024-10-15 14:02:23.289006] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:28:09.756 [2024-10-15 14:02:23.289011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:09.756 [2024-10-15 14:02:23.289017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:28:09.756 [2024-10-15 14:02:23.289023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.465 ms 00:28:09.756 [2024-10-15 14:02:23.289036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:09.756 [2024-10-15 14:02:23.307860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:09.756 [2024-10-15 14:02:23.307890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:09.756 [2024-10-15 14:02:23.307898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.793 ms 00:28:09.756 [2024-10-15 14:02:23.307904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:09.756 [2024-10-15 14:02:23.307973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:09.756 [2024-10-15 14:02:23.307981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:28:09.756 [2024-10-15 14:02:23.307987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:28:09.756 [2024-10-15 14:02:23.307993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:09.756 [2024-10-15 14:02:23.348985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:09.756 [2024-10-15 14:02:23.349021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:09.756 [2024-10-15 14:02:23.349031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.952 ms 00:28:09.756 [2024-10-15 14:02:23.349038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:09.756 [2024-10-15 14:02:23.349071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:09.756 [2024-10-15 14:02:23.349079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:09.756 [2024-10-15 14:02:23.349089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:28:09.756 [2024-10-15 14:02:23.349096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:09.756 [2024-10-15 14:02:23.349168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:09.756 [2024-10-15 14:02:23.349177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:09.756 [2024-10-15 14:02:23.349184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:28:09.756 [2024-10-15 14:02:23.349189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:09.756 [2024-10-15 14:02:23.349292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:09.756 [2024-10-15 14:02:23.349299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:09.756 [2024-10-15 14:02:23.349306] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:28:09.756 [2024-10-15 14:02:23.349313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:09.756 [2024-10-15 14:02:23.359920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:09.756 [2024-10-15 14:02:23.359948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:09.756 [2024-10-15 14:02:23.359958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.592 ms 00:28:09.756 [2024-10-15 14:02:23.359964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:09.756 [2024-10-15 14:02:23.360052] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:28:09.756 [2024-10-15 14:02:23.360062] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:28:09.756 [2024-10-15 14:02:23.360070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:09.756 [2024-10-15 14:02:23.360077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:28:09.757 [2024-10-15 14:02:23.360083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:28:09.757 [2024-10-15 14:02:23.360090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:09.757 [2024-10-15 14:02:23.369618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:09.757 [2024-10-15 14:02:23.369643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:28:09.757 [2024-10-15 14:02:23.369651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.516 ms 00:28:09.757 [2024-10-15 14:02:23.369658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:09.757 [2024-10-15 14:02:23.369745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:09.757 [2024-10-15 14:02:23.369753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:28:09.757 [2024-10-15 14:02:23.369759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:28:09.757 [2024-10-15 14:02:23.369765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:09.757 [2024-10-15 14:02:23.369789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:09.757 [2024-10-15 14:02:23.369799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:28:09.757 [2024-10-15 14:02:23.369806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.001 ms 00:28:09.757 [2024-10-15 14:02:23.369812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:09.757 [2024-10-15 14:02:23.370271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:09.757 [2024-10-15 14:02:23.370285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:28:09.757 [2024-10-15 14:02:23.370292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.431 ms 00:28:09.757 [2024-10-15 14:02:23.370298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:09.757 [2024-10-15 14:02:23.370310] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:28:09.757 [2024-10-15 14:02:23.370318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:09.757 [2024-10-15 14:02:23.370327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Restore P2L checkpoints 00:28:09.757 [2024-10-15 14:02:23.370333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:28:09.757 [2024-10-15 14:02:23.370339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:09.757 [2024-10-15 14:02:23.379067] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:28:09.757 [2024-10-15 14:02:23.379174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:09.757 [2024-10-15 14:02:23.379182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:28:09.757 [2024-10-15 14:02:23.379189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.812 ms 00:28:09.757 [2024-10-15 14:02:23.379196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:09.757 [2024-10-15 14:02:23.380938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:09.757 [2024-10-15 14:02:23.380961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:28:09.757 [2024-10-15 14:02:23.380969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.728 ms 00:28:09.757 [2024-10-15 14:02:23.380978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:09.757 [2024-10-15 14:02:23.381030] mngt/ftl_mngt_band.c: 414:ftl_mngt_finalize_init_bands: *NOTICE*: [FTL][ftl0] SHM: band open P2L map df_id 0x2400000 00:28:09.757 [2024-10-15 14:02:23.381398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:09.757 [2024-10-15 14:02:23.381412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:28:09.757 [2024-10-15 14:02:23.381419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.381 ms 00:28:09.757 [2024-10-15 14:02:23.381430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:09.757 [2024-10-15 14:02:23.381458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:09.757 [2024-10-15 14:02:23.381465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:28:09.757 [2024-10-15 14:02:23.381474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:28:09.757 [2024-10-15 14:02:23.381480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:09.757 [2024-10-15 14:02:23.381504] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:28:09.757 [2024-10-15 14:02:23.381512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:09.757 [2024-10-15 14:02:23.381518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:28:09.757 [2024-10-15 14:02:23.381524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:28:09.757 [2024-10-15 14:02:23.381529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:09.757 [2024-10-15 14:02:23.400015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:09.757 [2024-10-15 14:02:23.400047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:28:09.757 [2024-10-15 14:02:23.400056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.472 ms 00:28:09.757 [2024-10-15 14:02:23.400062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:09.757 [2024-10-15 14:02:23.400117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:09.757 [2024-10-15 14:02:23.400125] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:28:09.757 [2024-10-15 14:02:23.400132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:28:09.757 [2024-10-15 14:02:23.400138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:09.757 [2024-10-15 14:02:23.400862] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 116.695 ms, result 0 00:28:11.144  [2024-10-15T14:02:25.876Z] Copying: 33/1024 [MB] (33 MBps) [2024-10-15T14:02:26.818Z] Copying: 53/1024 [MB] (19 MBps) [2024-10-15T14:02:27.761Z] Copying: 74/1024 [MB] (21 MBps) [2024-10-15T14:02:28.703Z] Copying: 104/1024 [MB] (29 MBps) [2024-10-15T14:02:29.646Z] Copying: 135/1024 [MB] (30 MBps) [2024-10-15T14:02:30.591Z] Copying: 157/1024 [MB] (21 MBps) [2024-10-15T14:02:31.978Z] Copying: 175/1024 [MB] (17 MBps) [2024-10-15T14:02:32.552Z] Copying: 191/1024 [MB] (16 MBps) [2024-10-15T14:02:33.940Z] Copying: 235/1024 [MB] (43 MBps) [2024-10-15T14:02:34.881Z] Copying: 261/1024 [MB] (26 MBps) [2024-10-15T14:02:35.815Z] Copying: 277/1024 [MB] (15 MBps) [2024-10-15T14:02:36.751Z] Copying: 298/1024 [MB] (21 MBps) [2024-10-15T14:02:37.707Z] Copying: 311/1024 [MB] (13 MBps) [2024-10-15T14:02:38.640Z] Copying: 324/1024 [MB] (12 MBps) [2024-10-15T14:02:39.573Z] Copying: 344/1024 [MB] (19 MBps) [2024-10-15T14:02:40.951Z] Copying: 358/1024 [MB] (14 MBps) [2024-10-15T14:02:41.883Z] Copying: 371/1024 [MB] (12 MBps) [2024-10-15T14:02:42.815Z] Copying: 392/1024 [MB] (21 MBps) [2024-10-15T14:02:43.749Z] Copying: 407/1024 [MB] (14 MBps) [2024-10-15T14:02:44.682Z] Copying: 419/1024 [MB] (11 MBps) [2024-10-15T14:02:45.616Z] Copying: 438/1024 [MB] (19 MBps) [2024-10-15T14:02:46.585Z] Copying: 448/1024 [MB] (10 MBps) [2024-10-15T14:02:47.959Z] Copying: 458/1024 [MB] (10 MBps) [2024-10-15T14:02:48.894Z] Copying: 470/1024 [MB] (11 MBps) [2024-10-15T14:02:49.828Z] Copying: 482/1024 [MB] (11 MBps) [2024-10-15T14:02:50.762Z] Copying: 494/1024 [MB] (12 MBps) [2024-10-15T14:02:51.695Z] Copying: 506/1024 [MB] (12 MBps) [2024-10-15T14:02:52.630Z] Copying: 519/1024 [MB] (12 MBps) [2024-10-15T14:02:53.561Z] Copying: 531/1024 [MB] (12 MBps) [2024-10-15T14:02:54.938Z] Copying: 544/1024 [MB] (13 MBps) [2024-10-15T14:02:55.880Z] Copying: 557/1024 [MB] (12 MBps) [2024-10-15T14:02:56.883Z] Copying: 568/1024 [MB] (11 MBps) [2024-10-15T14:02:57.821Z] Copying: 578/1024 [MB] (10 MBps) [2024-10-15T14:02:58.755Z] Copying: 592/1024 [MB] (13 MBps) [2024-10-15T14:02:59.695Z] Copying: 602/1024 [MB] (10 MBps) [2024-10-15T14:03:00.629Z] Copying: 613/1024 [MB] (10 MBps) [2024-10-15T14:03:01.564Z] Copying: 624/1024 [MB] (11 MBps) [2024-10-15T14:03:02.948Z] Copying: 634/1024 [MB] (10 MBps) [2024-10-15T14:03:03.891Z] Copying: 649/1024 [MB] (14 MBps) [2024-10-15T14:03:04.831Z] Copying: 675008/1048576 [kB] (9716 kBps) [2024-10-15T14:03:05.765Z] Copying: 684708/1048576 [kB] (9700 kBps) [2024-10-15T14:03:06.712Z] Copying: 679/1024 [MB] (10 MBps) [2024-10-15T14:03:07.645Z] Copying: 690/1024 [MB] (10 MBps) [2024-10-15T14:03:08.578Z] Copying: 701/1024 [MB] (10 MBps) [2024-10-15T14:03:09.954Z] Copying: 711/1024 [MB] (10 MBps) [2024-10-15T14:03:10.896Z] Copying: 723/1024 [MB] (11 MBps) [2024-10-15T14:03:11.841Z] Copying: 735/1024 [MB] (12 MBps) [2024-10-15T14:03:12.784Z] Copying: 762268/1048576 [kB] (9404 kBps) [2024-10-15T14:03:13.729Z] Copying: 771804/1048576 [kB] (9536 kBps) [2024-10-15T14:03:14.676Z] Copying: 781696/1048576 [kB] (9892 kBps) [2024-10-15T14:03:15.617Z] 
Copying: 773/1024 [MB] (10 MBps) [2024-10-15T14:03:16.553Z] Copying: 783/1024 [MB] (10 MBps) [2024-10-15T14:03:17.940Z] Copying: 794/1024 [MB] (10 MBps) [2024-10-15T14:03:18.883Z] Copying: 804/1024 [MB] (10 MBps) [2024-10-15T14:03:19.825Z] Copying: 834064/1048576 [kB] (10176 kBps) [2024-10-15T14:03:20.770Z] Copying: 843788/1048576 [kB] (9724 kBps) [2024-10-15T14:03:21.714Z] Copying: 853976/1048576 [kB] (10188 kBps) [2024-10-15T14:03:22.660Z] Copying: 864052/1048576 [kB] (10076 kBps) [2024-10-15T14:03:23.607Z] Copying: 856/1024 [MB] (12 MBps) [2024-10-15T14:03:24.550Z] Copying: 887212/1048576 [kB] (10172 kBps) [2024-10-15T14:03:25.926Z] Copying: 897156/1048576 [kB] (9944 kBps) [2024-10-15T14:03:26.869Z] Copying: 886/1024 [MB] (10 MBps) [2024-10-15T14:03:27.807Z] Copying: 896/1024 [MB] (10 MBps) [2024-10-15T14:03:28.755Z] Copying: 906/1024 [MB] (10 MBps) [2024-10-15T14:03:29.689Z] Copying: 917/1024 [MB] (10 MBps) [2024-10-15T14:03:30.628Z] Copying: 927/1024 [MB] (10 MBps) [2024-10-15T14:03:31.588Z] Copying: 938/1024 [MB] (10 MBps) [2024-10-15T14:03:32.543Z] Copying: 948/1024 [MB] (10 MBps) [2024-10-15T14:03:33.917Z] Copying: 961/1024 [MB] (12 MBps) [2024-10-15T14:03:34.855Z] Copying: 973/1024 [MB] (12 MBps) [2024-10-15T14:03:35.797Z] Copying: 986/1024 [MB] (12 MBps) [2024-10-15T14:03:36.736Z] Copying: 996/1024 [MB] (10 MBps) [2024-10-15T14:03:37.674Z] Copying: 1030660/1048576 [kB] (9884 kBps) [2024-10-15T14:03:38.606Z] Copying: 1040640/1048576 [kB] (9980 kBps) [2024-10-15T14:03:38.606Z] Copying: 1024/1024 [MB] (average 13 MBps)[2024-10-15 14:03:38.379170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:24.818 [2024-10-15 14:03:38.379424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:29:24.818 [2024-10-15 14:03:38.379491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:29:24.818 [2024-10-15 14:03:38.379518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:24.818 [2024-10-15 14:03:38.379564] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:29:24.818 [2024-10-15 14:03:38.383580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:24.818 [2024-10-15 14:03:38.383688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:29:24.818 [2024-10-15 14:03:38.383741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.971 ms 00:29:24.818 [2024-10-15 14:03:38.383765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:24.818 [2024-10-15 14:03:38.384023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:24.818 [2024-10-15 14:03:38.384051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:29:24.818 [2024-10-15 14:03:38.384076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.219 ms 00:29:24.818 [2024-10-15 14:03:38.384128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:24.818 [2024-10-15 14:03:38.384173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:24.818 [2024-10-15 14:03:38.384196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:29:24.818 [2024-10-15 14:03:38.384229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:29:24.818 [2024-10-15 14:03:38.384251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:24.818 [2024-10-15 14:03:38.384314] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:24.818 [2024-10-15 14:03:38.384352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:29:24.818 [2024-10-15 14:03:38.384373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:29:24.818 [2024-10-15 14:03:38.384397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:24.818 [2024-10-15 14:03:38.384424] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:29:24.818 [2024-10-15 14:03:38.384449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:29:24.818 [2024-10-15 14:03:38.384523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:29:24.818 [2024-10-15 14:03:38.384556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:29:24.818 [2024-10-15 14:03:38.384588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:24.818 [2024-10-15 14:03:38.384618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:24.818 [2024-10-15 14:03:38.384670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:24.818 [2024-10-15 14:03:38.384703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:24.818 [2024-10-15 14:03:38.384735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:24.818 [2024-10-15 14:03:38.384766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:24.818 [2024-10-15 14:03:38.384797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:24.818 [2024-10-15 14:03:38.384828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:24.818 [2024-10-15 14:03:38.384859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:24.818 [2024-10-15 14:03:38.384928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:24.818 [2024-10-15 14:03:38.384960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:24.818 [2024-10-15 14:03:38.384991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:24.818 [2024-10-15 14:03:38.385022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:24.818 [2024-10-15 14:03:38.385053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:24.818 [2024-10-15 14:03:38.385084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:24.818 [2024-10-15 14:03:38.385115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:29:24.818 [2024-10-15 14:03:38.385180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:29:24.818 [2024-10-15 14:03:38.385212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:29:24.818 [2024-10-15 
14:03:38.385258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:29:24.818 [2024-10-15 14:03:38.385290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:29:24.818 [2024-10-15 14:03:38.385347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:29:24.818 [2024-10-15 14:03:38.385381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:29:24.818 [2024-10-15 14:03:38.385412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:29:24.818 [2024-10-15 14:03:38.385444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:29:24.818 [2024-10-15 14:03:38.385475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:29:24.818 [2024-10-15 14:03:38.385530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:29:24.818 [2024-10-15 14:03:38.385562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:29:24.818 [2024-10-15 14:03:38.385594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:29:24.818 [2024-10-15 14:03:38.385625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:29:24.818 [2024-10-15 14:03:38.385656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:29:24.818 [2024-10-15 14:03:38.385687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:29:24.818 [2024-10-15 14:03:38.385717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:29:24.818 [2024-10-15 14:03:38.385787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:29:24.818 [2024-10-15 14:03:38.385810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:29:24.818 [2024-10-15 14:03:38.385820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:29:24.818 [2024-10-15 14:03:38.385828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:29:24.818 [2024-10-15 14:03:38.385837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:29:24.818 [2024-10-15 14:03:38.385846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:29:24.818 [2024-10-15 14:03:38.385854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:29:24.818 [2024-10-15 14:03:38.385863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:29:24.818 [2024-10-15 14:03:38.385871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:29:24.818 [2024-10-15 14:03:38.385880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:29:24.818 [2024-10-15 14:03:38.385888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 
00:29:24.818 [2024-10-15 14:03:38.385897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:29:24.818 [2024-10-15 14:03:38.385914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:29:24.818 [2024-10-15 14:03:38.385922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:29:24.818 [2024-10-15 14:03:38.385931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:29:24.819 [2024-10-15 14:03:38.385939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:29:24.819 [2024-10-15 14:03:38.385948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:29:24.819 [2024-10-15 14:03:38.385956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:29:24.819 [2024-10-15 14:03:38.385965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:29:24.819 [2024-10-15 14:03:38.385973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:29:24.819 [2024-10-15 14:03:38.385981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:29:24.819 [2024-10-15 14:03:38.385989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:29:24.819 [2024-10-15 14:03:38.385998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:29:24.819 [2024-10-15 14:03:38.386006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:29:24.819 [2024-10-15 14:03:38.386014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:29:24.819 [2024-10-15 14:03:38.386023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:29:24.819 [2024-10-15 14:03:38.386031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:29:24.819 [2024-10-15 14:03:38.386039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:29:24.819 [2024-10-15 14:03:38.386047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:29:24.819 [2024-10-15 14:03:38.386056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:29:24.819 [2024-10-15 14:03:38.386064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:29:24.819 [2024-10-15 14:03:38.386073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:29:24.819 [2024-10-15 14:03:38.386081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:29:24.819 [2024-10-15 14:03:38.386089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:29:24.819 [2024-10-15 14:03:38.386097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:29:24.819 [2024-10-15 14:03:38.386105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 
wr_cnt: 0 state: free 00:29:24.819 [2024-10-15 14:03:38.386113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:29:24.819 [2024-10-15 14:03:38.386122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:29:24.819 [2024-10-15 14:03:38.386131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:29:24.819 [2024-10-15 14:03:38.386139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:29:24.819 [2024-10-15 14:03:38.386148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:29:24.819 [2024-10-15 14:03:38.386155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:29:24.819 [2024-10-15 14:03:38.386164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:29:24.819 [2024-10-15 14:03:38.386173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:29:24.819 [2024-10-15 14:03:38.386181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:29:24.819 [2024-10-15 14:03:38.386189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:29:24.819 [2024-10-15 14:03:38.386197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:29:24.819 [2024-10-15 14:03:38.386205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:29:24.819 [2024-10-15 14:03:38.386213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:29:24.819 [2024-10-15 14:03:38.386231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:29:24.819 [2024-10-15 14:03:38.386240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:29:24.819 [2024-10-15 14:03:38.386248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:29:24.819 [2024-10-15 14:03:38.386256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:29:24.819 [2024-10-15 14:03:38.386264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:29:24.819 [2024-10-15 14:03:38.386272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:29:24.819 [2024-10-15 14:03:38.386281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:29:24.819 [2024-10-15 14:03:38.386289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:29:24.819 [2024-10-15 14:03:38.386297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:29:24.819 [2024-10-15 14:03:38.386305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:29:24.819 [2024-10-15 14:03:38.386313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:29:24.819 [2024-10-15 14:03:38.386322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 96: 0 / 261120 wr_cnt: 0 state: free 00:29:24.819 [2024-10-15 14:03:38.386330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:29:24.819 [2024-10-15 14:03:38.386338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:29:24.819 [2024-10-15 14:03:38.386346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:29:24.819 [2024-10-15 14:03:38.386354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:29:24.819 [2024-10-15 14:03:38.386371] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:29:24.819 [2024-10-15 14:03:38.386380] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 9a01225a-bd24-4d03-b10f-ba453d044a70 00:29:24.819 [2024-10-15 14:03:38.386388] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:29:24.819 [2024-10-15 14:03:38.386396] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 1312 00:29:24.819 [2024-10-15 14:03:38.386404] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 1280 00:29:24.819 [2024-10-15 14:03:38.386413] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0250 00:29:24.819 [2024-10-15 14:03:38.386420] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:29:24.819 [2024-10-15 14:03:38.386428] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:29:24.819 [2024-10-15 14:03:38.386439] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:29:24.819 [2024-10-15 14:03:38.386447] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:29:24.819 [2024-10-15 14:03:38.386455] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:29:24.819 [2024-10-15 14:03:38.386463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:24.819 [2024-10-15 14:03:38.386472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:29:24.819 [2024-10-15 14:03:38.386480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.040 ms 00:29:24.819 [2024-10-15 14:03:38.386488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:24.819 [2024-10-15 14:03:38.399817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:24.819 [2024-10-15 14:03:38.399849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:29:24.819 [2024-10-15 14:03:38.399860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.308 ms 00:29:24.819 [2024-10-15 14:03:38.399867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:24.819 [2024-10-15 14:03:38.400210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:24.819 [2024-10-15 14:03:38.400239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:29:24.819 [2024-10-15 14:03:38.400248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.321 ms 00:29:24.819 [2024-10-15 14:03:38.400256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:24.819 [2024-10-15 14:03:38.432938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:24.819 [2024-10-15 14:03:38.432970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:24.819 [2024-10-15 14:03:38.432983] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:24.819 [2024-10-15 14:03:38.432991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:24.819 [2024-10-15 14:03:38.433038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:24.819 [2024-10-15 14:03:38.433046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:24.819 [2024-10-15 14:03:38.433054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:24.819 [2024-10-15 14:03:38.433061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:24.819 [2024-10-15 14:03:38.433106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:24.819 [2024-10-15 14:03:38.433115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:24.819 [2024-10-15 14:03:38.433123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:24.819 [2024-10-15 14:03:38.433133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:24.819 [2024-10-15 14:03:38.433148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:24.819 [2024-10-15 14:03:38.433156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:24.819 [2024-10-15 14:03:38.433163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:24.819 [2024-10-15 14:03:38.433170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:24.819 [2024-10-15 14:03:38.508829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:24.819 [2024-10-15 14:03:38.508867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:24.819 [2024-10-15 14:03:38.508877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:24.819 [2024-10-15 14:03:38.508888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:24.819 [2024-10-15 14:03:38.570589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:24.819 [2024-10-15 14:03:38.570755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:24.819 [2024-10-15 14:03:38.570776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:24.819 [2024-10-15 14:03:38.570784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:24.819 [2024-10-15 14:03:38.570854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:24.819 [2024-10-15 14:03:38.570864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:24.819 [2024-10-15 14:03:38.570872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:24.819 [2024-10-15 14:03:38.570879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:24.819 [2024-10-15 14:03:38.570916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:24.819 [2024-10-15 14:03:38.570924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:24.819 [2024-10-15 14:03:38.570932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:24.819 [2024-10-15 14:03:38.570939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:24.820 [2024-10-15 14:03:38.571008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:24.820 [2024-10-15 14:03:38.571017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory 
pools 00:29:24.820 [2024-10-15 14:03:38.571025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:24.820 [2024-10-15 14:03:38.571032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:24.820 [2024-10-15 14:03:38.571054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:24.820 [2024-10-15 14:03:38.571065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:29:24.820 [2024-10-15 14:03:38.571072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:24.820 [2024-10-15 14:03:38.571079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:24.820 [2024-10-15 14:03:38.571111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:24.820 [2024-10-15 14:03:38.571119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:24.820 [2024-10-15 14:03:38.571127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:24.820 [2024-10-15 14:03:38.571135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:24.820 [2024-10-15 14:03:38.571173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:24.820 [2024-10-15 14:03:38.571183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:24.820 [2024-10-15 14:03:38.571191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:24.820 [2024-10-15 14:03:38.571198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:24.820 [2024-10-15 14:03:38.571326] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 192.134 ms, result 0 00:29:25.752 00:29:25.752 00:29:25.752 14:03:39 ftl.ftl_restore_fast -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:29:27.679 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:29:27.679 14:03:41 ftl.ftl_restore_fast -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:29:27.679 14:03:41 ftl.ftl_restore_fast -- ftl/restore.sh@85 -- # restore_kill 00:29:27.679 14:03:41 ftl.ftl_restore_fast -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:29:27.940 14:03:41 ftl.ftl_restore_fast -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:29:27.940 14:03:41 ftl.ftl_restore_fast -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:29:27.940 Process with pid 79272 is not found 00:29:27.940 Remove shared memory files 00:29:27.940 14:03:41 ftl.ftl_restore_fast -- ftl/restore.sh@32 -- # killprocess 79272 00:29:27.940 14:03:41 ftl.ftl_restore_fast -- common/autotest_common.sh@950 -- # '[' -z 79272 ']' 00:29:27.940 14:03:41 ftl.ftl_restore_fast -- common/autotest_common.sh@954 -- # kill -0 79272 00:29:27.940 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (79272) - No such process 00:29:27.940 14:03:41 ftl.ftl_restore_fast -- common/autotest_common.sh@977 -- # echo 'Process with pid 79272 is not found' 00:29:27.940 14:03:41 ftl.ftl_restore_fast -- ftl/restore.sh@33 -- # remove_shm 00:29:27.940 14:03:41 ftl.ftl_restore_fast -- ftl/common.sh@204 -- # echo Remove shared memory files 00:29:27.940 14:03:41 ftl.ftl_restore_fast -- ftl/common.sh@205 -- # rm -f rm -f 00:29:27.940 14:03:41 ftl.ftl_restore_fast -- ftl/common.sh@206 -- # rm -f rm -f /dev/hugepages/ftl_9a01225a-bd24-4d03-b10f-ba453d044a70_band_md 
/dev/hugepages/ftl_9a01225a-bd24-4d03-b10f-ba453d044a70_l2p_l1 /dev/hugepages/ftl_9a01225a-bd24-4d03-b10f-ba453d044a70_l2p_l2 /dev/hugepages/ftl_9a01225a-bd24-4d03-b10f-ba453d044a70_l2p_l2_ctx /dev/hugepages/ftl_9a01225a-bd24-4d03-b10f-ba453d044a70_nvc_md /dev/hugepages/ftl_9a01225a-bd24-4d03-b10f-ba453d044a70_p2l_pool /dev/hugepages/ftl_9a01225a-bd24-4d03-b10f-ba453d044a70_sb /dev/hugepages/ftl_9a01225a-bd24-4d03-b10f-ba453d044a70_sb_shm /dev/hugepages/ftl_9a01225a-bd24-4d03-b10f-ba453d044a70_trim_bitmap /dev/hugepages/ftl_9a01225a-bd24-4d03-b10f-ba453d044a70_trim_log /dev/hugepages/ftl_9a01225a-bd24-4d03-b10f-ba453d044a70_trim_md /dev/hugepages/ftl_9a01225a-bd24-4d03-b10f-ba453d044a70_vmap 00:29:27.940 14:03:41 ftl.ftl_restore_fast -- ftl/common.sh@207 -- # rm -f rm -f 00:29:27.940 14:03:41 ftl.ftl_restore_fast -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:29:27.940 14:03:41 ftl.ftl_restore_fast -- ftl/common.sh@209 -- # rm -f rm -f 00:29:27.940 ************************************ 00:29:27.940 END TEST ftl_restore_fast 00:29:27.940 ************************************ 00:29:27.940 00:29:27.940 real 3m8.332s 00:29:27.940 user 2m58.735s 00:29:27.940 sys 0m10.851s 00:29:27.940 14:03:41 ftl.ftl_restore_fast -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:27.940 14:03:41 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x 00:29:27.940 Process with pid 72302 is not found 00:29:27.940 14:03:41 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit 00:29:27.940 14:03:41 ftl -- ftl/ftl.sh@14 -- # killprocess 72302 00:29:27.940 14:03:41 ftl -- common/autotest_common.sh@950 -- # '[' -z 72302 ']' 00:29:27.940 14:03:41 ftl -- common/autotest_common.sh@954 -- # kill -0 72302 00:29:27.940 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (72302) - No such process 00:29:27.940 14:03:41 ftl -- common/autotest_common.sh@977 -- # echo 'Process with pid 72302 is not found' 00:29:27.940 14:03:41 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]] 00:29:27.940 14:03:41 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=81210 00:29:27.940 14:03:41 ftl -- ftl/ftl.sh@20 -- # waitforlisten 81210 00:29:27.940 14:03:41 ftl -- common/autotest_common.sh@831 -- # '[' -z 81210 ']' 00:29:27.940 14:03:41 ftl -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:27.940 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:27.940 14:03:41 ftl -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:27.940 14:03:41 ftl -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:27.940 14:03:41 ftl -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:27.940 14:03:41 ftl -- common/autotest_common.sh@10 -- # set +x 00:29:27.941 14:03:41 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:27.941 [2024-10-15 14:03:41.712205] Starting SPDK v25.01-pre git sha1 5a8c76d99 / DPDK 24.03.0 initialization... 
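The killprocess traces a few records above (pids 79272 and 72302) probe the target with kill -0 before sending any signal; when the process has already exited, the probe fails with "No such process" and the helper only logs and returns. A minimal sketch of that guard, reconstructed from the xtrace rather than copied from test/common/autotest_common.sh (the real helper may differ in details):

```bash
#!/usr/bin/env bash
# Reconstruction of the liveness guard suggested by the xtrace above;
# the termination path is assumed, since these runs never reach it.
killprocess() {
    local pid=$1
    [ -z "$pid" ] && return 1                 # '[' -z <pid> ']' check from the trace
    if ! kill -0 "$pid" 2>/dev/null; then     # kill -0 only probes; it sends no signal
        echo "Process with pid $pid is not found"
        return 0
    fi
    kill "$pid" && wait "$pid"                # assumed: terminate and reap if still alive
}
```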
00:29:27.941 [2024-10-15 14:03:41.713034] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81210 ] 00:29:28.202 [2024-10-15 14:03:41.872735] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:28.461 [2024-10-15 14:03:41.999759] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:29.026 14:03:42 ftl -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:29.026 14:03:42 ftl -- common/autotest_common.sh@864 -- # return 0 00:29:29.026 14:03:42 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:29:29.284 nvme0n1 00:29:29.284 14:03:42 ftl -- ftl/ftl.sh@22 -- # clear_lvols 00:29:29.284 14:03:42 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:29:29.284 14:03:42 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:29:29.541 14:03:43 ftl -- ftl/common.sh@28 -- # stores=f562e068-f494-4e5e-a957-5fd14d3eaf0f 00:29:29.541 14:03:43 ftl -- ftl/common.sh@29 -- # for lvs in $stores 00:29:29.541 14:03:43 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u f562e068-f494-4e5e-a957-5fd14d3eaf0f 00:29:29.799 14:03:43 ftl -- ftl/ftl.sh@23 -- # killprocess 81210 00:29:29.799 14:03:43 ftl -- common/autotest_common.sh@950 -- # '[' -z 81210 ']' 00:29:29.799 14:03:43 ftl -- common/autotest_common.sh@954 -- # kill -0 81210 00:29:29.799 14:03:43 ftl -- common/autotest_common.sh@955 -- # uname 00:29:29.799 14:03:43 ftl -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:29.799 14:03:43 ftl -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81210 00:29:29.799 killing process with pid 81210 00:29:29.799 14:03:43 ftl -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:29.799 14:03:43 ftl -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:29.800 14:03:43 ftl -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81210' 00:29:29.800 14:03:43 ftl -- common/autotest_common.sh@969 -- # kill 81210 00:29:29.800 14:03:43 ftl -- common/autotest_common.sh@974 -- # wait 81210 00:29:31.179 14:03:44 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:29:31.439 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:29:31.439 Waiting for block devices as requested 00:29:31.439 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:29:31.696 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:29:31.696 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:29:31.696 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:29:36.964 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:29:36.964 Remove shared memory files 00:29:36.964 14:03:50 ftl -- ftl/ftl.sh@28 -- # remove_shm 00:29:36.964 14:03:50 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files 00:29:36.964 14:03:50 ftl -- ftl/common.sh@205 -- # rm -f rm -f 00:29:36.964 14:03:50 ftl -- ftl/common.sh@206 -- # rm -f rm -f 00:29:36.964 14:03:50 ftl -- ftl/common.sh@207 -- # rm -f rm -f 00:29:36.964 14:03:50 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:29:36.964 14:03:50 ftl -- ftl/common.sh@209 -- # rm -f rm -f 00:29:36.964 
************************************ 00:29:36.964 END TEST ftl 00:29:36.964 ************************************ 00:29:36.964 00:29:36.964 real 13m8.052s 00:29:36.964 user 15m19.821s 00:29:36.964 sys 1m15.124s 00:29:36.964 14:03:50 ftl -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:36.964 14:03:50 ftl -- common/autotest_common.sh@10 -- # set +x 00:29:36.964 14:03:50 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:29:36.964 14:03:50 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:29:36.964 14:03:50 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:29:36.964 14:03:50 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:29:36.964 14:03:50 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:29:36.964 14:03:50 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:29:36.964 14:03:50 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:29:36.964 14:03:50 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:29:36.964 14:03:50 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT 00:29:36.964 14:03:50 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:29:36.964 14:03:50 -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:36.964 14:03:50 -- common/autotest_common.sh@10 -- # set +x 00:29:36.964 14:03:50 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:29:36.964 14:03:50 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:29:36.964 14:03:50 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:29:36.964 14:03:50 -- common/autotest_common.sh@10 -- # set +x 00:29:38.338 INFO: APP EXITING 00:29:38.338 INFO: killing all VMs 00:29:38.338 INFO: killing vhost app 00:29:38.338 INFO: EXIT DONE 00:29:38.596 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:29:38.854 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:29:38.854 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:29:38.854 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:29:38.854 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:29:39.421 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:29:39.680 Cleaning 00:29:39.680 Removing: /var/run/dpdk/spdk0/config 00:29:39.680 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:29:39.680 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:29:39.680 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:29:39.680 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:29:39.680 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:29:39.680 Removing: /var/run/dpdk/spdk0/hugepage_info 00:29:39.680 Removing: /var/run/dpdk/spdk0 00:29:39.680 Removing: /var/run/dpdk/spdk_pid56926 00:29:39.680 Removing: /var/run/dpdk/spdk_pid57128 00:29:39.680 Removing: /var/run/dpdk/spdk_pid57341 00:29:39.680 Removing: /var/run/dpdk/spdk_pid57439 00:29:39.680 Removing: /var/run/dpdk/spdk_pid57479 00:29:39.680 Removing: /var/run/dpdk/spdk_pid57596 00:29:39.680 Removing: /var/run/dpdk/spdk_pid57614 00:29:39.680 Removing: /var/run/dpdk/spdk_pid57807 00:29:39.680 Removing: /var/run/dpdk/spdk_pid57900 00:29:39.680 Removing: /var/run/dpdk/spdk_pid57996 00:29:39.681 Removing: /var/run/dpdk/spdk_pid58102 00:29:39.681 Removing: /var/run/dpdk/spdk_pid58188 00:29:39.681 Removing: /var/run/dpdk/spdk_pid58233 00:29:39.681 Removing: /var/run/dpdk/spdk_pid58264 00:29:39.681 Removing: /var/run/dpdk/spdk_pid58340 00:29:39.681 Removing: /var/run/dpdk/spdk_pid58424 00:29:39.681 Removing: /var/run/dpdk/spdk_pid58849 00:29:39.681 Removing: /var/run/dpdk/spdk_pid58912 
00:29:39.681 Removing: /var/run/dpdk/spdk_pid58965 00:29:39.681 Removing: /var/run/dpdk/spdk_pid58981 00:29:39.681 Removing: /var/run/dpdk/spdk_pid59072 00:29:39.681 Removing: /var/run/dpdk/spdk_pid59088 00:29:39.681 Removing: /var/run/dpdk/spdk_pid59179 00:29:39.681 Removing: /var/run/dpdk/spdk_pid59195 00:29:39.681 Removing: /var/run/dpdk/spdk_pid59248 00:29:39.681 Removing: /var/run/dpdk/spdk_pid59266 00:29:39.681 Removing: /var/run/dpdk/spdk_pid59319 00:29:39.681 Removing: /var/run/dpdk/spdk_pid59337 00:29:39.681 Removing: /var/run/dpdk/spdk_pid59492 00:29:39.681 Removing: /var/run/dpdk/spdk_pid59528 00:29:39.681 Removing: /var/run/dpdk/spdk_pid59612 00:29:39.681 Removing: /var/run/dpdk/spdk_pid59784 00:29:39.681 Removing: /var/run/dpdk/spdk_pid59868 00:29:39.681 Removing: /var/run/dpdk/spdk_pid59904 00:29:39.681 Removing: /var/run/dpdk/spdk_pid60332 00:29:39.681 Removing: /var/run/dpdk/spdk_pid60430 00:29:39.681 Removing: /var/run/dpdk/spdk_pid60552 00:29:39.681 Removing: /var/run/dpdk/spdk_pid60605 00:29:39.681 Removing: /var/run/dpdk/spdk_pid60625 00:29:39.681 Removing: /var/run/dpdk/spdk_pid60709 00:29:39.681 Removing: /var/run/dpdk/spdk_pid61337 00:29:39.681 Removing: /var/run/dpdk/spdk_pid61374 00:29:39.681 Removing: /var/run/dpdk/spdk_pid61862 00:29:39.681 Removing: /var/run/dpdk/spdk_pid61960 00:29:39.681 Removing: /var/run/dpdk/spdk_pid62075 00:29:39.681 Removing: /var/run/dpdk/spdk_pid62128 00:29:39.681 Removing: /var/run/dpdk/spdk_pid62154 00:29:39.681 Removing: /var/run/dpdk/spdk_pid62179 00:29:39.681 Removing: /var/run/dpdk/spdk_pid64016 00:29:39.681 Removing: /var/run/dpdk/spdk_pid64154 00:29:39.681 Removing: /var/run/dpdk/spdk_pid64158 00:29:39.681 Removing: /var/run/dpdk/spdk_pid64181 00:29:39.681 Removing: /var/run/dpdk/spdk_pid64225 00:29:39.681 Removing: /var/run/dpdk/spdk_pid64229 00:29:39.681 Removing: /var/run/dpdk/spdk_pid64241 00:29:39.938 Removing: /var/run/dpdk/spdk_pid64286 00:29:39.938 Removing: /var/run/dpdk/spdk_pid64290 00:29:39.938 Removing: /var/run/dpdk/spdk_pid64302 00:29:39.938 Removing: /var/run/dpdk/spdk_pid64347 00:29:39.938 Removing: /var/run/dpdk/spdk_pid64351 00:29:39.938 Removing: /var/run/dpdk/spdk_pid64363 00:29:39.938 Removing: /var/run/dpdk/spdk_pid65732 00:29:39.938 Removing: /var/run/dpdk/spdk_pid65835 00:29:39.938 Removing: /var/run/dpdk/spdk_pid67240 00:29:39.938 Removing: /var/run/dpdk/spdk_pid68647 00:29:39.938 Removing: /var/run/dpdk/spdk_pid68733 00:29:39.938 Removing: /var/run/dpdk/spdk_pid68818 00:29:39.938 Removing: /var/run/dpdk/spdk_pid68894 00:29:39.938 Removing: /var/run/dpdk/spdk_pid68999 00:29:39.938 Removing: /var/run/dpdk/spdk_pid69073 00:29:39.938 Removing: /var/run/dpdk/spdk_pid69215 00:29:39.938 Removing: /var/run/dpdk/spdk_pid69568 00:29:39.938 Removing: /var/run/dpdk/spdk_pid69600 00:29:39.938 Removing: /var/run/dpdk/spdk_pid70040 00:29:39.938 Removing: /var/run/dpdk/spdk_pid70226 00:29:39.938 Removing: /var/run/dpdk/spdk_pid70322 00:29:39.938 Removing: /var/run/dpdk/spdk_pid70437 00:29:39.938 Removing: /var/run/dpdk/spdk_pid70485 00:29:39.938 Removing: /var/run/dpdk/spdk_pid70506 00:29:39.938 Removing: /var/run/dpdk/spdk_pid70815 00:29:39.938 Removing: /var/run/dpdk/spdk_pid70872 00:29:39.938 Removing: /var/run/dpdk/spdk_pid70951 00:29:39.938 Removing: /var/run/dpdk/spdk_pid71352 00:29:39.938 Removing: /var/run/dpdk/spdk_pid71497 00:29:39.939 Removing: /var/run/dpdk/spdk_pid72302 00:29:39.939 Removing: /var/run/dpdk/spdk_pid72430 00:29:39.939 Removing: /var/run/dpdk/spdk_pid72599 00:29:39.939 Removing: 
/var/run/dpdk/spdk_pid72692 00:29:39.939 Removing: /var/run/dpdk/spdk_pid73002 00:29:39.939 Removing: /var/run/dpdk/spdk_pid73251 00:29:39.939 Removing: /var/run/dpdk/spdk_pid73597 00:29:39.939 Removing: /var/run/dpdk/spdk_pid73784 00:29:39.939 Removing: /var/run/dpdk/spdk_pid73915 00:29:39.939 Removing: /var/run/dpdk/spdk_pid73962 00:29:39.939 Removing: /var/run/dpdk/spdk_pid74072 00:29:39.939 Removing: /var/run/dpdk/spdk_pid74103 00:29:39.939 Removing: /var/run/dpdk/spdk_pid74150 00:29:39.939 Removing: /var/run/dpdk/spdk_pid74408 00:29:39.939 Removing: /var/run/dpdk/spdk_pid74645 00:29:39.939 Removing: /var/run/dpdk/spdk_pid75274 00:29:39.939 Removing: /var/run/dpdk/spdk_pid76067 00:29:39.939 Removing: /var/run/dpdk/spdk_pid76344 00:29:39.939 Removing: /var/run/dpdk/spdk_pid76706 00:29:39.939 Removing: /var/run/dpdk/spdk_pid76837 00:29:39.939 Removing: /var/run/dpdk/spdk_pid76924 00:29:39.939 Removing: /var/run/dpdk/spdk_pid77303 00:29:39.939 Removing: /var/run/dpdk/spdk_pid77361 00:29:39.939 Removing: /var/run/dpdk/spdk_pid77658 00:29:39.939 Removing: /var/run/dpdk/spdk_pid77945 00:29:39.939 Removing: /var/run/dpdk/spdk_pid78296 00:29:39.939 Removing: /var/run/dpdk/spdk_pid78407 00:29:39.939 Removing: /var/run/dpdk/spdk_pid78454 00:29:39.939 Removing: /var/run/dpdk/spdk_pid78507 00:29:39.939 Removing: /var/run/dpdk/spdk_pid78566 00:29:39.939 Removing: /var/run/dpdk/spdk_pid78627 00:29:39.939 Removing: /var/run/dpdk/spdk_pid78805 00:29:39.939 Removing: /var/run/dpdk/spdk_pid78874 00:29:39.939 Removing: /var/run/dpdk/spdk_pid78936 00:29:39.939 Removing: /var/run/dpdk/spdk_pid79031 00:29:39.939 Removing: /var/run/dpdk/spdk_pid79065 00:29:39.939 Removing: /var/run/dpdk/spdk_pid79128 00:29:39.939 Removing: /var/run/dpdk/spdk_pid79272 00:29:39.939 Removing: /var/run/dpdk/spdk_pid79486 00:29:39.939 Removing: /var/run/dpdk/spdk_pid79738 00:29:39.939 Removing: /var/run/dpdk/spdk_pid80016 00:29:39.939 Removing: /var/run/dpdk/spdk_pid80391 00:29:39.939 Removing: /var/run/dpdk/spdk_pid81210 00:29:39.939 Clean 00:29:39.939 14:03:53 -- common/autotest_common.sh@1451 -- # return 0 00:29:39.939 14:03:53 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:29:39.939 14:03:53 -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:39.939 14:03:53 -- common/autotest_common.sh@10 -- # set +x 00:29:40.197 14:03:53 -- spdk/autotest.sh@387 -- # timing_exit autotest 00:29:40.197 14:03:53 -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:40.197 14:03:53 -- common/autotest_common.sh@10 -- # set +x 00:29:40.197 14:03:53 -- spdk/autotest.sh@388 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:29:40.197 14:03:53 -- spdk/autotest.sh@390 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:29:40.197 14:03:53 -- spdk/autotest.sh@390 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:29:40.197 14:03:53 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:29:40.197 14:03:53 -- spdk/autotest.sh@394 -- # hostname 00:29:40.197 14:03:53 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:29:40.197 geninfo: WARNING: invalid characters removed from testname! 
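The coverage pass that starts here captures per-run counters with lcov/geninfo and, in the records that follow, merges the base and test captures and strips third-party paths. A condensed view of the flow, taken from the lcov invocations traced in this log (paths shortened to $SPDK/$OUT stand-ins and the genhtml/geninfo rc flags omitted for brevity):

```bash
# Condensed from the lcov commands traced in this log; $SPDK and $OUT
# stand in for the long absolute repo and output paths used by autotest.
LCOV_OPTS="--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1"
lcov $LCOV_OPTS -q -c --no-external -d "$SPDK" -t "$(hostname)" \
     -o "$OUT/cov_test.info"                                     # capture (this step)
lcov $LCOV_OPTS -q -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" \
     -o "$OUT/cov_total.info"                                    # merge base + test
lcov $LCOV_OPTS -q -r "$OUT/cov_total.info" '*/dpdk/*' \
     -o "$OUT/cov_total.info"                                    # drop DPDK sources
```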
00:30:06.735 14:04:18 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:30:08.635 14:04:21 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:30:11.172 14:04:24 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:30:13.718 14:04:27 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:30:16.247 14:04:29 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:30:18.785 14:04:32 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:30:21.312 14:04:34 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:30:21.312 14:04:34 -- common/autotest_common.sh@1690 -- $ [[ y == y ]] 00:30:21.312 14:04:34 -- common/autotest_common.sh@1691 -- $ lcov --version 00:30:21.312 14:04:34 -- common/autotest_common.sh@1691 -- $ awk '{print $NF}' 00:30:21.312 14:04:34 -- common/autotest_common.sh@1691 -- $ lt 1.15 2 00:30:21.312 14:04:34 -- scripts/common.sh@373 -- $ cmp_versions 1.15 '<' 2 00:30:21.312 14:04:34 -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:30:21.312 14:04:34 -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:30:21.312 14:04:34 -- scripts/common.sh@336 -- $ IFS=.-: 00:30:21.312 14:04:34 -- scripts/common.sh@336 -- $ read -ra ver1 00:30:21.312 14:04:34 -- scripts/common.sh@337 -- $ IFS=.-: 00:30:21.312 14:04:34 -- scripts/common.sh@337 -- $ read -ra ver2 00:30:21.312 14:04:34 -- scripts/common.sh@338 -- $ local 'op=<' 00:30:21.312 14:04:34 -- scripts/common.sh@340 -- $ ver1_l=2 00:30:21.312 14:04:34 -- scripts/common.sh@341 -- $ ver2_l=1 00:30:21.312 14:04:34 -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 
v 00:30:21.312 14:04:34 -- scripts/common.sh@344 -- $ case "$op" in 00:30:21.312 14:04:34 -- scripts/common.sh@345 -- $ : 1 00:30:21.312 14:04:34 -- scripts/common.sh@364 -- $ (( v = 0 )) 00:30:21.312 14:04:34 -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:21.312 14:04:34 -- scripts/common.sh@365 -- $ decimal 1 00:30:21.312 14:04:34 -- scripts/common.sh@353 -- $ local d=1 00:30:21.312 14:04:34 -- scripts/common.sh@354 -- $ [[ 1 =~ ^[0-9]+$ ]] 00:30:21.312 14:04:34 -- scripts/common.sh@355 -- $ echo 1 00:30:21.312 14:04:34 -- scripts/common.sh@365 -- $ ver1[v]=1 00:30:21.312 14:04:34 -- scripts/common.sh@366 -- $ decimal 2 00:30:21.312 14:04:34 -- scripts/common.sh@353 -- $ local d=2 00:30:21.312 14:04:34 -- scripts/common.sh@354 -- $ [[ 2 =~ ^[0-9]+$ ]] 00:30:21.312 14:04:34 -- scripts/common.sh@355 -- $ echo 2 00:30:21.312 14:04:34 -- scripts/common.sh@366 -- $ ver2[v]=2 00:30:21.312 14:04:34 -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:30:21.312 14:04:34 -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:30:21.312 14:04:34 -- scripts/common.sh@368 -- $ return 0 00:30:21.312 14:04:34 -- common/autotest_common.sh@1692 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:21.312 14:04:34 -- common/autotest_common.sh@1704 -- $ export 'LCOV_OPTS= 00:30:21.312 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:21.312 --rc genhtml_branch_coverage=1 00:30:21.312 --rc genhtml_function_coverage=1 00:30:21.312 --rc genhtml_legend=1 00:30:21.312 --rc geninfo_all_blocks=1 00:30:21.312 --rc geninfo_unexecuted_blocks=1 00:30:21.312 00:30:21.312 ' 00:30:21.313 14:04:34 -- common/autotest_common.sh@1704 -- $ LCOV_OPTS=' 00:30:21.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:21.313 --rc genhtml_branch_coverage=1 00:30:21.313 --rc genhtml_function_coverage=1 00:30:21.313 --rc genhtml_legend=1 00:30:21.313 --rc geninfo_all_blocks=1 00:30:21.313 --rc geninfo_unexecuted_blocks=1 00:30:21.313 00:30:21.313 ' 00:30:21.313 14:04:34 -- common/autotest_common.sh@1705 -- $ export 'LCOV=lcov 00:30:21.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:21.313 --rc genhtml_branch_coverage=1 00:30:21.313 --rc genhtml_function_coverage=1 00:30:21.313 --rc genhtml_legend=1 00:30:21.313 --rc geninfo_all_blocks=1 00:30:21.313 --rc geninfo_unexecuted_blocks=1 00:30:21.313 00:30:21.313 ' 00:30:21.313 14:04:34 -- common/autotest_common.sh@1705 -- $ LCOV='lcov 00:30:21.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:21.313 --rc genhtml_branch_coverage=1 00:30:21.313 --rc genhtml_function_coverage=1 00:30:21.313 --rc genhtml_legend=1 00:30:21.313 --rc geninfo_all_blocks=1 00:30:21.313 --rc geninfo_unexecuted_blocks=1 00:30:21.313 00:30:21.313 ' 00:30:21.313 14:04:34 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:30:21.313 14:04:34 -- scripts/common.sh@15 -- $ shopt -s extglob 00:30:21.313 14:04:34 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:30:21.313 14:04:34 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:21.313 14:04:34 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:21.313 14:04:34 -- paths/export.sh@2 -- $ 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:21.313 14:04:34 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:21.313 14:04:34 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:21.313 14:04:34 -- paths/export.sh@5 -- $ export PATH 00:30:21.313 14:04:34 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:21.313 14:04:34 -- common/autobuild_common.sh@485 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:30:21.313 14:04:34 -- common/autobuild_common.sh@486 -- $ date +%s 00:30:21.313 14:04:34 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1729001074.XXXXXX 00:30:21.313 14:04:34 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1729001074.aIHHtG 00:30:21.313 14:04:34 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:30:21.313 14:04:34 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']' 00:30:21.313 14:04:34 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:30:21.313 14:04:34 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:30:21.313 14:04:34 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:30:21.313 14:04:34 -- common/autobuild_common.sh@502 -- $ get_config_params 00:30:21.313 14:04:34 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:30:21.313 14:04:34 -- common/autotest_common.sh@10 -- $ set +x 00:30:21.313 14:04:34 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme' 00:30:21.313 14:04:34 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:30:21.313 14:04:34 -- pm/common@17 -- $ local monitor 00:30:21.313 14:04:34 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:21.313 14:04:34 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 
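The autotest.sh trace above (spdk/autotest.sh@395-403) post-processes coverage in two stages: first merge the pre-test and post-test captures into one tracefile, then repeatedly filter third-party and tooling paths out of the merged result. A minimal sketch of the same sequence, assuming lcov 1.x option spelling (which the version probe traced after it confirms for this run) and reusing the log's paths:

    #!/usr/bin/env bash
    # Sketch of the coverage post-processing traced above; paths mirror the log.
    out=/home/vagrant/spdk_repo/spdk/../output
    LCOV_OPTS=(--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
               --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1
               --rc genhtml_legend=1 --rc geninfo_all_blocks=1
               --rc geninfo_unexecuted_blocks=1)

    # Stage 1: merge the baseline and post-test captures into one tracefile.
    lcov "${LCOV_OPTS[@]}" -q -a "$out/cov_base.info" -a "$out/cov_test.info" \
         -o "$out/cov_total.info"

    # Stage 2: strip code the project does not own, rewriting the tracefile
    # in place. (In the log, the '/usr/*' pass additionally carries
    # --ignore-errors unused, since the pattern may match nothing.)
    for pat in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' \
               '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
      lcov "${LCOV_OPTS[@]}" -q -r "$out/cov_total.info" "$pat" \
           -o "$out/cov_total.info"
    done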
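The scripts/common.sh trace that follows the lcov runs is a field-by-field dotted-version comparison: lt 1.15 2 splits both versions on '.', '-' and ':', treats missing fields as zero, and returns as soon as a field differs. Here it concludes that lcov 1.15 predates 2.x, so the 1.x '--rc lcov_*' spelling is exported into LCOV_OPTS. A simplified reconstruction of that check (not a verbatim copy of the helper; the 2.x branch reflects lcov 2.0's renamed rc knobs and is an assumption):

    #!/usr/bin/env bash
    # Simplified version comparison in the spirit of scripts/common.sh lt().
    ver_lt() {  # ver_lt 1.15 2  -> success when $1 < $2
      local IFS=.-: v
      local -a ver1 ver2
      read -ra ver1 <<< "$1"
      read -ra ver2 <<< "$2"
      for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
        # Missing fields compare as 0, so "2" behaves like "2.0".
        ((${ver1[v]:-0} > ${ver2[v]:-0})) && return 1
        ((${ver1[v]:-0} < ${ver2[v]:-0})) && return 0
      done
      return 1  # equal is not less-than
    }

    # The log's decision: 1.15 < 2, so keep the lcov 1.x rc spelling.
    if ver_lt "$(lcov --version | awk '{print $NF}')" 2; then
      lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
    else
      lcov_rc_opt='--rc branch_coverage=1 --rc function_coverage=1'
    fi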
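The very long PATH values above come from paths/export.sh, which prepends each tool directory every time it is sourced without checking for duplicates; that is why the golangci, go and protoc directories each appear twice. Duplication is harmless, since lookup stops at the first match, but a guarded prepend like the following keeps the variable readable. This is an alternative sketch, not what SPDK's script does:

    # Hypothetical dedup-safe prepend; paths/export.sh itself just prepends.
    path_prepend() {
      case ":$PATH:" in
        *":$1:"*) ;;            # already on PATH, leave it alone
        *) PATH=$1:$PATH ;;
      esac
    }
    path_prepend /opt/golangci/1.54.2/bin
    path_prepend /opt/go/1.21.1/bin
    path_prepend /opt/protoc/21.7/bin
    export PATH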
00:30:21.313 14:04:34 -- pm/common@25 -- $ sleep 1 00:30:21.313 14:04:34 -- pm/common@21 -- $ date +%s 00:30:21.313 14:04:34 -- pm/common@21 -- $ date +%s 00:30:21.313 14:04:34 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1729001074 00:30:21.313 14:04:34 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1729001074 00:30:21.313 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1729001074_collect-cpu-load.pm.log 00:30:21.313 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1729001074_collect-vmstat.pm.log 00:30:22.249 14:04:35 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:30:22.249 14:04:35 -- spdk/autopackage.sh@10 -- $ [[ 0 -eq 1 ]] 00:30:22.249 14:04:35 -- spdk/autopackage.sh@14 -- $ timing_finish 00:30:22.249 14:04:35 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:30:22.249 14:04:35 -- common/autotest_common.sh@737 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:30:22.249 14:04:35 -- common/autotest_common.sh@740 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:30:22.249 14:04:35 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:30:22.249 14:04:35 -- pm/common@29 -- $ signal_monitor_resources TERM 00:30:22.249 14:04:35 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:30:22.249 14:04:35 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:22.249 14:04:35 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:30:22.249 14:04:35 -- pm/common@44 -- $ pid=82913 00:30:22.249 14:04:35 -- pm/common@50 -- $ kill -TERM 82913 00:30:22.249 14:04:35 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:22.249 14:04:35 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:30:22.249 14:04:35 -- pm/common@44 -- $ pid=82914 00:30:22.249 14:04:35 -- pm/common@50 -- $ kill -TERM 82914 00:30:22.249 + [[ -n 5043 ]] 00:30:22.249 + sudo kill 5043 00:30:22.259 [Pipeline] } 00:30:22.275 [Pipeline] // timeout 00:30:22.280 [Pipeline] } 00:30:22.294 [Pipeline] // stage 00:30:22.299 [Pipeline] } 00:30:22.313 [Pipeline] // catchError 00:30:22.322 [Pipeline] stage 00:30:22.324 [Pipeline] { (Stop VM) 00:30:22.336 [Pipeline] sh 00:30:22.669 + vagrant halt 00:30:25.206 ==> default: Halting domain... 00:30:29.404 [Pipeline] sh 00:30:29.680 + vagrant destroy -f 00:30:31.627 ==> default: Removing domain... 
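The teardown above is driven by pid files: each resource collector started earlier wrote its PID under output/power next to its .pm.log, and stop_monitor_resources (pm/common) signals whatever those files still name, here PIDs 82913 and 82914. A minimal sketch of the pattern, with the directory and loop contents as illustrative assumptions:

    #!/usr/bin/env bash
    # Sketch of pid-file driven monitor shutdown; paths are illustrative.
    power_dir=/home/vagrant/spdk_repo/spdk/../output/power

    stop_monitor_resources() {
      local monitor pid
      for monitor in collect-cpu-load collect-vmstat; do
        [[ -e $power_dir/$monitor.pid ]] || continue
        pid=$(<"$power_dir/$monitor.pid")
        # TERM gives the collector a chance to flush its .pm.log on exit.
        kill -TERM "$pid" 2>/dev/null || true
      done
    }
    trap stop_monitor_resources EXIT   # mirrors autobuild_common.sh@505 above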
00:30:32.211 [Pipeline] sh 00:30:32.496 + mv output /var/jenkins/workspace/nvme-vg-autotest/output 00:30:32.507 [Pipeline] } 00:30:32.522 [Pipeline] // stage 00:30:32.528 [Pipeline] } 00:30:32.542 [Pipeline] // dir 00:30:32.547 [Pipeline] } 00:30:32.563 [Pipeline] // wrap 00:30:32.569 [Pipeline] } 00:30:32.583 [Pipeline] // catchError 00:30:32.593 [Pipeline] stage 00:30:32.596 [Pipeline] { (Epilogue) 00:30:32.609 [Pipeline] sh 00:30:32.894 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:30:38.171 [Pipeline] catchError 00:30:38.173 [Pipeline] { 00:30:38.185 [Pipeline] sh 00:30:38.468 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:30:38.468 Artifacts sizes are good 00:30:38.478 [Pipeline] } 00:30:38.491 [Pipeline] // catchError 00:30:38.501 [Pipeline] archiveArtifacts 00:30:38.508 Archiving artifacts 00:30:38.619 [Pipeline] cleanWs 00:30:38.633 [WS-CLEANUP] Deleting project workspace... 00:30:38.633 [WS-CLEANUP] Deferred wipeout is used... 00:30:38.669 [WS-CLEANUP] done 00:30:38.671 [Pipeline] } 00:30:38.686 [Pipeline] // stage 00:30:38.690 [Pipeline] } 00:30:38.702 [Pipeline] // node 00:30:38.707 [Pipeline] End of Pipeline 00:30:38.742 Finished: SUCCESS
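In the epilogue above, check_artifacts_size.sh gates archiving on the size of the compressed output tree ("Artifacts sizes are good"). A sketch of such a gate follows; the 2 GiB budget and the directory argument are assumptions for illustration, not values taken from the script:

    #!/usr/bin/env bash
    # Hypothetical artifact-size gate in the spirit of check_artifacts_size.sh.
    artifacts_dir=${1:-output}
    limit_kb=$((2 * 1024 * 1024))   # assumed 2 GiB budget

    used_kb=$(du -sk "$artifacts_dir" | awk '{print $1}')
    if ((used_kb > limit_kb)); then
      echo "Artifacts too large: ${used_kb}KB > ${limit_kb}KB" >&2
      exit 1
    fi
    echo "Artifacts sizes are good"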